| Column | Type |
| --- | --- |
| modelId | string |
| author | string |
| last_modified | timestamp[us, tz=UTC] |
| downloads | int64 |
| likes | int64 |
| library_name | string |
| tags | sequence |
| pipeline_tag | string |
| createdAt | timestamp[us, tz=UTC] |
| card | string |
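The records below follow this schema, one model per record. As a minimal sketch of how such a dump can be inspected (the Parquet filename is a placeholder assumption, not part of the source data), the columns can be loaded and filtered with pandas:

```python
import pandas as pd

# Hypothetical export of the records below to Parquet; the filename is an assumption.
df = pd.read_parquet("models.parquet")

# Example: keep only text-generation models and list the most-downloaded ones.
text_gen = df[df["pipeline_tag"] == "text-generation"]
print(text_gen.sort_values("downloads", ascending=False)[["modelId", "downloads", "likes"]].head())
```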
huggingartists/viktor-tsoi
huggingartists
2021-09-15T12:26:09Z
4
1
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/viktor-tsoi", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/viktor-tsoi tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/f9d03b2a6c45897724e74fab6a1aa86c.500x500x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Виктор Цой (Viktor Tsoi)</div> <a href="https://genius.com/artists/viktor-tsoi"> <div style="text-align: center; font-size: 14px;">@viktor-tsoi</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Виктор Цой (Viktor Tsoi). The dataset is available [here](https://huggingface.co/datasets/huggingartists/viktor-tsoi) and can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/viktor-tsoi") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1uufz4th/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2), which is fine-tuned on Виктор Цой (Viktor Tsoi)'s lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/8mogk3d7) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/8mogk3d7/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/viktor-tsoi') generator("I am", num_return_sequences=5) ``` Or with the Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/viktor-tsoi") model = AutoModelWithLMHead.from_pretrained("huggingartists/viktor-tsoi") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingartists/mikhail-gorshenev
huggingartists
2021-09-15T12:07:32Z
4
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/mikhail-gorshenev", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/mikhail-gorshenev tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/713c41590244f597dd6484bb61eacc5a.413x413x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Михаил Горшенев (Mikhail Gorshenev)</div> <a href="https://genius.com/artists/mikhail-gorshenev"> <div style="text-align: center; font-size: 14px;">@mikhail-gorshenev</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Михаил Горшенев (Mikhail Gorshenev). The dataset is available [here](https://huggingface.co/datasets/huggingartists/mikhail-gorshenev) and can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/mikhail-gorshenev") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/3h9endcz/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2), which is fine-tuned on Михаил Горшенев (Mikhail Gorshenev)'s lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1kdp29bz) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1kdp29bz/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/mikhail-gorshenev') generator("I am", num_return_sequences=5) ``` Or with the Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/mikhail-gorshenev") model = AutoModelWithLMHead.from_pretrained("huggingartists/mikhail-gorshenev") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingartists/till-lindemann
huggingartists
2021-09-15T11:46:53Z
4
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/till-lindemann", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/till-lindemann tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/48d6ca7ca17a9dfc9ad3034e71533a89.1000x1000x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Till Lindemann</div> <a href="https://genius.com/artists/till-lindemann"> <div style="text-align: center; font-size: 14px;">@till-lindemann</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Till Lindemann. The dataset is available [here](https://huggingface.co/datasets/huggingartists/till-lindemann) and can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/till-lindemann") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/2xh6fyqt/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2), which is fine-tuned on Till Lindemann's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/32ohf092) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/32ohf092/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/till-lindemann') generator("I am", num_return_sequences=5) ``` Or with the Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/till-lindemann") model = AutoModelWithLMHead.from_pretrained("huggingartists/till-lindemann") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingartists/face
huggingartists
2021-09-15T11:08:11Z
5
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/face", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/face tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/1dcb4e1dc4242207c27fe5cd0d4090e8.1000x1000x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">FACE</div> <a href="https://genius.com/artists/face"> <div style="text-align: center; font-size: 14px;">@face</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from FACE. The dataset is available [here](https://huggingface.co/datasets/huggingartists/face) and can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/face") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/xtozoqtm/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2), which is fine-tuned on FACE's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/knkqp5iy) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/knkqp5iy/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/face') generator("I am", num_return_sequences=5) ``` Or with the Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/face") model = AutoModelWithLMHead.from_pretrained("huggingartists/face") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the artist's lyrics further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingtweets/lilnasx
huggingtweets
2021-09-14T23:54:26Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/lilnasx/1631663662799/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1430901239110258696/1P0QZ5_7_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">MONTERO 🦋</div> <div style="text-align: center; font-size: 14px;">@lilnasx</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from MONTERO 🦋. | Data | MONTERO 🦋 | | --- | --- | | Tweets downloaded | 3169 | | Retweets | 883 | | Short tweets | 796 | | Tweets kept | 1490 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/z4oke017/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @lilnasx's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3flqsl4t) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3flqsl4t/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/lilnasx') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/cutebunnys50
huggingtweets
2021-09-14T23:47:15Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/cutebunnys50/1631663231129/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1385023548935258114/UuMXQpjI_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Bunny ✊🏽✊🏾✊🏿 🏳️‍🌈</div> <div style="text-align: center; font-size: 14px;">@cutebunnys50</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Bunny ✊🏽✊🏾✊🏿 🏳️‍🌈. | Data | Bunny ✊🏽✊🏾✊🏿 🏳️‍🌈 | | --- | --- | | Tweets downloaded | 3208 | | Retweets | 2575 | | Short tweets | 16 | | Tweets kept | 617 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2t0h4kcz/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cutebunnys50's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2ymfrlb8) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2ymfrlb8/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/cutebunnys50') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/foodnetwork
huggingtweets
2021-09-14T23:41:32Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/foodnetwork/1631662887881/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1395089186538115072/oehHqb54_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Food Network</div> <div style="text-align: center; font-size: 14px;">@foodnetwork</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Food Network. | Data | Food Network | | --- | --- | | Tweets downloaded | 3237 | | Retweets | 938 | | Short tweets | 49 | | Tweets kept | 2250 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2x1lok4q/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @foodnetwork's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2yjxdjcm) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2yjxdjcm/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/foodnetwork') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/fluffyguy
huggingtweets
2021-09-14T23:40:29Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/fluffyguy/1631662825404/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1346711262869086210/KPshm_gK_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">G a b r i e l - I g l e s i a s</div> <div style="text-align: center; font-size: 14px;">@fluffyguy</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from G a b r i e l - I g l e s i a s. | Data | G a b r i e l - I g l e s i a s | | --- | --- | | Tweets downloaded | 3246 | | Retweets | 264 | | Short tweets | 132 | | Tweets kept | 2850 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/24pz59rj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @fluffyguy's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/36h0hs6l) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/36h0hs6l/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/fluffyguy') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/lizzo
huggingtweets
2021-09-14T23:39:31Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/lizzo/1631662767078/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1422227243498020865/sMYfk77e_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">ALL THE RUMORS ARE TRUE</div> <div style="text-align: center; font-size: 14px;">@lizzo</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from ALL THE RUMORS ARE TRUE. | Data | ALL THE RUMORS ARE TRUE | | --- | --- | | Tweets downloaded | 3095 | | Retweets | 1412 | | Short tweets | 420 | | Tweets kept | 1263 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1iacenbu/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @lizzo's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1erzu9fc) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1erzu9fc/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/lizzo') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
macedonizer/gr-gpt2
macedonizer
2021-09-14T16:07:35Z
6
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "gr", "dataset:wiki-gr", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: - gr thumbnail: https://huggingface.co/macedonizer/gr-roberta-base/lets-talk-about-nlp-gr.jpg license: apache-2.0 datasets: - wiki-gr --- # gr-gpt2 Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large A model pretrained on Greek using a causal language modeling (CLM) objective. The underlying GPT-2 architecture was introduced in [this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and first released at [this page](https://openai.com/blog/better-language-models/). ## Model description gr-gpt2 is a transformers model pretrained on a very large corpus of Greek data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences: inputs are sequences of continuous text of a certain length, and the targets are the same sequences shifted one token (word or piece of a word) to the right. Internally, the model uses a masking mechanism to make sure the predictions for token `i` only use the inputs from `1` to `i` and not the future tokens. This way, the model learns an inner representation of the Greek language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for, however, which is generating texts from a prompt. ### How to use Here is how to use this model to generate text from a prompt in PyTorch:
```python
import random

from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained('macedonizer/gr-gpt2')
model = AutoModelWithLMHead.from_pretrained('macedonizer/gr-gpt2')

input_text = 'Η Αθήνα είναι'

if len(input_text) == 0:
    # No prompt given: start generation from a random BOS token.
    encoded_input = tokenizer(input_text, return_tensors="pt")
    output = model.generate(
        bos_token_id=random.randint(1, 50000),
        do_sample=True,
        top_k=50,
        max_length=1024,
        top_p=0.95,
        num_return_sequences=1,
    )
else:
    # Prompt given: condition generation on the encoded input text.
    encoded_input = tokenizer(input_text, return_tensors="pt")
    output = model.generate(
        **encoded_input,
        bos_token_id=random.randint(1, 50000),
        do_sample=True,
        top_k=50,
        max_length=1024,
        top_p=0.95,
        num_return_sequences=1,
    )

decoded_output = []
for sample in output:
    decoded_output.append(tokenizer.decode(sample, skip_special_tokens=True))

print(decoded_output)
```
CAMeL-Lab/bert-base-arabic-camelbert-mix
CAMeL-Lab
2021-09-14T14:34:32Z
3,211
15
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "Arabic", "Dialect", "Egyptian", "Gulf", "Levantine", "Classical Arabic", "MSA", "Modern Standard Arabic", "ar", "arxiv:2103.06678", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- language: - ar license: apache-2.0 tags: - Arabic - Dialect - Egyptian - Gulf - Levantine - Classical Arabic - MSA - Modern Standard Arabic widget: - text: "الهدف من الحياة هو [MASK] ." --- # CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks ## Model description **CAMeLBERT** is a collection of BERT models pre-trained on Arabic texts with different sizes and variants. We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three. We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth). The details are described in the paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* This model card describes **CAMeLBERT-Mix** (`bert-base-arabic-camelbert-mix`), a model pre-trained on a mixture of these variants: MSA, DA, and CA. ||Model|Variant|Size|#Word| |-|-|:-:|-:|-:| |✔|`bert-base-arabic-camelbert-mix`|CA,DA,MSA|167GB|17.3B| ||`bert-base-arabic-camelbert-ca`|CA|6GB|847M| ||`bert-base-arabic-camelbert-da`|DA|54GB|5.8B| ||`bert-base-arabic-camelbert-msa`|MSA|107GB|12.6B| ||`bert-base-arabic-camelbert-msa-half`|MSA|53GB|6.3B| ||`bert-base-arabic-camelbert-msa-quarter`|MSA|27GB|3.1B| ||`bert-base-arabic-camelbert-msa-eighth`|MSA|14GB|1.6B| ||`bert-base-arabic-camelbert-msa-sixteenth`|MSA|6GB|746M| ## Intended uses You can use the released model for either masked language modeling or next sentence prediction. However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification. We release our fine-tuninig code [here](https://github.com/CAMeL-Lab/CAMeLBERT). #### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='CAMeL-Lab/bert-base-arabic-camelbert-mix') >>> unmasker("الهدف من الحياة هو [MASK] .") [{'sequence': '[CLS] الهدف من الحياة هو النجاح. [SEP]', 'score': 0.10861027985811234, 'token': 6232, 'token_str': 'النجاح'}, {'sequence': '[CLS] الهدف من الحياة هو.. [SEP]', 'score': 0.07626965641975403, 'token': 18, 'token_str': '.'}, {'sequence': '[CLS] الهدف من الحياة هو الحياة. [SEP]', 'score': 0.05131986364722252, 'token': 3696, 'token_str': 'الحياة'}, {'sequence': '[CLS] الهدف من الحياة هو الموت. [SEP]', 'score': 0.03734956309199333, 'token': 4295, 'token_str': 'الموت'}, {'sequence': '[CLS] الهدف من الحياة هو العمل. [SEP]', 'score': 0.027189988642930984, 'token': 2854, 'token_str': 'العمل'}] ``` *Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually. Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-mix') model = AutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-mix') text = "مرحبا يا عالم." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import AutoTokenizer, TFAutoModel tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-mix') model = TFAutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-mix') text = "مرحبا يا عالم." 
encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Training data - MSA (Modern Standard Arabic) - [The Arabic Gigaword Fifth Edition](https://catalog.ldc.upenn.edu/LDC2011T11) - [Abu El-Khair Corpus](http://www.abuelkhair.net/index.php/en/arabic/abu-el-khair-corpus) - [OSIAN corpus](https://vlo.clarin.eu/search;jsessionid=31066390B2C9E8C6304845BA79869AC1?1&q=osian) - [Arabic Wikipedia](https://archive.org/details/arwiki-20190201) - The unshuffled version of the Arabic [OSCAR corpus](https://oscar-corpus.com/) - DA (dialectal Arabic) - A collection of dialectal Arabic data described in [our paper](https://arxiv.org/abs/2103.06678). - CA (classical Arabic) - [OpenITI (Version 2020.1.2)](https://zenodo.org/record/3891466#.YEX4-F0zbzc) ## Training procedure We use [the original implementation](https://github.com/google-research/bert) released by Google for pre-training. We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified. ### Preprocessing - After extracting the raw text from each corpus, we apply the following pre-processing. - We first remove invalid characters and normalize white spaces using the utilities provided by [the original BERT implementation](https://github.com/google-research/bert/blob/eedf5716ce1268e56f0a50264a88cafad334ac61/tokenization.py#L286-L297). - We also remove lines without any Arabic characters. - We then remove diacritics and kashida using [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools). - Finally, we split each line into sentences with a heuristics-based sentence segmenter. - We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using [HuggingFace's tokenizers](https://github.com/huggingface/tokenizers). - We do not lowercase letters nor strip accents. ### Pre-training - The model was trained on a single cloud TPU (`v3-8`) for one million steps in total. - The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256. - The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. - We use whole word masking and a duplicate factor of 10. - We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens. - We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1. - The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ## Evaluation results - We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification. - We fine-tune and evaluate the models using 12 dataset. - We used Hugging Face's transformers to fine-tune our CAMeLBERT models. - We used transformers `v3.1.0` along with PyTorch `v1.5.1`. - The fine-tuning was done by adding a fully connected linear layer to the last hidden state. - We use \\(F_{1}\\) score as a metric for all tasks. - Code used for fine-tuning is available [here](https://github.com/CAMeL-Lab/CAMeLBERT). 
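A minimal sketch of the fine-tuning setup described above (a linear classification layer added on top of the pre-trained model) is shown below. This is an illustrative sketch only, not the released fine-tuning code linked above; the example sentence, the label value, and `num_labels` are placeholder assumptions for a sequence-level task such as sentiment analysis.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "CAMeL-Lab/bert-base-arabic-camelbert-mix"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# AutoModelForSequenceClassification attaches a randomly initialized linear
# classification layer on top of the pre-trained encoder's output.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# A single labeled example (placeholder label) to show the fine-tuning signal.
inputs = tokenizer("مرحبا يا عالم.", return_tensors="pt")
labels = torch.tensor([1])

outputs = model(**inputs, labels=labels)
print(outputs.loss)    # cross-entropy loss minimized during fine-tuning
print(outputs.logits)  # class scores produced by the new linear layer
```

For the token-level tasks (NER and POS tagging), the analogous head sits on top of each token's last hidden state, e.g. via `AutoModelForTokenClassification`.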
### Results | Task | Dataset | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 | | -------------------- | --------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- | | NER | ANERcorp | MSA | 80.8% | 67.9% | 74.1% | 82.4% | 82.0% | 82.1% | 82.6% | 80.8% | | POS | PATB (MSA) | MSA | 98.1% | 97.8% | 97.7% | 98.3% | 98.2% | 98.3% | 98.2% | 98.2% | | | ARZTB (EGY) | DA | 93.6% | 92.3% | 92.7% | 93.6% | 93.6% | 93.7% | 93.6% | 93.6% | | | Gumar (GLF) | DA | 97.3% | 97.7% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% | | SA | ASTD | MSA | 76.3% | 69.4% | 74.6% | 76.9% | 76.0% | 76.8% | 76.7% | 75.3% | | | ArSAS | MSA | 92.7% | 89.4% | 91.8% | 93.0% | 92.6% | 92.5% | 92.5% | 92.3% | | | SemEval | MSA | 69.0% | 58.5% | 68.4% | 72.1% | 70.7% | 72.8% | 71.6% | 71.2% | | DID | MADAR-26 | DA | 62.9% | 61.9% | 61.8% | 62.6% | 62.0% | 62.8% | 62.0% | 62.2% | | | MADAR-6 | DA | 92.5% | 91.5% | 92.2% | 91.9% | 91.8% | 92.2% | 92.1% | 92.0% | | | MADAR-Twitter-5 | MSA | 75.7% | 71.4% | 74.2% | 77.6% | 78.5% | 77.3% | 77.7% | 76.2% | | | NADI | DA | 24.7% | 17.3% | 20.1% | 24.9% | 24.6% | 24.6% | 24.9% | 23.8% | | Poetry | APCD | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% | ### Results (Average) | | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 | | -------------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- | | Variant-wise-average<sup>[[1]](#footnote-1)</sup> | MSA | 82.1% | 75.7% | 80.1% | 83.4% | 83.0% | 83.3% | 83.2% | 82.3% | | | DA | 74.4% | 72.1% | 72.9% | 74.2% | 74.0% | 74.3% | 74.1% | 73.9% | | | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% | | Macro-Average | ALL | 78.7% | 74.7% | 77.1% | 79.2% | 79.0% | 79.2% | 79.1% | 78.6% | <a name="footnote-1">[1]</a>: Variant-wise-average refers to average over a group of tasks in the same language variant. ## Acknowledgements This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC). ## Citation ```bibtex @inproceedings{inoue-etal-2021-interplay, title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models", author = "Inoue, Go and Alhafni, Bashar and Baimukan, Nurpeiis and Bouamor, Houda and Habash, Nizar", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Online)", publisher = "Association for Computational Linguistics", abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.", } ```
CAMeL-Lab/bert-base-arabic-camelbert-msa-half
CAMeL-Lab
2021-09-14T14:34:06Z
7
2
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "ar", "arxiv:2103.06678", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- language: - ar license: apache-2.0 widget: - text: "الهدف من الحياة هو [MASK] ." --- # CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks ## Model description **CAMeLBERT** is a collection of BERT models pre-trained on Arabic texts with different sizes and variants. We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three. We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth). The details are described in the paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* This model card describes **CAMeLBERT-MSA-half** (`bert-base-arabic-camelbert-msa-half`), a model pre-trained on a half of the full MSA dataset. ||Model|Variant|Size|#Word| |-|-|:-:|-:|-:| ||`bert-base-arabic-camelbert-mix`|CA,DA,MSA|167GB|17.3B| ||`bert-base-arabic-camelbert-ca`|CA|6GB|847M| ||`bert-base-arabic-camelbert-da`|DA|54GB|5.8B| ||`bert-base-arabic-camelbert-msa`|MSA|107GB|12.6B| |✔|`bert-base-arabic-camelbert-msa-half`|MSA|53GB|6.3B| ||`bert-base-arabic-camelbert-msa-quarter`|MSA|27GB|3.1B| ||`bert-base-arabic-camelbert-msa-eighth`|MSA|14GB|1.6B| ||`bert-base-arabic-camelbert-msa-sixteenth`|MSA|6GB|746M| ## Intended uses You can use the released model for either masked language modeling or next sentence prediction. However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification. We release our fine-tuninig code [here](https://github.com/CAMeL-Lab/CAMeLBERT). #### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-half') >>> unmasker("الهدف من الحياة هو [MASK] .") [{'sequence': '[CLS] الهدف من الحياة هو الحياة. [SEP]', 'score': 0.09132730215787888, 'token': 3696, 'token_str': 'الحياة'}, {'sequence': '[CLS] الهدف من الحياة هو.. [SEP]', 'score': 0.08282623440027237, 'token': 18, 'token_str': '.'}, {'sequence': '[CLS] الهدف من الحياة هو البقاء. [SEP]', 'score': 0.04031957685947418, 'token': 9331, 'token_str': 'البقاء'}, {'sequence': '[CLS] الهدف من الحياة هو النجاح. [SEP]', 'score': 0.032019514590501785, 'token': 6232, 'token_str': 'النجاح'}, {'sequence': '[CLS] الهدف من الحياة هو الحب. [SEP]', 'score': 0.028731243684887886, 'token': 3088, 'token_str': 'الحب'}] ``` *Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually. Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-half') model = AutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-half') text = "مرحبا يا عالم." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import AutoTokenizer, TFAutoModel tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-half') model = TFAutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-half') text = "مرحبا يا عالم." 
encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Training data - MSA (Modern Standard Arabic) - [The Arabic Gigaword Fifth Edition](https://catalog.ldc.upenn.edu/LDC2011T11) - [Abu El-Khair Corpus](http://www.abuelkhair.net/index.php/en/arabic/abu-el-khair-corpus) - [OSIAN corpus](https://vlo.clarin.eu/search;jsessionid=31066390B2C9E8C6304845BA79869AC1?1&q=osian) - [Arabic Wikipedia](https://archive.org/details/arwiki-20190201) - The unshuffled version of the Arabic [OSCAR corpus](https://oscar-corpus.com/) ## Training procedure We use [the original implementation](https://github.com/google-research/bert) released by Google for pre-training. We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified. ### Preprocessing - After extracting the raw text from each corpus, we apply the following pre-processing. - We first remove invalid characters and normalize white spaces using the utilities provided by [the original BERT implementation](https://github.com/google-research/bert/blob/eedf5716ce1268e56f0a50264a88cafad334ac61/tokenization.py#L286-L297). - We also remove lines without any Arabic characters. - We then remove diacritics and kashida using [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools). - Finally, we split each line into sentences with a heuristics-based sentence segmenter. - We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using [HuggingFace's tokenizers](https://github.com/huggingface/tokenizers). - We do not lowercase letters nor strip accents. ### Pre-training - The model was trained on a single cloud TPU (`v3-8`) for one million steps in total. - The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256. - The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. - We use whole word masking and a duplicate factor of 10. - We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens. - We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1. - The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ## Evaluation results - We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification. - We fine-tune and evaluate the models using 12 dataset. - We used Hugging Face's transformers to fine-tune our CAMeLBERT models. - We used transformers `v3.1.0` along with PyTorch `v1.5.1`. - The fine-tuning was done by adding a fully connected linear layer to the last hidden state. - We use \\(F_{1}\\) score as a metric for all tasks. - Code used for fine-tuning is available [here](https://github.com/CAMeL-Lab/CAMeLBERT). 
### Results | Task | Dataset | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 | | -------------------- | --------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- | | NER | ANERcorp | MSA | 80.8% | 67.9% | 74.1% | 82.4% | 82.0% | 82.1% | 82.6% | 80.8% | | POS | PATB (MSA) | MSA | 98.1% | 97.8% | 97.7% | 98.3% | 98.2% | 98.3% | 98.2% | 98.2% | | | ARZTB (EGY) | DA | 93.6% | 92.3% | 92.7% | 93.6% | 93.6% | 93.7% | 93.6% | 93.6% | | | Gumar (GLF) | DA | 97.3% | 97.7% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% | | SA | ASTD | MSA | 76.3% | 69.4% | 74.6% | 76.9% | 76.0% | 76.8% | 76.7% | 75.3% | | | ArSAS | MSA | 92.7% | 89.4% | 91.8% | 93.0% | 92.6% | 92.5% | 92.5% | 92.3% | | | SemEval | MSA | 69.0% | 58.5% | 68.4% | 72.1% | 70.7% | 72.8% | 71.6% | 71.2% | | DID | MADAR-26 | DA | 62.9% | 61.9% | 61.8% | 62.6% | 62.0% | 62.8% | 62.0% | 62.2% | | | MADAR-6 | DA | 92.5% | 91.5% | 92.2% | 91.9% | 91.8% | 92.2% | 92.1% | 92.0% | | | MADAR-Twitter-5 | MSA | 75.7% | 71.4% | 74.2% | 77.6% | 78.5% | 77.3% | 77.7% | 76.2% | | | NADI | DA | 24.7% | 17.3% | 20.1% | 24.9% | 24.6% | 24.6% | 24.9% | 23.8% | | Poetry | APCD | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% | ### Results (Average) | | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 | | -------------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- | | Variant-wise-average<sup>[[1]](#footnote-1)</sup> | MSA | 82.1% | 75.7% | 80.1% | 83.4% | 83.0% | 83.3% | 83.2% | 82.3% | | | DA | 74.4% | 72.1% | 72.9% | 74.2% | 74.0% | 74.3% | 74.1% | 73.9% | | | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% | | Macro-Average | ALL | 78.7% | 74.7% | 77.1% | 79.2% | 79.0% | 79.2% | 79.1% | 78.6% | <a name="footnote-1">[1]</a>: Variant-wise-average refers to average over a group of tasks in the same language variant. ## Acknowledgements This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC). ## Citation ```bibtex @inproceedings{inoue-etal-2021-interplay, title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models", author = "Inoue, Go and Alhafni, Bashar and Baimukan, Nurpeiis and Bouamor, Houda and Habash, Nizar", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Online)", publisher = "Association for Computational Linguistics", abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.", } ```
CAMeL-Lab/bert-base-arabic-camelbert-msa-eighth
CAMeL-Lab
2021-09-14T14:29:52Z
24
2
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "ar", "arxiv:2103.06678", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- language: - ar license: apache-2.0 widget: - text: "الهدف من الحياة هو [MASK] ." --- # CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks ## Model description **CAMeLBERT** is a collection of BERT models pre-trained on Arabic texts with different sizes and variants. We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three. We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth). The details are described in the paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* This model card describes **CAMeLBERT-MSA-eighth** (`bert-base-arabic-camelbert-msa-eighth`), a model pre-trained on an eighth of the full MSA dataset. ||Model|Variant|Size|#Word| |-|-|:-:|-:|-:| ||`bert-base-arabic-camelbert-mix`|CA,DA,MSA|167GB|17.3B| ||`bert-base-arabic-camelbert-ca`|CA|6GB|847M| ||`bert-base-arabic-camelbert-da`|DA|54GB|5.8B| ||`bert-base-arabic-camelbert-msa`|MSA|107GB|12.6B| ||`bert-base-arabic-camelbert-msa-half`|MSA|53GB|6.3B| ||`bert-base-arabic-camelbert-msa-quarter`|MSA|27GB|3.1B| |✔|`bert-base-arabic-camelbert-msa-eighth`|MSA|14GB|1.6B| ||`bert-base-arabic-camelbert-msa-sixteenth`|MSA|6GB|746M| ## Intended uses You can use the released model for either masked language modeling or next sentence prediction. However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification. We release our fine-tuninig code [here](https://github.com/CAMeL-Lab/CAMeLBERT). #### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-eighth') >>> unmasker("الهدف من الحياة هو [MASK] .") [{'sequence': '[CLS] الهدف من الحياة هو الحياة. [SEP]', 'score': 0.057812128216028214, 'token': 3696, 'token_str': 'الحياة'}, {'sequence': '[CLS] الهدف من الحياة هو النجاح. [SEP]', 'score': 0.05573025345802307, 'token': 6232, 'token_str': 'النجاح'}, {'sequence': '[CLS] الهدف من الحياة هو الكمال. [SEP]', 'score': 0.035942986607551575, 'token': 17188, 'token_str': 'الكمال'}, {'sequence': '[CLS] الهدف من الحياة هو التعلم. [SEP]', 'score': 0.03375256434082985, 'token': 12554, 'token_str': 'التعلم'}, {'sequence': '[CLS] الهدف من الحياة هو العمل. [SEP]', 'score': 0.030303971841931343, 'token': 2854, 'token_str': 'العمل'}] ``` *Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually. Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-eighth') model = AutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-eighth') text = "مرحبا يا عالم." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import AutoTokenizer, TFAutoModel tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-eighth') model = TFAutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-eighth') text = "مرحبا يا عالم." 
encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Training data - MSA (Modern Standard Arabic) - [The Arabic Gigaword Fifth Edition](https://catalog.ldc.upenn.edu/LDC2011T11) - [Abu El-Khair Corpus](http://www.abuelkhair.net/index.php/en/arabic/abu-el-khair-corpus) - [OSIAN corpus](https://vlo.clarin.eu/search;jsessionid=31066390B2C9E8C6304845BA79869AC1?1&q=osian) - [Arabic Wikipedia](https://archive.org/details/arwiki-20190201) - The unshuffled version of the Arabic [OSCAR corpus](https://oscar-corpus.com/) ## Training procedure We use [the original implementation](https://github.com/google-research/bert) released by Google for pre-training. We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified. ### Preprocessing - After extracting the raw text from each corpus, we apply the following pre-processing. - We first remove invalid characters and normalize white spaces using the utilities provided by [the original BERT implementation](https://github.com/google-research/bert/blob/eedf5716ce1268e56f0a50264a88cafad334ac61/tokenization.py#L286-L297). - We also remove lines without any Arabic characters. - We then remove diacritics and kashida using [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools). - Finally, we split each line into sentences with a heuristics-based sentence segmenter. - We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using [HuggingFace's tokenizers](https://github.com/huggingface/tokenizers). - We do not lowercase letters nor strip accents. ### Pre-training - The model was trained on a single cloud TPU (`v3-8`) for one million steps in total. - The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256. - The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. - We use whole word masking and a duplicate factor of 10. - We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens. - We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1. - The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ## Evaluation results - We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification. - We fine-tune and evaluate the models using 12 dataset. - We used Hugging Face's transformers to fine-tune our CAMeLBERT models. - We used transformers `v3.1.0` along with PyTorch `v1.5.1`. - The fine-tuning was done by adding a fully connected linear layer to the last hidden state. - We use \\(F_{1}\\) score as a metric for all tasks. - Code used for fine-tuning is available [here](https://github.com/CAMeL-Lab/CAMeLBERT). 
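As a companion to the preprocessing description above, the diacritic and kashida removal step can be reproduced with CAMeL Tools. This is a hedged sketch, not the exact pipeline code; the example sentence is a placeholder.

```python
from camel_tools.utils.dediac import dediac_ar

text = "اللُّغَةُ العَرَبِيَّةُ جَمِـــيلَةٌ"  # placeholder sentence with diacritics and kashida
text = dediac_ar(text)                # remove Arabic diacritics
text = text.replace("\u0640", "")     # strip kashida (tatweel) characters
print(text)
```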
### Results | Task | Dataset | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 | | -------------------- | --------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- | | NER | ANERcorp | MSA | 80.8% | 67.9% | 74.1% | 82.4% | 82.0% | 82.1% | 82.6% | 80.8% | | POS | PATB (MSA) | MSA | 98.1% | 97.8% | 97.7% | 98.3% | 98.2% | 98.3% | 98.2% | 98.2% | | | ARZTB (EGY) | DA | 93.6% | 92.3% | 92.7% | 93.6% | 93.6% | 93.7% | 93.6% | 93.6% | | | Gumar (GLF) | DA | 97.3% | 97.7% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% | | SA | ASTD | MSA | 76.3% | 69.4% | 74.6% | 76.9% | 76.0% | 76.8% | 76.7% | 75.3% | | | ArSAS | MSA | 92.7% | 89.4% | 91.8% | 93.0% | 92.6% | 92.5% | 92.5% | 92.3% | | | SemEval | MSA | 69.0% | 58.5% | 68.4% | 72.1% | 70.7% | 72.8% | 71.6% | 71.2% | | DID | MADAR-26 | DA | 62.9% | 61.9% | 61.8% | 62.6% | 62.0% | 62.8% | 62.0% | 62.2% | | | MADAR-6 | DA | 92.5% | 91.5% | 92.2% | 91.9% | 91.8% | 92.2% | 92.1% | 92.0% | | | MADAR-Twitter-5 | MSA | 75.7% | 71.4% | 74.2% | 77.6% | 78.5% | 77.3% | 77.7% | 76.2% | | | NADI | DA | 24.7% | 17.3% | 20.1% | 24.9% | 24.6% | 24.6% | 24.9% | 23.8% | | Poetry | APCD | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% | ### Results (Average) | | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 | | -------------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- | | Variant-wise-average<sup>[[1]](#footnote-1)</sup> | MSA | 82.1% | 75.7% | 80.1% | 83.4% | 83.0% | 83.3% | 83.2% | 82.3% | | | DA | 74.4% | 72.1% | 72.9% | 74.2% | 74.0% | 74.3% | 74.1% | 73.9% | | | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% | | Macro-Average | ALL | 78.7% | 74.7% | 77.1% | 79.2% | 79.0% | 79.2% | 79.1% | 78.6% | <a name="footnote-1">[1]</a>: Variant-wise-average refers to average over a group of tasks in the same language variant. ## Acknowledgements This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC). ## Citation ```bibtex @inproceedings{inoue-etal-2021-interplay, title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models", author = "Inoue, Go and Alhafni, Bashar and Baimukan, Nurpeiis and Bouamor, Houda and Habash, Nizar", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Online)", publisher = "Association for Computational Linguistics", abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.", } ```
CAMeL-Lab/bert-base-arabic-camelbert-ca
CAMeL-Lab
2021-09-14T14:27:12Z
914
12
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "ar", "arxiv:2103.06678", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- language: - ar license: apache-2.0 widget: - text: "الهدف من الحياة هو [MASK] ." --- # CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks ## Model description **CAMeLBERT** is a collection of BERT models pre-trained on Arabic texts with different sizes and variants. We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three. We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth). The details are described in the paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* This model card describes **CAMeLBERT-CA** (`bert-base-arabic-camelbert-ca`), a model pre-trained on the CA (classical Arabic) dataset. ||Model|Variant|Size|#Word| |-|-|:-:|-:|-:| ||`bert-base-arabic-camelbert-mix`|CA,DA,MSA|167GB|17.3B| |✔|`bert-base-arabic-camelbert-ca`|CA|6GB|847M| ||`bert-base-arabic-camelbert-da`|DA|54GB|5.8B| ||`bert-base-arabic-camelbert-msa`|MSA|107GB|12.6B| ||`bert-base-arabic-camelbert-msa-half`|MSA|53GB|6.3B| ||`bert-base-arabic-camelbert-msa-quarter`|MSA|27GB|3.1B| ||`bert-base-arabic-camelbert-msa-eighth`|MSA|14GB|1.6B| ||`bert-base-arabic-camelbert-msa-sixteenth`|MSA|6GB|746M| ## Intended uses You can use the released model for either masked language modeling or next sentence prediction. However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification. We release our fine-tuninig code [here](https://github.com/CAMeL-Lab/CAMeLBERT). #### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='CAMeL-Lab/bert-base-arabic-camelbert-ca') >>> unmasker("الهدف من الحياة هو [MASK] .") [{'sequence': '[CLS] الهدف من الحياة هو الحياة. [SEP]', 'score': 0.11048116534948349, 'token': 3696, 'token_str': 'الحياة'}, {'sequence': '[CLS] الهدف من الحياة هو الإسلام. [SEP]', 'score': 0.03481195122003555, 'token': 4677, 'token_str': 'الإسلام'}, {'sequence': '[CLS] الهدف من الحياة هو الموت. [SEP]', 'score': 0.03402028977870941, 'token': 4295, 'token_str': 'الموت'}, {'sequence': '[CLS] الهدف من الحياة هو العلم. [SEP]', 'score': 0.027655426412820816, 'token': 2789, 'token_str': 'العلم'}, {'sequence': '[CLS] الهدف من الحياة هو هذا. [SEP]', 'score': 0.023059621453285217, 'token': 2085, 'token_str': 'هذا'}] ``` *Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually. Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-ca') model = AutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-ca') text = "مرحبا يا عالم." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import AutoTokenizer, TFAutoModel tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-ca') model = TFAutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-ca') text = "مرحبا يا عالم." 
encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Training data - CA (classical Arabic) - [OpenITI (Version 2020.1.2)](https://zenodo.org/record/3891466#.YEX4-F0zbzc) ## Training procedure We use [the original implementation](https://github.com/google-research/bert) released by Google for pre-training. We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified. ### Preprocessing - After extracting the raw text from each corpus, we apply the following pre-processing. - We first remove invalid characters and normalize white spaces using the utilities provided by [the original BERT implementation](https://github.com/google-research/bert/blob/eedf5716ce1268e56f0a50264a88cafad334ac61/tokenization.py#L286-L297). - We also remove lines without any Arabic characters. - We then remove diacritics and kashida using [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools). - Finally, we split each line into sentences with a heuristics-based sentence segmenter. - We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using [HuggingFace's tokenizers](https://github.com/huggingface/tokenizers). - We do not lowercase letters nor strip accents. ### Pre-training - The model was trained on a single cloud TPU (`v3-8`) for one million steps in total. - The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256. - The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. - We use whole word masking and a duplicate factor of 10. - We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens. - We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1. - The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ## Evaluation results - We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification. - We fine-tune and evaluate the models using 12 dataset. - We used Hugging Face's transformers to fine-tune our CAMeLBERT models. - We used transformers `v3.1.0` along with PyTorch `v1.5.1`. - The fine-tuning was done by adding a fully connected linear layer to the last hidden state. - We use \\(F_{1}\\) score as a metric for all tasks. - Code used for fine-tuning is available [here](https://github.com/CAMeL-Lab/CAMeLBERT). 
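The WordPiece tokenizer training mentioned in the preprocessing notes above can be sketched with Hugging Face's `tokenizers` library. This is an illustrative outline under simplified settings (plain whitespace pre-tokenization, a placeholder corpus file), not the exact training script.

```python
from tokenizers import Tokenizer
from tokenizers.models import WordPiece
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import WordPieceTrainer

# Hedged sketch: train a 30k WordPiece vocabulary on the preprocessed corpus.
# "corpus.txt" is a placeholder path for the pre-training text files.
tokenizer = Tokenizer(WordPiece(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()
trainer = WordPieceTrainer(
    vocab_size=30000,
    special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"],
)
tokenizer.train(files=["corpus.txt"], trainer=trainer)
tokenizer.save("camelbert-wordpiece.json")
```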
### Results | Task | Dataset | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 | | -------------------- | --------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- | | NER | ANERcorp | MSA | 80.8% | 67.9% | 74.1% | 82.4% | 82.0% | 82.1% | 82.6% | 80.8% | | POS | PATB (MSA) | MSA | 98.1% | 97.8% | 97.7% | 98.3% | 98.2% | 98.3% | 98.2% | 98.2% | | | ARZTB (EGY) | DA | 93.6% | 92.3% | 92.7% | 93.6% | 93.6% | 93.7% | 93.6% | 93.6% | | | Gumar (GLF) | DA | 97.3% | 97.7% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% | | SA | ASTD | MSA | 76.3% | 69.4% | 74.6% | 76.9% | 76.0% | 76.8% | 76.7% | 75.3% | | | ArSAS | MSA | 92.7% | 89.4% | 91.8% | 93.0% | 92.6% | 92.5% | 92.5% | 92.3% | | | SemEval | MSA | 69.0% | 58.5% | 68.4% | 72.1% | 70.7% | 72.8% | 71.6% | 71.2% | | DID | MADAR-26 | DA | 62.9% | 61.9% | 61.8% | 62.6% | 62.0% | 62.8% | 62.0% | 62.2% | | | MADAR-6 | DA | 92.5% | 91.5% | 92.2% | 91.9% | 91.8% | 92.2% | 92.1% | 92.0% | | | MADAR-Twitter-5 | MSA | 75.7% | 71.4% | 74.2% | 77.6% | 78.5% | 77.3% | 77.7% | 76.2% | | | NADI | DA | 24.7% | 17.3% | 20.1% | 24.9% | 24.6% | 24.6% | 24.9% | 23.8% | | Poetry | APCD | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% | ### Results (Average) | | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 | | -------------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- | | Variant-wise-average<sup>[[1]](#footnote-1)</sup> | MSA | 82.1% | 75.7% | 80.1% | 83.4% | 83.0% | 83.3% | 83.2% | 82.3% | | | DA | 74.4% | 72.1% | 72.9% | 74.2% | 74.0% | 74.3% | 74.1% | 73.9% | | | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% | | Macro-Average | ALL | 78.7% | 74.7% | 77.1% | 79.2% | 79.0% | 79.2% | 79.1% | 78.6% | <a name="footnote-1">[1]</a>: Variant-wise-average refers to average over a group of tasks in the same language variant. ## Acknowledgements This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC). ## Citation ```bibtex @inproceedings{inoue-etal-2021-interplay, title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models", author = "Inoue, Go and Alhafni, Bashar and Baimukan, Nurpeiis and Bouamor, Houda and Habash, Nizar", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Online)", publisher = "Association for Computational Linguistics", abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.", } ```
huggingtweets/4by3animetits
huggingtweets
2021-09-14T06:15:43Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/4by3animetits/1631600106043/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1437436917201637376/YMXf838Y_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Numb</div> <div style="text-align: center; font-size: 14px;">@4by3animetits</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Numb. | Data | Numb | | --- | --- | | Tweets downloaded | 3206 | | Retweets | 1497 | | Short tweets | 491 | | Tweets kept | 1218 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3pdw5mgr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @4by3animetits's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/5yrdnbzr) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/5yrdnbzr/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/4by3animetits') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/jamescharles-loganpaul-tanamongeau
huggingtweets
2021-09-14T05:53:11Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/jamescharles-loganpaul-tanamongeau/1631598787303/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1420806762408464385/10y3M0iO_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1324782032124215296/HMG6-q8g_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1401837042934468611/okzqIoMb_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">CANCELLED & James Charles & Logan Paul</div> <div style="text-align: center; font-size: 14px;">@jamescharles-loganpaul-tanamongeau</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from CANCELLED & James Charles & Logan Paul. | Data | CANCELLED | James Charles | Logan Paul | | --- | --- | --- | --- | | Tweets downloaded | 3167 | 3182 | 3246 | | Retweets | 938 | 480 | 98 | | Short tweets | 522 | 496 | 287 | | Tweets kept | 1707 | 2206 | 2861 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2avr905u/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jamescharles-loganpaul-tanamongeau's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2at101p1) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2at101p1/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/jamescharles-loganpaul-tanamongeau') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
gagan3012/distilbert-base-uncased-finetuned-ner
gagan3012
2021-09-14T00:36:53Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: distilbert-base-uncased-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 metrics: - name: Precision type: precision value: 0.9274238227146815 - name: Recall type: recall value: 0.9363463474661595 - name: F1 type: f1 value: 0.9318637274549098 - name: Accuracy type: accuracy value: 0.9839865283492462 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0614 - Precision: 0.9274 - Recall: 0.9363 - F1: 0.9319 - Accuracy: 0.9840 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2403 | 1.0 | 878 | 0.0701 | 0.9101 | 0.9202 | 0.9151 | 0.9805 | | 0.0508 | 2.0 | 1756 | 0.0600 | 0.9220 | 0.9350 | 0.9285 | 0.9833 | | 0.0301 | 3.0 | 2634 | 0.0614 | 0.9274 | 0.9363 | 0.9319 | 0.9840 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.9.0+cu102 - Datasets 1.12.0 - Tokenizers 0.10.3
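The card does not yet include a usage snippet; a hedged inference sketch with the token-classification pipeline is shown below (the example sentence is arbitrary).

```python
from transformers import pipeline

# Minimal inference sketch for the fine-tuned NER model; the input sentence
# is only an example, and entity labels follow the CoNLL-2003 tag set.
ner = pipeline(
    "token-classification",
    model="gagan3012/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)
print(ner("Hugging Face is based in New York City."))
```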
dbmdz/bert-base-french-europeana-cased
dbmdz
2021-09-13T21:03:24Z
44,865
4
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "historic french", "fr", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: fr license: mit tags: - "historic french" --- # 🤗 + 📚 dbmdz BERT model In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State Library open sources French Europeana BERT models 🎉 # French Europeana BERT We extracted all French texts using the `language` metadata attribute from the Europeana corpus. The resulting corpus has a size of 63GB and consists of 11,052,528,456 tokens. Based on the metadata information, texts from the 18th - 20th century are mainly included in the training corpus. Detailed information about the data and pretraining steps can be found in [this repository](https://github.com/stefan-it/europeana-bert). ## Model weights BERT model weights for PyTorch and TensorFlow are available. * French Europeana BERT: `dbmdz/bert-base-french-europeana-cased` - [model hub page](https://huggingface.co/dbmdz/bert-base-french-europeana-cased/tree/main) ## Results For results on Historic NER, please refer to [this repository](https://github.com/stefan-it/europeana-bert). ## Usage With Transformers >= 2.3 our French Europeana BERT model can be loaded like: ```python from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-french-europeana-cased") model = AutoModel.from_pretrained("dbmdz/bert-base-french-europeana-cased") ``` # Huggingface model hub All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz). # Contact (Bugs, Feedback, Contribution and more) For questions about our BERT model just open an issue [here](https://github.com/dbmdz/berts/issues/new) 🤗 # Acknowledgments Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️ Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download our model from their S3 storage 🤗
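Since TensorFlow weights are also published for this checkpoint, it can be loaded with the TF model classes as well; a brief illustrative sketch:

```python
from transformers import AutoTokenizer, TFAutoModel

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-french-europeana-cased")
model = TFAutoModel.from_pretrained("dbmdz/bert-base-french-europeana-cased")
```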
blizrys/biobert-v1.1-finetuned-pubmedqa
blizrys
2021-09-13T17:56:32Z
35,033
1
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - null metrics: - accuracy model-index: - name: biobert-v1.1-finetuned-pubmedqa results: - task: name: Text Classification type: text-classification metrics: - name: Accuracy type: accuracy value: 0.7 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # biobert-v1.1-finetuned-pubmedqa This model is a fine-tuned version of [dmis-lab/biobert-v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7737 - Accuracy: 0.7 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 57 | 0.8810 | 0.56 | | No log | 2.0 | 114 | 0.8139 | 0.62 | | No log | 3.0 | 171 | 0.7963 | 0.68 | | No log | 4.0 | 228 | 0.7709 | 0.66 | | No log | 5.0 | 285 | 0.7931 | 0.64 | | No log | 6.0 | 342 | 0.7420 | 0.7 | | No log | 7.0 | 399 | 0.7654 | 0.7 | | No log | 8.0 | 456 | 0.7756 | 0.68 | | 0.5849 | 9.0 | 513 | 0.7605 | 0.68 | | 0.5849 | 10.0 | 570 | 0.7737 | 0.7 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
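No usage snippet is given above, so here is a hedged inference sketch. The input formatting (how question and abstract are combined) and the label names are assumptions, since the card does not document them.

```python
from transformers import pipeline

# Hedged sketch: the model was trained as a sequence classifier on PubMedQA-style
# data, so inference goes through the text-classification pipeline. The question
# below is only an example; real inputs would pair a question with its abstract.
classifier = pipeline(
    "text-classification",
    model="blizrys/biobert-v1.1-finetuned-pubmedqa",
)
print(classifier("Is aspirin effective for primary prevention of cardiovascular disease?"))
```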
Narrativa/distilroberta-finetuned-stereotype-detection
Narrativa
2021-09-13T14:52:21Z
7
4
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "stereotype", "gender", "gender_bias", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer - stereotype - gender - gender_bias widget: - text: "Cauterize is not just for fans of the guitarist or his other projects, but those that love music that is both aggressive and infectious and gave the album 4 out of 5 stars ." metrics: - accuracy model-index: - name: distilRoberta-stereotype results: - task: name: Text Classification type: text-classification metrics: - name: Accuracy type: accuracy value: 0.989151002901476 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilRoberta-stereotype This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0651 - Accuracy: 0.9892 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.0783 | 1.0 | 5615 | 0.0703 | 0.9847 | | 0.0468 | 2.0 | 11230 | 0.0573 | 0.9863 | | 0.0316 | 3.0 | 16845 | 0.0580 | 0.9882 | | 0.0172 | 4.0 | 22460 | 0.0591 | 0.9885 | | 0.0098 | 5.0 | 28075 | 0.0651 | 0.9892 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3 Created by: [Narrativa](https://www.narrativa.com/) About Narrativa: Natural Language Generation (NLG) | Gabriele, our machine learning-based platform, builds and deploys natural language solutions. #NLG #AI
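A usage snippet is not included above; the following hedged sketch runs the classifier on the widget sentence from the card (the meaning of the returned label names is not documented here and may need mapping).

```python
from transformers import pipeline

# Illustrative inference sketch using the widget example from this card.
classifier = pipeline(
    "text-classification",
    model="Narrativa/distilroberta-finetuned-stereotype-detection",
)
text = ("Cauterize is not just for fans of the guitarist or his other projects, "
        "but those that love music that is both aggressive and infectious and "
        "gave the album 4 out of 5 stars .")
print(classifier(text))
```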
shashank2123/t5-base-fine-tuned-for-Punctuation-Restoration
shashank2123
2021-09-13T14:42:51Z
34
1
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: t5-base-fine-tuned-for-Punctuation-Restoration results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-base-fine-tuned-for-Punctuation-Restoration This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1097 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.1796 | 1.0 | 1431 | 0.1097 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
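For completeness, a hedged inference sketch is shown below. Whether the checkpoint expects a task prefix is not documented, so the unpunctuated sentence is passed as-is; adjust if the outputs look off.

```python
from transformers import pipeline

# Hedged sketch: punctuation restoration treated as text-to-text generation.
restorer = pipeline(
    "text2text-generation",
    model="shashank2123/t5-base-fine-tuned-for-Punctuation-Restoration",
)
print(restorer("hello how are you today i hope you are doing well"))
```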
blizrys/biobert-v1.1-finetuned-pubmedqa-adapter
blizrys
2021-09-13T13:49:08Z
0
0
null
[ "tensorboard", "generated_from_trainer", "region:us" ]
null
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - null metrics: - accuracy model_index: - name: biobert-v1.1-finetuned-pubmedqa-adapter results: - task: name: Text Classification type: text-classification metric: name: Accuracy type: accuracy value: 0.48 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # biobert-v1.1-finetuned-pubmedqa-adapter This model is a fine-tuned version of [dmis-lab/biobert-v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.0910 - Accuracy: 0.48 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 57 | 0.9848 | 0.58 | | No log | 2.0 | 114 | 0.8537 | 0.58 | | No log | 3.0 | 171 | 0.9565 | 0.42 | | No log | 4.0 | 228 | 0.9659 | 0.56 | | No log | 5.0 | 285 | 0.9763 | 0.6 | | No log | 6.0 | 342 | 1.0647 | 0.66 | | No log | 7.0 | 399 | 1.4305 | 0.6 | | No log | 8.0 | 456 | 2.0545 | 0.56 | | 0.6957 | 9.0 | 513 | 2.2438 | 0.5 | | 0.6957 | 10.0 | 570 | 2.0910 | 0.48 | ### Framework versions - Transformers 4.8.2 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
nielsr/beit-base-patch16-224
nielsr
2021-09-13T13:36:43Z
73
0
transformers
[ "transformers", "pytorch", "jax", "beit", "image-classification", "dataset:imagenet", "dataset:imagenet-21k", "arxiv:2106.08254", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - image-classification datasets: - imagenet - imagenet-21k --- # BEiT (base-sized model, fine-tuned on ImageNet-1k after being intermediately fine-tuned on ImageNet-22k) BEiT (BERT pre-training of Image Transformers) model pre-trained in a self-supervised way on ImageNet-22k (14 million images, 21,841 classes) at resolution 224x224, intermediately fine-tuned on the same dataset, and then fine-tuned on ImageNet-1k (1,000 classes) at the same resolution. It was introduced in the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong and Furu Wei and first released in [this repository](https://github.com/microsoft/unilm/tree/master/beit). Disclaimer: The team releasing BEiT did not write a model card for this model, so this model card has been written by the Hugging Face team.
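The card stops short of a usage example; a hedged classification sketch in the style of the official BEiT examples is given below (the image URL is just a sample, and ImageNet label names are assumed to ship with the checkpoint).

```python
import requests
from PIL import Image
from transformers import BeitFeatureExtractor, BeitForImageClassification

# Illustrative usage sketch; the COCO image URL is only an example input.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = BeitFeatureExtractor.from_pretrained("nielsr/beit-base-patch16-224")
model = BeitForImageClassification.from_pretrained("nielsr/beit-base-patch16-224")

inputs = feature_extractor(images=image, return_tensors="pt")
logits = model(**inputs).logits
predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])  # assumes ImageNet-1k labels are stored in the config
```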
SIKU-BERT/sikubert
SIKU-BERT
2021-09-13T13:34:40Z
419
10
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "chinese", "classical chinese", "literary chinese", "ancient chinese", "roberta", "zh", "license:apache-2.0", "autotrain_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- language: - "zh" thumbnail: "https://raw.githubusercontent.com/SIKU-BERT/SikuBERT/main/appendix/sikubert.png" tags: - "chinese" - "classical chinese" - "literary chinese" - "ancient chinese" - "bert" - "roberta" - "pytorch" inference: false license: "apache-2.0" --- # SikuBERT ## Model description ![SikuBERT](https://raw.githubusercontent.com/SIKU-BERT/SikuBERT-for-digital-humanities-and-classical-Chinese-information-processing/main/appendix/sikubert.png) Digital humanities research needs the support of large-scale corpora and high-performance natural language processing tools for ancient Chinese. Pre-trained language models have greatly improved text-mining accuracy for English and modern Chinese, but a pre-trained model designed specifically for the automatic processing of ancient Chinese texts is still urgently needed. Using the verified, high-quality “Siku Quanshu” full-text corpus as the training set and the BERT deep language model architecture as the base, we constructed the SikuBERT and SikuRoBERTa pre-trained language models for intelligent processing tasks on ancient Chinese. ## How to use ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("SIKU-BERT/sikubert") model = AutoModel.from_pretrained("SIKU-BERT/sikubert") ``` ## About Us We are from Nanjing Agricultural University. > Created by SIKU-BERT [![Github icon](https://cdn0.iconfinder.com/data/icons/octicons/1024/mark-github-32.png)](https://github.com/SIKU-BERT/SikuBERT-for-digital-humanities-and-classical-Chinese-information-processing)
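Beyond feature extraction, the checkpoint is tagged for fill-mask, so masked-token prediction should also work locally; a hedged sketch with an illustrative classical Chinese sentence:

```python
from transformers import pipeline

# Hedged sketch of masked-token prediction; the sentence is only an example.
fill_mask = pipeline("fill-mask", model="SIKU-BERT/sikubert")
print(fill_mask("天地玄黃，宇宙洪[MASK]。"))
```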
Gregor/xlm-roberta-base-wmt21-qe
Gregor
2021-09-13T11:30:27Z
1
0
adapter-transformers
[ "adapter-transformers", "adapterhub:quality_estimation/wmt21", "xlm-roberta", "region:us" ]
null
2022-03-02T23:29:04Z
--- tags: - adapter-transformers - adapterhub:quality_estimation/wmt21 - xlm-roberta --- # Adapter `Gregor/xlm-roberta-base-wmt21-qe` for xlm-roberta-base An [adapter](https://adapterhub.ml) for the xlm-roberta-base model that was trained on the [quality_estimation/wmt21](https://adapterhub.ml/explore/quality_estimation/wmt21/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("xlm-roberta-base") adapter_name = model.load_adapter("Gregor/xlm-roberta-base-wmt21-qe") model.active_adapters = adapter_name ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
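To complement the loading snippet above, here is a hedged inference sketch. Feeding the source sentence and its machine translation as a sentence pair is an assumption; the exact WMT21 QE input format is not documented in this card.

```python
import torch
from transformers import AutoTokenizer, AutoModelWithHeads  # adapter-transformers fork

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelWithHeads.from_pretrained("xlm-roberta-base")
model.active_adapters = model.load_adapter("Gregor/xlm-roberta-base-wmt21-qe")

# Hypothetical source / machine-translation pair; the pairing format is an assumption.
inputs = tokenizer("The cat sat on the mat.",
                   "Die Katze saß auf der Matte.",
                   return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs)  # the classification head's scores are contained in this output
```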
Gregor/bert-base-multilingual-cased-wmt21-qe
Gregor
2021-09-13T11:20:52Z
3
0
adapter-transformers
[ "adapter-transformers", "adapterhub:quality_estimation/wmt21", "bert", "region:us" ]
null
2022-03-02T23:29:04Z
--- tags: - adapter-transformers - adapterhub:quality_estimation/wmt21 - bert --- # Adapter `Gregor/bert-base-multilingual-cased-wmt21-qe` for bert-base-multilingual-cased An [adapter](https://adapterhub.ml) for the bert-base-multilingual-cased model that was trained on the [quality_estimation/wmt21](https://adapterhub.ml/explore/quality_estimation/wmt21/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-multilingual-cased") adapter_name = model.load_adapter("Gregor/bert-base-multilingual-cased-wmt21-qe") model.active_adapters = adapter_name ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
huggingtweets/lux_capital
huggingtweets
2021-09-13T09:48:37Z
3
1
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/lux_capital/1631526513457/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/728194457632395264/rwtxA-v4_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Lux Capital</div> <div style="text-align: center; font-size: 14px;">@lux_capital</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Lux Capital. | Data | Lux Capital | | --- | --- | | Tweets downloaded | 2329 | | Retweets | 597 | | Short tweets | 22 | | Tweets kept | 1710 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3khqan1v/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @lux_capital's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1gfkbn7u) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1gfkbn7u/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/lux_capital') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
eugenesiow/edsr
eugenesiow
2021-09-13T03:46:42Z
4,608
4
transformers
[ "transformers", "EDSR", "super-image", "image-super-resolution", "dataset:eugenesiow/Div2k", "dataset:eugenesiow/Set5", "dataset:eugenesiow/Set14", "dataset:eugenesiow/BSD100", "dataset:eugenesiow/Urban100", "arxiv:1707.02921", "arxiv:2104.07566", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - super-image - image-super-resolution datasets: - eugenesiow/Div2k - eugenesiow/Set5 - eugenesiow/Set14 - eugenesiow/BSD100 - eugenesiow/Urban100 metrics: - pnsr - ssim --- # Enhanced Deep Residual Networks for Single Image Super-Resolution (EDSR) EDSR model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper [Enhanced Deep Residual Networks for Single Image Super-Resolution](https://arxiv.org/abs/1707.02921) by Lim et al. (2017) and first released in [this repository](https://github.com/sanghyun-son/EDSR-PyTorch). The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling x2 and EDSR upscaling x2. ![Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 4](images/edsr_4_4_compare.png "Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 4") ## Model description EDSR is a model that uses both deeper and wider architecture (32 ResBlocks and 256 channels) to improve performance. It uses both global and local skip connections, and up-scaling is done at the end of the network. It doesn't use batch normalization layers (input and output have similar distributions, normalizing intermediate features may not be desirable) instead it uses constant scaling layers to ensure stable training. An L1 loss function (absolute error) is used instead of L2 (MSE), the authors showed better performance empirically and it requires less computation. This is a base model (~5mb vs ~100mb) that includes just 16 ResBlocks and 64 channels. ## Intended uses & limitations You can use the pre-trained models for upscaling your images 2x, 3x and 4x. You can also use the trainer to train a model on your own dataset. ### How to use The model can be used with the [super_image](https://github.com/eugenesiow/super-image) library: ```bash pip install super-image ``` Here is how to use a pre-trained model to upscale your image: ```python from super_image import EdsrModel, ImageLoader from PIL import Image import requests url = 'https://paperswithcode.com/media/datasets/Set5-0000002728-07a9793f_zA3bDjj.jpg' image = Image.open(requests.get(url, stream=True).raw) model = EdsrModel.from_pretrained('eugenesiow/edsr', scale=2) # scale 2, 3 and 4 models available inputs = ImageLoader.load_image(image) preds = model(inputs) ImageLoader.save_image(preds, './scaled_2x.png') # save the output 2x scaled image to `./scaled_2x.png` ImageLoader.save_compare(inputs, preds, './scaled_2x_compare.png') # save an output comparing the super-image with a bicubic scaling ``` [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Upscale_Images_with_Pretrained_super_image_Models.ipynb "Open in Colab") ## Training data The models for 2x, 3x and 4x image super resolution were pretrained on [DIV2K](https://huggingface.co/datasets/eugenesiow/Div2k), a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900). ## Training procedure ### Preprocessing We follow the pre-processing and training method of [Wang et al.](https://arxiv.org/abs/2104.07566). 
Low Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times. During training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches. Data augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image. We need the huggingface [datasets](https://huggingface.co/datasets?filter=task_ids:other-other-image-super-resolution) library to download the data: ```bash pip install datasets ``` The following code gets the data and preprocesses/augments the data. ```python from datasets import load_dataset from super_image.data import EvalDataset, TrainDataset, augment_five_crop augmented_dataset = load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='train')\ .map(augment_five_crop, batched=True, desc="Augmenting Dataset") # download and augment the data with the five_crop method train_dataset = TrainDataset(augmented_dataset) # prepare the train dataset for loading PyTorch DataLoader eval_dataset = EvalDataset(load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='validation')) # prepare the eval dataset for the PyTorch DataLoader ``` ### Pretraining The model was trained on GPU. The training code is provided below: ```python from super_image import Trainer, TrainingArguments, EdsrModel, EdsrConfig training_args = TrainingArguments( output_dir='./results', # output directory num_train_epochs=1000, # total number of training epochs ) config = EdsrConfig( scale=4, # train a model to upscale 4x ) model = EdsrModel(config) trainer = Trainer( model=model, # the instantiated model to be trained args=training_args, # training arguments, defined above train_dataset=train_dataset, # training dataset eval_dataset=eval_dataset # evaluation dataset ) trainer.train() ``` [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Train_super_image_Models.ipynb "Open in Colab") ## Evaluation results The evaluation metrics include [PSNR](https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio#Quality_estimation_with_PSNR) and [SSIM](https://en.wikipedia.org/wiki/Structural_similarity#Algorithm). Evaluation datasets include: - Set5 - [Bevilacqua et al. (2012)](https://huggingface.co/datasets/eugenesiow/Set5) - Set14 - [Zeyde et al. (2010)](https://huggingface.co/datasets/eugenesiow/Set14) - BSD100 - [Martin et al. (2001)](https://huggingface.co/datasets/eugenesiow/BSD100) - Urban100 - [Huang et al. (2015)](https://huggingface.co/datasets/eugenesiow/Urban100) The results columns below are represented below as `PSNR/SSIM`. They are compared against a Bicubic baseline. 
The results columns below are reported as `PSNR/SSIM` and are compared against a Bicubic baseline.

|Dataset  |Scale |Bicubic      |edsr             |
|---      |---   |---          |---              |
|Set5     |2x    |33.64/0.9292 |**38.19/0.9612** |
|Set5     |3x    |30.39/0.8678 |**35.31/0.9421** |
|Set5     |4x    |28.42/0.8101 |**32.5/0.8986**  |
|Set14    |2x    |30.22/0.8683 |**33.99/0.9215** |
|Set14    |3x    |27.53/0.7737 |**31.18/0.862**  |
|Set14    |4x    |25.99/0.7023 |**28.92/0.7899** |
|BSD100   |2x    |29.55/0.8425 |**33.89/0.9266** |
|BSD100   |3x    |27.20/0.7382 |**29.77/0.8224** |
|BSD100   |4x    |25.96/0.6672 |**28.62/0.7689** |
|Urban100 |2x    |26.66/0.8408 |**32.68/0.9331** |
|Urban100 |3x    |             |**29.75/0.8825** |
|Urban100 |4x    |23.14/0.6573 |**26.53/0.7995** |

![Comparing Bicubic upscaling against the model's x4 upscaling on Set5 Image 2](images/edsr_2_4_compare.png "Comparing Bicubic upscaling against the model's x4 upscaling on Set5 Image 2")

You can find a notebook to easily run evaluation on pretrained models below:

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Evaluate_Pretrained_super_image_Models.ipynb "Open in Colab")

## BibTeX entry and citation info

```bibtex
@InProceedings{Lim_2017_CVPR_Workshops,
    author = {Lim, Bee and Son, Sanghyun and Kim, Heewon and Nah, Seungjun and Lee, Kyoung Mu},
    title = {Enhanced Deep Residual Networks for Single Image Super-Resolution},
    booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month = {July},
    year = {2017}
}
```
huggingartists/peter-paul-and-mary
huggingartists
2021-09-13T00:35:13Z
4
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/peter-paul-and-mary", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/peter-paul-and-mary tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/02fe78bca7c47dc6869673e7552c7978.500x338x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Peter, Paul and Mary</div> <a href="https://genius.com/artists/peter-paul-and-mary"> <div style="text-align: center; font-size: 14px;">@peter-paul-and-mary</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Peter, Paul and Mary. Dataset is available [here](https://huggingface.co/datasets/huggingartists/peter-paul-and-mary). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/peter-paul-and-mary") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/svwa6bev/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Peter, Paul and Mary's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1s4mkr9x) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1s4mkr9x/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/peter-paul-and-mary') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/peter-paul-and-mary") model = AutoModelWithLMHead.from_pretrained("huggingartists/peter-paul-and-mary") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingartists/tony-raut-and-garry-topor
huggingartists
2021-09-12T20:04:03Z
6
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/tony-raut-and-garry-topor", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/tony-raut-and-garry-topor tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/7249d6785a5c87095850bd4048595e08.989x989x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Тони Раут (Tony Raut) & Гарри Топор (Garry Topor)</div> <a href="https://genius.com/artists/tony-raut-and-garry-topor"> <div style="text-align: center; font-size: 14px;">@tony-raut-and-garry-topor</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Тони Раут (Tony Raut) & Гарри Топор (Garry Topor). Dataset is available [here](https://huggingface.co/datasets/huggingartists/tony-raut-and-garry-topor). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/tony-raut-and-garry-topor") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/xnzxet17/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Тони Раут (Tony Raut) & Гарри Топор (Garry Topor)'s lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/tfby1rj2) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/tfby1rj2/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/tony-raut-and-garry-topor') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/tony-raut-and-garry-topor") model = AutoModelWithLMHead.from_pretrained("huggingartists/tony-raut-and-garry-topor") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
blizrys/biobert-base-cased-v1.1-finetuned-pubmedqa
blizrys
2021-09-12T15:54:21Z
24
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - null metrics: - accuracy model-index: - name: biobert-base-cased-v1.1-finetuned-pubmedqa results: - task: name: Text Classification type: text-classification metrics: - name: Accuracy type: accuracy value: 0.5 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # biobert-base-cased-v1.1-finetuned-pubmedqa This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.1](https://huggingface.co/dmis-lab/biobert-base-cased-v1.1) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.3182 - Accuracy: 0.5 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 57 | 0.8591 | 0.58 | | No log | 2.0 | 114 | 0.9120 | 0.58 | | No log | 3.0 | 171 | 0.8159 | 0.62 | | No log | 4.0 | 228 | 1.1651 | 0.54 | | No log | 5.0 | 285 | 1.2350 | 0.6 | | No log | 6.0 | 342 | 1.5563 | 0.68 | | No log | 7.0 | 399 | 2.0233 | 0.58 | | No log | 8.0 | 456 | 2.2054 | 0.5 | | 0.4463 | 9.0 | 513 | 2.2434 | 0.5 | | 0.4463 | 10.0 | 570 | 2.3182 | 0.5 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
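For readers who want to see how the hyperparameters listed above fit together, the sketch below maps them onto `transformers.TrainingArguments`. It is illustrative only: the PubMedQA loading and tokenization steps are omitted because the card does not document them, and the three-way label setup (yes/no/maybe) is an assumption.

```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "dmis-lab/biobert-base-cased-v1.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# num_labels=3 assumes the yes/no/maybe labels of PubMedQA.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

training_args = TrainingArguments(
    output_dir="./results",
    learning_rate=5e-5,              # matches the card's learning_rate
    per_device_train_batch_size=8,   # train_batch_size
    per_device_eval_batch_size=8,    # eval_batch_size
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)

# With tokenized train/eval datasets prepared, training would look like:
# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=train_dataset, eval_dataset=eval_dataset)
# trainer.train()
```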
huggingartists/aikko
huggingartists
2021-09-12T14:10:49Z
6
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/aikko", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/aikko tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/a1a40316d1405fa83df2a21923d64168.1000x1000x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">⁣aikko</div> <a href="https://genius.com/artists/aikko"> <div style="text-align: center; font-size: 14px;">@aikko</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from ⁣aikko. Dataset is available [here](https://huggingface.co/datasets/huggingartists/aikko). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/aikko") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1cfdpsrg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on ⁣aikko's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/oesyn53g) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/oesyn53g/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/aikko') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/aikko") model = AutoModelWithLMHead.from_pretrained("huggingartists/aikko") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingartists/5nizza
huggingartists
2021-09-12T12:34:26Z
7
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/5nizza", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/5nizza tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/289ded19d51d41798be99217d6059eb3.458x458x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">5’Nizza</div> <a href="https://genius.com/artists/5nizza"> <div style="text-align: center; font-size: 14px;">@5nizza</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from 5’Nizza. Dataset is available [here](https://huggingface.co/datasets/huggingartists/5nizza). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/5nizza") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1zcp1grf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on 5’Nizza's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/2zg6pzw7) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/2zg6pzw7/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/5nizza') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/5nizza") model = AutoModelWithLMHead.from_pretrained("huggingartists/5nizza") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingartists/lyapis-trubetskoy
huggingartists
2021-09-12T12:17:52Z
4
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/lyapis-trubetskoy", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/lyapis-trubetskoy tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/452918252959798bad82762cda0dc2d7.340x340x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Ляпис Трубецкой (Lyapis Trubetskoy)</div> <a href="https://genius.com/artists/lyapis-trubetskoy"> <div style="text-align: center; font-size: 14px;">@lyapis-trubetskoy</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Ляпис Трубецкой (Lyapis Trubetskoy). Dataset is available [here](https://huggingface.co/datasets/huggingartists/lyapis-trubetskoy). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/lyapis-trubetskoy") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1ycs0usm/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Ляпис Трубецкой (Lyapis Trubetskoy)'s lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/uz1xtq0k) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/uz1xtq0k/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/lyapis-trubetskoy') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/lyapis-trubetskoy") model = AutoModelWithLMHead.from_pretrained("huggingartists/lyapis-trubetskoy") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
pritoms/gpt-neo-125M-Byethon
pritoms
2021-09-12T11:14:38Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt_neo", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - null model-index: - name: gpt-neo-125M-Byethon results: - task: name: Causal Language Modeling type: text-generation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt-neo-125M-Byethon This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6609 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 237 | 0.8348 | | No log | 2.0 | 474 | 0.6931 | | 0.8151 | 3.0 | 711 | 0.6609 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
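The training data for this model is likewise not documented, but the hyperparameters above correspond to a standard causal-language-modeling fine-tune with the `transformers` Trainer. The sketch below is illustrative only: data loading and tokenization are omitted, and everything other than the base checkpoint and the listed hyperparameters is an assumption.

```python
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

base = "EleutherAI/gpt-neo-125M"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token   # GPT-Neo ships without a padding token
model = AutoModelForCausalLM.from_pretrained(base)

# For causal LM fine-tuning the collator copies input_ids into labels (mlm=False).
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="./results",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)

# With a tokenized dataset prepared:
# trainer = Trainer(model=model, args=args, data_collator=collator,
#                   train_dataset=train_dataset, eval_dataset=eval_dataset)
# trainer.train()
```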
huggingtweets/eb_txt
huggingtweets
2021-09-12T09:03:48Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/eb_txt/1631437320065/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/954020088113516544/zVvwNoLj_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">earthbound.txt</div> <div style="text-align: center; font-size: 14px;">@eb_txt</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from earthbound.txt. | Data | earthbound.txt | | --- | --- | | Tweets downloaded | 3224 | | Retweets | 0 | | Short tweets | 57 | | Tweets kept | 3167 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1mrg6xog/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @eb_txt's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/6w53hei7) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/6w53hei7/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/eb_txt') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
tarikul/distilbert-base-uncased-finetuned-squad
tarikul
2021-09-12T07:19:38Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-squad results: - task: name: Question Answering type: question-answering --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.10.2 - Pytorch 1.9.0+cu102 - Tokenizers 0.10.3
huggingartists/loud-luxury
huggingartists
2021-09-12T03:29:59Z
4
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/loud-luxury", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/loud-luxury tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/6aa21ea8658908051e15b8d7808b5196.1000x1000x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Loud Luxury</div> <a href="https://genius.com/artists/loud-luxury"> <div style="text-align: center; font-size: 14px;">@loud-luxury</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Loud Luxury. Dataset is available [here](https://huggingface.co/datasets/huggingartists/loud-luxury). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/loud-luxury") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/2a6kq74a/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Loud Luxury's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/2l3op3mf) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/2l3op3mf/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/loud-luxury') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/loud-luxury") model = AutoModelWithLMHead.from_pretrained("huggingartists/loud-luxury") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingartists/post-malone
huggingartists
2021-09-12T03:17:01Z
6
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/post-malone", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/post-malone tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/1010194fa644be099aa2d1329de0b230.448x448x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Post Malone</div> <a href="https://genius.com/artists/post-malone"> <div style="text-align: center; font-size: 14px;">@post-malone</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Post Malone. Dataset is available [here](https://huggingface.co/datasets/huggingartists/post-malone). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/post-malone") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/5ig21wpy/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Post Malone's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/2ih9ntzv) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/2ih9ntzv/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/post-malone') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/post-malone") model = AutoModelWithLMHead.from_pretrained("huggingartists/post-malone") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingartists/armin-van-buuren
huggingartists
2021-09-12T03:06:42Z
5
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/armin-van-buuren", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/armin-van-buuren tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/b1a35069a1a44927425ef26c0bbda4a4.1000x1000x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Armin van Buuren</div> <a href="https://genius.com/artists/armin-van-buuren"> <div style="text-align: center; font-size: 14px;">@armin-van-buuren</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Armin van Buuren. Dataset is available [here](https://huggingface.co/datasets/huggingartists/armin-van-buuren). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/armin-van-buuren") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/hrrfc55y/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Armin van Buuren's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/3q93rwo8) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/3q93rwo8/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/armin-van-buuren') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/armin-van-buuren") model = AutoModelWithLMHead.from_pretrained("huggingartists/armin-van-buuren") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
secometo/mt5-base-turkish-question-paraphrase-generator
secometo
2021-09-11T17:14:52Z
22
5
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
# Turkish-question-paraphrase-generator

An mT5-based pre-trained model that generates question paraphrases in Turkish.

## Acknowledgement

We undertook this project as our BLM3010 Computer Project at Yildiz Technical University, with the goal of conducting research on Turkish in an area that has not been studied much. In the process, we compared models trained with different algorithms, developed a dataset, and shared it by writing an article about our model. We would like to thank our mentor, <a href="https://github.com/mfatihamasyali">Mehmet Fatih Amasyali</a>, who has always supported us on this path.

## Usage

```python
import transformers, sentencepiece
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("secometo/mt5-base-turkish-question-paraphrase-generator")
model = AutoModelForSeq2SeqLM.from_pretrained("secometo/mt5-base-turkish-question-paraphrase-generator")

original_sentence = tokenizer.encode_plus("Ülke genelinde bisiklet kullanımının artması hakkında ne düşünüyorsun?", return_tensors='pt')
paraphrased_sentences = model.generate(original_sentence['input_ids'], max_length=150, num_return_sequences=5, num_beams=5)
tokenizer.batch_decode(paraphrased_sentences, skip_special_tokens=True)
```

## Input

```
Ülke genelinde bisiklet kullanımının artması hakkında ne düşünüyorsun?
```

## Outputs

```
['ülke genelinde bisiklet kullanımının artması ile ilgili düşünceniz nedir?',
 'ülke genelinde bisiklet kullanımının artması hakkında düşünceniz nedir?',
 'ülke genelinde bisiklet kullanımının artması için ne düşünüyorsunuz?',
 'ülke genelinde bisiklet kullanımının artması hakkında ne düşünüyorsunuz?',
 'ülke genelinde bisiklet kullanımının artması hakkında fikirleriniz nedir?']
```

## Dataset

We used 50,994 manually created question sentence pairs to train our model. The dataset was provided by our mentor; the sentences were extracted from the titles of topics on the popular Turkish forum website donanimhaber.com. We augmented the dataset by writing ten thousand sentences per person.

## Authors & Citation

<a href="https://github.com/metinbinbir">Metin Binbir</a></br>
<a href="https://github.com/sercaksoy">Sercan Aksoy</a>

```
Metin Binbir, Sercan Aksoy, Paraphrase Generation for Turkish Language, Computer Project, Advisor: Mehmet Fatih Amasyali, Yildiz Technical University, Computer Engineering Dept., Istanbul, Turkey, 2021.
```
huggingartists/ed-sheeran
huggingartists
2021-09-11T15:43:35Z
4
1
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/ed-sheeran", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/ed-sheeran tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/b501daeff73d1b17610f47a5668f690a.1000x1000x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Ed Sheeran</div> <a href="https://genius.com/artists/ed-sheeran"> <div style="text-align: center; font-size: 14px;">@ed-sheeran</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Ed Sheeran. Dataset is available [here](https://huggingface.co/datasets/huggingartists/ed-sheeran). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/ed-sheeran") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/3nju68bo/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Ed Sheeran's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/3hu7zc76) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/3hu7zc76/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/ed-sheeran') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/ed-sheeran") model = AutoModelWithLMHead.from_pretrained("huggingartists/ed-sheeran") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingartists/aimer
huggingartists
2021-09-11T13:43:46Z
4
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/aimer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/aimer tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/123a0b2ef09a25207b610c5bd7b21d0f.1000x1000x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Aimer</div> <a href="https://genius.com/artists/aimer"> <div style="text-align: center; font-size: 14px;">@aimer</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Aimer. Dataset is available [here](https://huggingface.co/datasets/huggingartists/aimer). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/aimer") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1rtjxc8q/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Aimer's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/2rguugmg) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/2rguugmg/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/aimer') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/aimer") model = AutoModelWithLMHead.from_pretrained("huggingartists/aimer") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingartists/imagine-dragons
huggingartists
2021-09-11T13:36:33Z
6
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/imagine-dragons", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/imagine-dragons tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/ec1df125fd46ec3ef56f228df021a8cd.1000x1000x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Imagine Dragons</div> <a href="https://genius.com/artists/imagine-dragons"> <div style="text-align: center; font-size: 14px;">@imagine-dragons</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Imagine Dragons. Dataset is available [here](https://huggingface.co/datasets/huggingartists/imagine-dragons). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/imagine-dragons") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/dln6ixis/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Imagine Dragons's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/3cj3c8z1) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/3cj3c8z1/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/imagine-dragons') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/imagine-dragons") model = AutoModelWithLMHead.from_pretrained("huggingartists/imagine-dragons") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingartists/sergei-letov
huggingartists
2021-09-11T12:13:08Z
3
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/sergei-letov", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/sergei-letov tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/a5717aec4301e2adfb464d3b85701f74.300x300x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Сергей Летов (Sergei Letov)</div> <a href="https://genius.com/artists/sergei-letov"> <div style="text-align: center; font-size: 14px;">@sergei-letov</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Сергей Летов (Sergei Letov). Dataset is available [here](https://huggingface.co/datasets/huggingartists/sergei-letov). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/sergei-letov") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1chw67j7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Сергей Летов (Sergei Letov)'s lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/my7m2jp6) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/my7m2jp6/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/sergei-letov') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/sergei-letov") model = AutoModelWithLMHead.from_pretrained("huggingartists/sergei-letov") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
airesearch/wangchanberta-base-wiki-20210520-spm-finetune-qa
airesearch
2021-09-11T09:28:19Z
111
0
transformers
[ "transformers", "pytorch", "camembert", "question-answering", "th", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: th widget: - text: "สวนกุหลาบเป็นโรงเรียนอะไร" context: "โรงเรียนสวนกุหลาบวิทยาลัย (Suankularb Wittayalai School) (อักษรย่อ : ส.ก. / S.K.) เป็นโรงเรียนชายล้วน ระดับชั้นมัธยมศึกษาขนาดใหญ่พิเศษ สังกัดสำนักงานเขตพื้นที่การศึกษามัธยมศึกษาเขต 1 สำนักงานคณะกรรมการการศึกษาขั้นพื้นฐาน (ชื่อเดิม: กรมสามัญศึกษา) กระทรวงศึกษาธิการ ก่อตั้งโดย พระบาทสมเด็จพระจุลจอมเกล้าเจ้าอยู่หัว ได้รับการสถาปนาขึ้นในวันที่ 8 มีนาคม พ.ศ. 2424 (ขณะนั้นนับวันที่ 1 เมษายน เป็นวันขึ้นปีใหม่ เมื่อนับอย่างสากลถือเป็น พ.ศ. 2425) โดยเป็นโรงเรียนรัฐบาลแห่งแรกของประเทศไทย" --- # wangchanberta-base-wiki-20210520-spm-finetune-qa Finetuning `airesearchth/wangchanberta-base-wiki-20210520-spmd` with the training set of `iapp_wiki_qa_squad`, `thaiqa_squad`, and `nsc_qa` (removed examples which have cosine similarity with validation and test examples over 0.8; contexts of the latter two are trimmed to be around 300 `newmm` words). Benchmarks shared on [wandb](https://wandb.ai/cstorm125/wangchanberta-qa) using validation and test sets of `iapp_wiki_qa_squad`. Trained with [thai2transformers](https://github.com/vistec-AI/thai2transformers/blob/dev/scripts/downstream/train_question_answering_lm_finetuning.py). Run with: ``` export MODEL_NAME=airesearchth/wangchanberta-base-wiki-20210520-news-spm CUDA_LAUNCH_BLOCKING=1 python train_question_answering_lm_finetuning.py \\n --model_name $MODEL_NAME \\n --dataset_name chimera_qa \\n --output_dir $MODEL_NAME-finetune-chimera_qa-model \\n --log_dir $MODEL_NAME-finetune-chimera_qa-log \\n --model_max_length 400 \\n --pad_on_right \\n --fp16 ```
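The card stops at the fine-tuning command and does not show inference. Below is a minimal sketch, not part of the original card, that runs the checkpoint through the standard `transformers` question-answering pipeline using the widget example from the front matter (the context is shortened here for brevity):

```python
from transformers import pipeline

model_id = "airesearch/wangchanberta-base-wiki-20210520-spm-finetune-qa"
qa = pipeline("question-answering", model=model_id, tokenizer=model_id)

# Question and (shortened) context taken from the widget example above.
question = "สวนกุหลาบเป็นโรงเรียนอะไร"
context = (
    "โรงเรียนสวนกุหลาบวิทยาลัย (Suankularb Wittayalai School) (อักษรย่อ : ส.ก. / S.K.) "
    "เป็นโรงเรียนชายล้วน ระดับชั้นมัธยมศึกษาขนาดใหญ่พิเศษ"
)

result = qa(question=question, context=context)
print(result["answer"], result["score"])
```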
mys/distilbert-base-turkish-cased-clip
mys
2021-09-11T08:56:55Z
118
1
transformers
[ "transformers", "tf", "distilbert", "feature-extraction", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
## Acknowledgement

Google supported this work by providing Google Cloud credit. Thank you, Google, for supporting open source! 🎉

## What is this?

This model is a finetuned version of [dbmdz/distilbert-base-turkish-cased](https://huggingface.co/dbmdz/distilbert-base-turkish-cased) to be used as a text encoder in Turkish with [CLIP](https://github.com/openai/CLIP)'s `ViT-B/32` image encoder. It should be used with `clip_head.h5` from [my accompanying repo on GitHub]. See that repo for a fully working example; a simple usage example is as follows:

```python
from transformers import AutoTokenizer, TFAutoModel
import tensorflow as tf
import numpy as np
from PIL import Image
import torch
import clip

model_name = "mys/distilbert-base-turkish-cased-clip"
base_model = TFAutoModel.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
head_model = tf.keras.models.load_model("./clip_head.h5")


def encode_text(base_model, tokenizer, head_model, texts):
    tokens = tokenizer(texts, padding=True, return_tensors='tf')
    embs = base_model(**tokens)[0]

    attention_masks = tf.cast(tokens['attention_mask'], tf.float32)
    sample_length = tf.reduce_sum(attention_masks, axis=-1, keepdims=True)
    masked_embs = embs * tf.expand_dims(attention_masks, axis=-1)
    base_embs = tf.reduce_sum(masked_embs, axis=1) / tf.cast(sample_length, tf.float32)
    clip_embs = head_model(base_embs)
    clip_embs /= tf.norm(clip_embs, axis=-1, keepdims=True)
    return clip_embs


demo_images = {
    "bilgisayarda çalışan bir insan": "myspc.jpeg",
    "sahilde bir insan ve bir heykel": "mysdk.jpeg"
}

clip_model, preprocess = clip.load("ViT-B/32")
images = {key: Image.open(f"images/{value}") for key, value in demo_images.items()}
img_inputs = torch.stack([preprocess(image).to('cpu') for image in images.values()])
with torch.no_grad():
    image_embs = clip_model.encode_image(img_inputs).float().to('cpu')

image_embs /= image_embs.norm(dim=-1, keepdim=True)
image_embs = image_embs.detach().numpy()
text_embs = encode_text(base_model, tokenizer, head_model, list(images.keys())).numpy()
similarities = image_embs @ text_embs.T
logits = tf.nn.softmax(tf.convert_to_tensor(similarities)).numpy()
idxs = np.argmax(logits, axis=-1).tolist()

for i, (key, value) in enumerate(demo_images.items()):
    print("path: ", value, "true label: ", key, "prediction: ", list(demo_images.keys())[idxs[i]], "score: ", logits[i, idxs[i]])
```

Sample images referred to in the code snippet above can be found under the `images` directory in the GitHub repo.

## How it works

The `encode_text()` function aggregates the per-token hidden states output by the DistilBERT model to produce a single vector per sequence. Then, the `clip_head.h5` model projects this vector onto the same vector space as CLIP's text encoder with a single dense layer. First, all the DistilBERT layers were frozen and only the head dense layer was trained for a few epochs. Then, freezing was removed and the dense layer was trained together with the DistilBERT layers for a few more epochs. I created the dataset by machine-translating COCO captions into Turkish. During training, the vector representations of the English captions produced by the original CLIP text encoder were used as target values, and the MSE between these vectors and the `clip_head.h5` outputs was minimized.

## Dataset

The dataset and the training notebook will be released soon.
Narrativa/spanish-gpt2-finetuned-rap-lyrics
Narrativa
2021-09-11T08:46:33Z
11
5
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "GPT-2", "Rap", "Lyrics", "Songs", "es", "dataset:large_spanish_corpus", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
---
language: es
tags:
- GPT-2
- Rap
- Lyrics
- Songs
datasets:
- large_spanish_corpus
widget:
- text: "Déjame contarte lo importante que es buscarte un plan\nNo para golpearles o ganarles, sino para darles paz\n"
license: mit
---

# Spanish GPT-2 trained on [Spanish RAP Lyrics](https://www.kaggle.com/smunoz3801/9325-letras-de-rap-en-espaol)

Created by: [Narrativa](https://www.narrativa.com/)

About Narrativa: Natural Language Generation (NLG) | Gabriele, our machine learning-based platform, builds and deploys natural language solutions. #NLG #AI
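The card gives no usage snippet; here is a minimal sketch (not from the original card) with the standard text-generation pipeline, seeded with the first line of the widget prompt and illustrative sampling settings:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Narrativa/spanish-gpt2-finetuned-rap-lyrics")

# First line of the widget prompt from the front matter above.
prompt = "Déjame contarte lo importante que es buscarte un plan\n"
for out in generator(prompt, max_length=100, do_sample=True, top_p=0.92, num_return_sequences=3):
    print(out["generated_text"])
    print("---")
```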
huggingartists/dababy
huggingartists
2021-09-11T08:01:28Z
5
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/dababy", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/dababy tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/b68b0e6ba289b80529dc0194cdb7d00d.639x640x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">DaBaby</div> <a href="https://genius.com/artists/dababy"> <div style="text-align: center; font-size: 14px;">@dababy</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from DaBaby. Dataset is available [here](https://huggingface.co/datasets/huggingartists/dababy). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/dababy") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/qnkumvdw/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on DaBaby's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/24o367up) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/24o367up/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/dababy') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/dababy") model = AutoModelWithLMHead.from_pretrained("huggingartists/dababy") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingartists/rocket
huggingartists
2021-09-11T07:31:59Z
4
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/rocket", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/rocket tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/0fb709925134799103886db5e722ef73.1000x1000x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">ROCKET</div> <a href="https://genius.com/artists/rocket"> <div style="text-align: center; font-size: 14px;">@rocket</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from ROCKET. Dataset is available [here](https://huggingface.co/datasets/huggingartists/rocket). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/rocket") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/3ceqmb05/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on ROCKET's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/37kckftd) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/37kckftd/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/rocket') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/rocket") model = AutoModelWithLMHead.from_pretrained("huggingartists/rocket") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingartists/the-the-pigs
huggingartists
2021-09-11T06:35:36Z
3
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/the-the-pigs", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/the-the-pigs tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/2f1fd1b951237ad3387096f392d41fa5.720x720x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">The ‘’Вепри’’ (The Pigs)</div> <a href="https://genius.com/artists/the-the-pigs"> <div style="text-align: center; font-size: 14px;">@the-the-pigs</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from The ‘’Вепри’’ (The Pigs). Dataset is available [here](https://huggingface.co/datasets/huggingartists/the-the-pigs). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/the-the-pigs") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/7yh65db9/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on The ‘’Вепри’’ (The Pigs)'s lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/65gj1lk1) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/65gj1lk1/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/the-the-pigs') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/the-the-pigs") model = AutoModelWithLMHead.from_pretrained("huggingartists/the-the-pigs") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
hwaQing/distilbert-base-uncased-finetuned-mrpc-test
hwaQing
2021-09-11T04:10:39Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-mrpc
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: glue
      type: glue
      args: mrpc
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.7034313725490197
    - name: F1
      type: f1
      value: 0.8207407407407408
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-mrpc

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5708
- Accuracy: 0.7034
- F1: 0.8207

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 58   | 0.5708          | 0.7034   | 0.8207 |

### Framework versions

- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
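Since the usage sections above are placeholders, here is a hedged usage sketch (not part of the generated card). MRPC is a sentence-pair paraphrase task, so the two sentences are encoded together; the example pair is invented for illustration, and the label mapping follows the usual GLUE convention (index 1 = paraphrase):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "hwaQing/distilbert-base-uncased-finetuned-mrpc-test"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Encode the two sentences as a single pair, as MRPC expects.
sentence1 = "The company reported strong quarterly earnings."
sentence2 = "Quarterly earnings at the company were strong."
inputs = tokenizer(sentence1, sentence2, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)
print(f"P(paraphrase) = {probs[0, 1].item():.3f}")  # index 1 = 'equivalent' in GLUE/MRPC
```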
megagonlabs/bimeanvae-amzn
megagonlabs
2021-09-11T00:10:54Z
14
0
transformers
[ "transformers", "pytorch", "summarization", "en", "license:bsd-3-clause", "region:us" ]
summarization
2022-03-02T23:29:05Z
---
language: en
tags:
- summarization
inference: false
license: bsd-3-clause
---

## BiMeanVAE model

See the original GitHub repo [here](https://github.com/megagonlabs/coop) for more details.
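The card only points at the repo. Below is a rough sketch of how this checkpoint is typically driven through the `coop` package, reconstructed from memory of that repo's README; the `VAE`, `util.powerset`, and `util.input_output_overlap` names are assumptions that should be verified against the repo before use.

```python
# Sketch only: the coop API names used here are assumed from the megagonlabs/coop
# README and should be checked against the repo; they are not documented in this card.
from typing import List
import torch
from coop import VAE, util

model = VAE("megagonlabs/bimeanvae-amzn")
reviews: List[str] = [
    "Great battery life and a sharp screen.",
    "The screen looks great, though the battery could last longer.",
]

z = model.encode(reviews)                    # one latent vector per input review
idxes = util.powerset(len(reviews))          # candidate subsets of reviews to average
candidates = model.generate(torch.stack([z[idx].mean(dim=0) for idx in idxes]))
# Pick the candidate summary with the highest input-output overlap (the COOP criterion).
summary = max(candidates, key=lambda c: util.input_output_overlap(inputs=reviews, output=c))
print(summary)
```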
huggingartists/travis-scott
huggingartists
2021-09-10T19:40:02Z
5
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/travis-scott", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/travis-scott tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/5d19fecdb3828ca9ec89dda588e2eb7d.1000x1000x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Travis Scott</div> <a href="https://genius.com/artists/travis-scott"> <div style="text-align: center; font-size: 14px;">@travis-scott</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Travis Scott. Dataset is available [here](https://huggingface.co/datasets/huggingartists/travis-scott). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/travis-scott") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1ezlbvd0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Travis Scott's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/2w91gglb) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/2w91gglb/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/travis-scott') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/travis-scott") model = AutoModelWithLMHead.from_pretrained("huggingartists/travis-scott") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingartists/xxxtentacion
huggingartists
2021-09-10T19:22:45Z
4
2
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/xxxtentacion", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/xxxtentacion tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/f72572986d8187cf35f0fc9f9d06afb2.900x900x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">XXXTENTACION</div> <a href="https://genius.com/artists/xxxtentacion"> <div style="text-align: center; font-size: 14px;">@xxxtentacion</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from XXXTENTACION. Dataset is available [here](https://huggingface.co/datasets/huggingartists/xxxtentacion). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/xxxtentacion") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/12xi0jh5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on XXXTENTACION's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/2l2qvy4j) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/2l2qvy4j/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/xxxtentacion') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/xxxtentacion") model = AutoModelWithLMHead.from_pretrained("huggingartists/xxxtentacion") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingartists/pyrokinesis
huggingartists
2021-09-10T16:27:05Z
4
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/pyrokinesis", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/pyrokinesis tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/e701c222dfb8725065dd99c8a43988da.1000x1000x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">​​pyrokinesis</div> <a href="https://genius.com/artists/pyrokinesis"> <div style="text-align: center; font-size: 14px;">@pyrokinesis</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from ​​pyrokinesis. Dataset is available [here](https://huggingface.co/datasets/huggingartists/pyrokinesis). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/pyrokinesis") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1s8696f3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on ​​pyrokinesis's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/22hm2utc) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/22hm2utc/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/pyrokinesis') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/pyrokinesis") model = AutoModelWithLMHead.from_pretrained("huggingartists/pyrokinesis") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingartists/aaron-watson
huggingartists
2021-09-10T15:49:57Z
6
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/aaron-watson", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/aaron-watson tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/894021d09a748eef8c6d63ad898b814b.650x430x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Aaron Watson</div> <a href="https://genius.com/artists/aaron-watson"> <div style="text-align: center; font-size: 14px;">@aaron-watson</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Aaron Watson. Dataset is available [here](https://huggingface.co/datasets/huggingartists/aaron-watson). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/aaron-watson") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/14ha1tnc/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Aaron Watson's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/34e4zb2v) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/34e4zb2v/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/aaron-watson') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/aaron-watson") model = AutoModelWithLMHead.from_pretrained("huggingartists/aaron-watson") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingartists/burzum
huggingartists
2021-09-10T13:30:58Z
5
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/burzum", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/burzum tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/62edc981d303447265d23a3862abce43.589x589x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Burzum</div> <a href="https://genius.com/artists/burzum"> <div style="text-align: center; font-size: 14px;">@burzum</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Burzum. Dataset is available [here](https://huggingface.co/datasets/huggingartists/burzum). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/burzum") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/j34qgww2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Burzum's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/3579mrib) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/3579mrib/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/burzum') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/burzum") model = AutoModelWithLMHead.from_pretrained("huggingartists/burzum") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingartists/25-17
huggingartists
2021-09-10T12:55:59Z
5
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/25-17", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/25-17 tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/4fedc5dd2830a874a5274bf1cac62002.1000x1000x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">25/17</div> <a href="https://genius.com/artists/25-17"> <div style="text-align: center; font-size: 14px;">@25-17</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from 25/17. Dataset is available [here](https://huggingface.co/datasets/huggingartists/25-17). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/25-17") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1iuytbjp/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on 25/17's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/knv4l4gw) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/knv4l4gw/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/25-17') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/25-17") model = AutoModelWithLMHead.from_pretrained("huggingartists/25-17") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingtweets/freakytheory-insprepositive-masterythink
huggingtweets
2021-09-10T12:25:07Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/freakytheory-insprepositive-masterythink/1631276702724/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1155938695662505984/H3RmD4Fq_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/861903051669610496/dvuuio0A_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1362638938549018626/O2jBlckS_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Inspiring Quotes - Be Positive & Motivation & Motivation & Success</div> <div style="text-align: center; font-size: 14px;">@freakytheory-insprepositive-masterythink</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Inspiring Quotes - Be Positive & Motivation & Motivation & Success. | Data | Inspiring Quotes - Be Positive | Motivation | Motivation & Success | | --- | --- | --- | --- | | Tweets downloaded | 3250 | 3233 | 706 | | Retweets | 789 | 13 | 4 | | Short tweets | 2 | 10 | 14 | | Tweets kept | 2459 | 3210 | 688 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3aupxbxm/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @freakytheory-insprepositive-masterythink's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/p03go3pp) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/p03go3pp/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/freakytheory-insprepositive-masterythink') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingartists/john-lennon
huggingartists
2021-09-10T10:37:44Z
3
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/john-lennon", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/john-lennon tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/de14b272004b51dea8071e7cba21cbac.1000x1000x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">John Lennon</div> <a href="https://genius.com/artists/john-lennon"> <div style="text-align: center; font-size: 14px;">@john-lennon</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from John Lennon. Dataset is available [here](https://huggingface.co/datasets/huggingartists/john-lennon). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/john-lennon") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/f3d8fseh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on John Lennon's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/36mtogkg) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/36mtogkg/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/john-lennon') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/john-lennon") model = AutoModelWithLMHead.from_pretrained("huggingartists/john-lennon") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingartists/kipelov
huggingartists
2021-09-10T08:40:56Z
3
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/kipelov", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/kipelov tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/d4ae6ad73ca63bc97b2a10dfefc47b63.1000x1000x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Кипелов (Kipelov)</div> <a href="https://genius.com/artists/kipelov"> <div style="text-align: center; font-size: 14px;">@kipelov</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Кипелов (Kipelov). Dataset is available [here](https://huggingface.co/datasets/huggingartists/kipelov). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/kipelov") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/225m5y65/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Кипелов (Kipelov)'s lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/38es269x) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/38es269x/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/kipelov') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/kipelov") model = AutoModelWithLMHead.from_pretrained("huggingartists/kipelov") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingartists/vladimir-vysotsky
huggingartists
2021-09-10T07:47:12Z
4
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/vladimir-vysotsky", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/vladimir-vysotsky tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/18735fe10bace7b3f615b2da9c95ac73.938x938x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Владимир Высоцкий (Vladimir Vysotsky)</div> <a href="https://genius.com/artists/vladimir-vysotsky"> <div style="text-align: center; font-size: 14px;">@vladimir-vysotsky</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Владимир Высоцкий (Vladimir Vysotsky). Dataset is available [here](https://huggingface.co/datasets/huggingartists/vladimir-vysotsky). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/vladimir-vysotsky") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1w1qc649/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Владимир Высоцкий (Vladimir Vysotsky)'s lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1inrl5qe) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1inrl5qe/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/vladimir-vysotsky') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/vladimir-vysotsky") model = AutoModelWithLMHead.from_pretrained("huggingartists/vladimir-vysotsky") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingartists/mf-doom
huggingartists
2021-09-10T07:07:44Z
4
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/mf-doom", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/mf-doom tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/263743633b6e58854e753b25dca6beab.430x430x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">MF DOOM</div> <a href="https://genius.com/artists/mf-doom"> <div style="text-align: center; font-size: 14px;">@mf-doom</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from MF DOOM. Dataset is available [here](https://huggingface.co/datasets/huggingartists/mf-doom). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/mf-doom") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/3lhrsfds/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on MF DOOM's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/vw48qbeh) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/vw48qbeh/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/mf-doom') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/mf-doom") model = AutoModelWithLMHead.from_pretrained("huggingartists/mf-doom") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
Aleksandar/distilbert-srb-ner
Aleksandar
2021-09-09T06:27:16Z
14
0
transformers
[ "transformers", "pytorch", "distilbert", "token-classification", "generated_from_trainer", "sr", "dataset:wikiann", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:04Z
--- tags: - generated_from_trainer datasets: - wikiann metrics: - precision - recall - f1 - accuracy language: - sr model_index: - name: distilbert-srb-ner results: - task: name: Token Classification type: token-classification dataset: name: wikiann type: wikiann args: sr metric: name: Accuracy type: accuracy value: 0.9576561462374611 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-srb-ner This model was trained from scratch on the wikiann dataset. It achieves the following results on the evaluation set: - Loss: 0.2972 - Precision: 0.8871 - Recall: 0.9100 - F1: 0.8984 - Accuracy: 0.9577 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.3818 | 1.0 | 625 | 0.2175 | 0.8175 | 0.8370 | 0.8272 | 0.9306 | | 0.198 | 2.0 | 1250 | 0.1766 | 0.8551 | 0.8732 | 0.8640 | 0.9458 | | 0.1423 | 3.0 | 1875 | 0.1702 | 0.8597 | 0.8763 | 0.8679 | 0.9473 | | 0.079 | 4.0 | 2500 | 0.1774 | 0.8674 | 0.8875 | 0.8773 | 0.9515 | | 0.0531 | 5.0 | 3125 | 0.2011 | 0.8688 | 0.8965 | 0.8825 | 0.9522 | | 0.0429 | 6.0 | 3750 | 0.2082 | 0.8769 | 0.8970 | 0.8868 | 0.9538 | | 0.032 | 7.0 | 4375 | 0.2268 | 0.8764 | 0.8916 | 0.8839 | 0.9528 | | 0.0204 | 8.0 | 5000 | 0.2423 | 0.8726 | 0.8959 | 0.8841 | 0.9529 | | 0.0148 | 9.0 | 5625 | 0.2522 | 0.8774 | 0.8991 | 0.8881 | 0.9538 | | 0.0125 | 10.0 | 6250 | 0.2544 | 0.8823 | 0.9024 | 0.8922 | 0.9559 | | 0.0108 | 11.0 | 6875 | 0.2592 | 0.8780 | 0.9041 | 0.8909 | 0.9553 | | 0.007 | 12.0 | 7500 | 0.2672 | 0.8877 | 0.9056 | 0.8965 | 0.9571 | | 0.0048 | 13.0 | 8125 | 0.2714 | 0.8879 | 0.9089 | 0.8982 | 0.9583 | | 0.0049 | 14.0 | 8750 | 0.2872 | 0.8873 | 0.9068 | 0.8970 | 0.9573 | | 0.0034 | 15.0 | 9375 | 0.2915 | 0.8883 | 0.9114 | 0.8997 | 0.9577 | | 0.0027 | 16.0 | 10000 | 0.2890 | 0.8865 | 0.9103 | 0.8983 | 0.9581 | | 0.0028 | 17.0 | 10625 | 0.2885 | 0.8877 | 0.9085 | 0.8980 | 0.9576 | | 0.0014 | 18.0 | 11250 | 0.2928 | 0.8860 | 0.9073 | 0.8965 | 0.9577 | | 0.0013 | 19.0 | 11875 | 0.2963 | 0.8856 | 0.9099 | 0.8976 | 0.9576 | | 0.001 | 20.0 | 12500 | 0.2972 | 0.8871 | 0.9100 | 0.8984 | 0.9577 | ### Framework versions - Transformers 4.9.2 - Pytorch 1.9.0 - Datasets 1.11.0 - Tokenizers 0.10.1
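A minimal usage sketch for this checkpoint, assuming the standard `transformers` token-classification pipeline; the Serbian example sentence and the aggregation setting are illustrative choices, not taken from the training setup above:

```python
from transformers import pipeline

# Sketch: load the fine-tuned DistilBERT NER checkpoint from the Hub.
ner = pipeline(
    "token-classification",
    model="Aleksandar/distilbert-srb-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)

# Illustrative Serbian sentence (not from the training data).
print(ner("Novak Đoković je rođen u Beogradu."))
```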
eugenesiow/han
eugenesiow
2021-09-09T01:59:04Z
150
0
transformers
[ "transformers", "HAN", "super-image", "image-super-resolution", "dataset:eugenesiow/Div2k", "dataset:eugenesiow/Set5", "dataset:eugenesiow/Set14", "dataset:eugenesiow/BSD100", "dataset:eugenesiow/Urban100", "arxiv:2008.08767", "arxiv:2104.07566", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - super-image - image-super-resolution datasets: - eugenesiow/Div2k - eugenesiow/Set5 - eugenesiow/Set14 - eugenesiow/BSD100 - eugenesiow/Urban100 metrics: - pnsr - ssim --- # Holistic Attention Network (HAN) HAN model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper [Single Image Super-Resolution via a Holistic Attention Network](https://arxiv.org/abs/2008.08767) by Niu et al. (2020) and first released in [this repository](https://github.com/wwlCape/HAN). The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling and model upscaling. ![Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 4](images/han_4_4_compare.png "Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 4") ## Model description Informative features play a crucial role in the single image super-resolution task. Channel attention has been demonstrated to be effective for preserving information-rich features in each layer. However, channel attention treats each convolution layer as a separate process that misses the correlation among different layers. To address this problem, we propose a new holistic attention network (HAN), which consists of a layer attention module (LAM) and a channel-spatial attention module (CSAM), to model the holistic interdependencies among layers, channels, and positions. Specifically, the proposed LAM adaptively emphasizes hierarchical features by considering correlations among layers. Meanwhile, CSAM learns the confidence at all the positions of each channel to selectively capture more informative features. Extensive experiments demonstrate that the proposed HAN performs favorably against the state-of-the-art single image super- resolution approaches. ## Intended uses & limitations You can use the pre-trained models for upscaling your images 2x, 3x and 4x. You can also use the trainer to train a model on your own dataset. 
### How to use The model can be used with the [super_image](https://github.com/eugenesiow/super-image) library: ```bash pip install super-image ``` Here is how to use a pre-trained model to upscale your image: ```python from super_image import HanModel, ImageLoader from PIL import Image import requests url = 'https://paperswithcode.com/media/datasets/Set5-0000002728-07a9793f_zA3bDjj.jpg' image = Image.open(requests.get(url, stream=True).raw) model = HanModel.from_pretrained('eugenesiow/han', scale=2) # scale 2, 3 and 4 models available inputs = ImageLoader.load_image(image) preds = model(inputs) ImageLoader.save_image(preds, './scaled_2x.png') # save the output 2x scaled image to `./scaled_2x.png` ImageLoader.save_compare(inputs, preds, './scaled_2x_compare.png') # save an output comparing the super-image with a bicubic scaling ``` [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Upscale_Images_with_Pretrained_super_image_Models.ipynb "Open in Colab") ## Training data The models for 2x, 3x and 4x image super resolution were pretrained on [DIV2K](https://huggingface.co/datasets/eugenesiow/Div2k), a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900). ## Training procedure ### Preprocessing We follow the pre-processing and training method of [Wang et al.](https://arxiv.org/abs/2104.07566). Low Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times. During training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches. Data augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image. We need the huggingface [datasets](https://huggingface.co/datasets?filter=task_ids:other-other-image-super-resolution) library to download the data: ```bash pip install datasets ``` The following code gets the data and preprocesses/augments the data. ```python from datasets import load_dataset from super_image.data import EvalDataset, TrainDataset, augment_five_crop augmented_dataset = load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='train')\ .map(augment_five_crop, batched=True, desc="Augmenting Dataset") # download and augment the data with the five_crop method train_dataset = TrainDataset(augmented_dataset) # prepare the train dataset for loading PyTorch DataLoader eval_dataset = EvalDataset(load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='validation')) # prepare the eval dataset for the PyTorch DataLoader ``` ### Pretraining The model was trained on GPU. 
The training code is provided below: ```python from super_image import Trainer, TrainingArguments, HanModel, HanConfig training_args = TrainingArguments( output_dir='./results', # output directory num_train_epochs=1000, # total number of training epochs ) config = HanConfig( scale=4, # train a model to upscale 4x ) model = HanModel(config) trainer = Trainer( model=model, # the instantiated model to be trained args=training_args, # training arguments, defined above train_dataset=train_dataset, # training dataset eval_dataset=eval_dataset # evaluation dataset ) trainer.train() ``` [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Train_super_image_Models.ipynb "Open in Colab") ## Evaluation results The evaluation metrics include [PSNR](https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio#Quality_estimation_with_PSNR) and [SSIM](https://en.wikipedia.org/wiki/Structural_similarity#Algorithm). Evaluation datasets include: - Set5 - [Bevilacqua et al. (2012)](https://huggingface.co/datasets/eugenesiow/Set5) - Set14 - [Zeyde et al. (2010)](https://huggingface.co/datasets/eugenesiow/Set14) - BSD100 - [Martin et al. (2001)](https://huggingface.co/datasets/eugenesiow/BSD100) - Urban100 - [Huang et al. (2015)](https://huggingface.co/datasets/eugenesiow/Urban100) The results columns below are represented below as `PSNR/SSIM`. They are compared against a Bicubic baseline. |Dataset |Scale |Bicubic |han | |--- |--- |--- |--- | |Set5 |2x |33.64/0.9292 |**** | |Set5 |3x |30.39/0.8678 |**** | |Set5 |4x |28.42/0.8101 |**31.21/0.8778** | |Set14 |2x |30.22/0.8683 |**** | |Set14 |3x |27.53/0.7737 |**** | |Set14 |4x |25.99/0.7023 |**28.18/0.7712** | |BSD100 |2x |29.55/0.8425 |**** | |BSD100 |3x |27.20/0.7382 |**** | |BSD100 |4x |25.96/0.6672 |**28.09/0.7533** | |Urban100 |2x |26.66/0.8408 |**** | |Urban100 |3x | |**** | |Urban100 |4x |23.14/0.6573 |**25.1/0.7497** | ![Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2](images/han_2_4_compare.png "Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2") You can find a notebook to easily run evaluation on pretrained models below: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Evaluate_Pretrained_super_image_Models.ipynb "Open in Colab") ## BibTeX entry and citation info ```bibtex @misc{niu2020single, title={Single Image Super-Resolution via a Holistic Attention Network}, author={Ben Niu and Weilei Wen and Wenqi Ren and Xiangde Zhang and Lianping Yang and Shuzhen Wang and Kaihao Zhang and Xiaochun Cao and Haifeng Shen}, year={2020}, eprint={2008.08767}, archivePrefix={arXiv}, primaryClass={eess.IV} } ```
elisno/is_core_web_trf
elisno
2021-09-08T21:19:54Z
4
0
spacy
[ "spacy", "token-classification", "is", "model-index", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- tags: - spacy - token-classification language: - is model-index: - name: is_core_web_trf results: - task: name: NER type: token-classification metrics: - name: NER Precision type: precision value: 0.9193318395 - name: NER Recall type: recall value: 0.9217728758 - name: NER F Score type: f_score value: 0.9205507394 --- | Feature | Description | | --- | --- | | **Name** | `is_core_web_trf` | | **Version** | `0.0.0` | | **spaCy** | `>=3.1.1,<3.2.0` | | **Default Pipeline** | `transformer`, `ner`, `tagger`, `parser` | | **Components** | `transformer`, `ner`, `tagger`, `parser` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | n/a | | **License** | n/a | | **Author** | [n/a]() | ### Label Scheme <details> <summary>View label scheme (591 labels for 3 components)</summary> | Component | Labels | | --- | --- | | **`ner`** | `Date`, `Location`, `Miscellaneous`, `Money`, `Organization`, `Percent`, `Person`, `Time` | | **`tagger`** | `aa`, `aae`, `aam`, `af`, `afe`, `afm`, `au`, `c`, `cn`, `ct`, `e`, `fahee`, `fahen`, `faheo`, `faheþ`, `fahfe`, `fahfn`, `fahfo`, `fahfþ`, `fakee`, `faken`, `fakeo`, `fakeþ`, `fakfe`, `fakfn`, `fakfo`, `fakfþ`, `favee`, `faven`, `faveo`, `faveþ`, `favfe`, `favfn`, `favfo`, `favfþ`, `fbhee`, `fbhen`, `fbheo`, `fbheþ`, `fbhfe`, `fbhfn`, `fbhfo`, `fbhfþ`, `fbkee`, `fbken`, `fbkeo`, `fbkeþ`, `fbkfe`, `fbkfn`, `fbkfo`, `fbkfþ`, `fbvee`, `fbven`, `fbveo`, `fbveþ`, `fbvfe`, `fbvfn`, `fbvfo`, `fbvfþ`, `fehee`, `fehen`, `feheo`, `feheþ`, `fehfe`, `fehfn`, `fehfo`, `fehfþ`, `fekee`, `feken`, `fekeo`, `fekeþ`, `fekfe`, `fekfn`, `fekfo`, `fekfþ`, `fevee`, `feven`, `feveo`, `feveþ`, `fevfe`, `fevfn`, `fevfo`, `fevfþ`, `fohee`, `fohen`, `foheo`, `foheþ`, `fohfe`, `fohfn`, `fohfo`, `fohfþ`, `fokee`, `foken`, `fokeo`, `fokeþ`, `fokfe`, `fokfn`, `fokfo`, `fokfþ`, `fovee`, `foven`, `foveo`, `foveþ`, `fovfe`, `fovfn`, `fovfo`, `fovfþ`, `fp1ee`, `fp1en`, `fp1eo`, `fp1eþ`, `fp1fe`, `fp1fn`, `fp1fo`, `fp1fþ`, `fp2ee`, `fp2en`, `fp2eo`, `fp2eþ`, `fp2fe`, `fp2fn`, `fp2fo`, `fp2fþ`, `fphee`, `fphen`, `fpheo`, `fpheþ`, `fphfe`, `fphfn`, `fphfo`, `fphfþ`, `fpkee`, `fpken`, `fpkeo`, `fpkeþ`, `fpkfe`, `fpkfn`, `fpkfo`, `fpkfþ`, `fpvee`, `fpven`, `fpveo`, `fpveþ`, `fpvfe`, `fpvfn`, `fpvfo`, `fpvfþ`, `fshee`, `fshen`, `fsheo`, `fsheþ`, `fshfe`, `fshfn`, `fshfo`, `fshfþ`, `fskee`, `fsken`, `fskeo`, `fskeþ`, `fskfe`, `fskfn`, `fskfo`, `fskfþ`, `fsvee`, `fsven`, `fsveo`, `fsveþ`, `fsvfe`, `fsvfn`, `fsvfo`, `fsvfþ`, `ghee`, `ghen`, `gheo`, `gheþ`, `ghfe`, `ghfn`, `ghfo`, `ghfþ`, `gkee`, `gken`, `gkeo`, `gkeþ`, `gkfe`, `gkfn`, `gkfo`, `gkfþ`, `gvee`, `gven`, `gveo`, `gveþ`, `gvfe`, `gvfn`, `gvfo`, `gvfþ`, `ks`, `kt`, `lheeof`, `lheesf`, `lheeve`, `lheevf`, `lheevm`, `lhenof`, `lhense`, `lhensf`, `lhenve`, `lhenvf`, `lhenvm`, `lheoof`, `lheose`, `lheosf`, `lheosm`, `lheove`, `lheovf`, `lheovm`, `lheþof`, `lheþse`, `lheþsf`, `lheþve`, `lheþvf`, `lheþvm`, `lhfeof`, `lhfese`, `lhfesf`, `lhfeve`, `lhfevf`, `lhfevm`, `lhfnof`, `lhfnse`, `lhfnsf`, `lhfnve`, `lhfnvf`, `lhfnvm`, `lhfoof`, `lhfose`, `lhfosf`, `lhfove`, `lhfovf`, `lhfovm`, `lhfþof`, `lhfþse`, `lhfþsf`, `lhfþve`, `lhfþvf`, `lhfþvm`, `lkeeof`, `lkeesf`, `lkeeve`, `lkeevf`, `lkeevm`, `lkenof`, `lkense`, `lkensf`, `lkenve`, `lkenvf`, `lkenvm`, `lkeoof`, `lkeose`, `lkeosf`, `lkeove`, `lkeovf`, `lkeovm`, `lkeþof`, `lkeþse`, `lkeþsf`, `lkeþve`, `lkeþvf`, `lkeþvm`, `lkfeof`, `lkfese`, `lkfesf`, `lkfeve`, `lkfevf`, `lkfevm`, `lkfnof`, `lkfnse`, `lkfnsf`, `lkfnve`, `lkfnvf`, `lkfnvm`, `lkfoof`, `lkfose`, `lkfosf`, `lkfove`, 
`lkfovf`, `lkfovm`, `lkfþof`, `lkfþse`, `lkfþsf`, `lkfþsm`, `lkfþve`, `lkfþvf`, `lkfþvm`, `lveeof`, `lveese`, `lveesf`, `lveeve`, `lveevf`, `lveevm`, `lvenof`, `lvense`, `lvensf`, `lvenve`, `lvenvf`, `lvenvm`, `lveoof`, `lveose`, `lveosf`, `lveove`, `lveovf`, `lveovm`, `lveþof`, `lveþse`, `lveþsf`, `lveþve`, `lveþvf`, `lveþvm`, `lvfeof`, `lvfese`, `lvfesf`, `lvfeve`, `lvfevf`, `lvfevm`, `lvfnof`, `lvfnse`, `lvfnsf`, `lvfnve`, `lvfnvf`, `lvfnvm`, `lvfoof`, `lvfose`, `lvfosf`, `lvfove`, `lvfovf`, `lvfovm`, `lvfþof`, `lvfþse`, `lvfþsf`, `lvfþsm`, `lvfþve`, `lvfþvf`, `lvfþvm`, `m`, `n----s`, `n-ee`, `n-ee-s`, `n-en`, `n-en-s`, `n-eng`, `n-eo`, `n-eo-s`, `n-eþ`, `n-eþ-s`, `n-fn`, `nhee`, `nhee-s`, `nheeg`, `nheegs`, `nhen`, `nhen-s`, `nheng`, `nhengs`, `nheo`, `nheo-s`, `nheog`, `nheogs`, `nheþ`, `nheþ-s`, `nheþg`, `nheþgs`, `nhfe`, `nhfe-s`, `nhfeg`, `nhfegs`, `nhfn`, `nhfn-s`, `nhfng`, `nhfngs`, `nhfo`, `nhfo-s`, `nhfog`, `nhfogs`, `nhfþ`, `nhfþ-s`, `nhfþg`, `nhfþgs`, `nkee`, `nkee-s`, `nkeeg`, `nkeegs`, `nken`, `nken-s`, `nkeng`, `nkengs`, `nkeo`, `nkeo-s`, `nkeog`, `nkeogs`, `nkeþ`, `nkeþ-s`, `nkeþg`, `nkeþgs`, `nkfe`, `nkfe-s`, `nkfeg`, `nkfegs`, `nkfn`, `nkfn-s`, `nkfng`, `nkfngs`, `nkfo`, `nkfo-s`, `nkfog`, `nkfogs`, `nkfþ`, `nkfþ-s`, `nkfþg`, `nkfþgs`, `nvee`, `nvee-s`, `nveeg`, `nveegs`, `nven`, `nven-s`, `nveng`, `nvengs`, `nveo`, `nveo-s`, `nveog`, `nveogs`, `nveþ`, `nveþ-s`, `nveþg`, `nveþgs`, `nvfe`, `nvfe-s`, `nvfeg`, `nvfegs`, `nvfn`, `nvfn-s`, `nvfng`, `nvfngs`, `nvfo`, `nvfo-s`, `nvfog`, `nvfogs`, `nvfþ`, `nvfþ-s`, `nvfþg`, `nvfþgs`, `pa`, `pg`, `pk`, `pl`, `sbg2en`, `sbg2fn`, `sbm2en`, `sbm2fn`, `sfg1en`, `sfg1eþ`, `sfg1fn`, `sfg1fþ`, `sfg2en`, `sfg2eþ`, `sfg2fn`, `sfg2fþ`, `sfg3en`, `sfg3eþ`, `sfg3fn`, `sfg3fþ`, `sfm1en`, `sfm1eþ`, `sfm1fn`, `sfm1fþ`, `sfm2en`, `sfm2eþ`, `sfm2fn`, `sfm2fþ`, `sfm3en`, `sfm3eþ`, `sfm3fn`, `sfm3fþ`, `slg`, `sng`, `snm`, `svg1en`, `svg1eþ`, `svg1fn`, `svg1fþ`, `svg2en`, `svg2eþ`, `svg2fn`, `svg2fþ`, `svg3en`, `svg3eþ`, `svg3fn`, `svg3fþ`, `svm1en`, `svm1eþ`, `svm1fn`, `svm1fþ`, `svm2en`, `svm2eþ`, `svm2fn`, `svm3en`, `svm3eþ`, `svm3fn`, `svm3fþ`, `sþghen`, `sþgheo`, `sþghfn`, `sþghfo`, `sþgken`, `sþgkeo`, `sþgkfn`, `sþgkfo`, `sþgven`, `sþgveo`, `sþgvfn`, `sþgvfo`, `sþgvfþ`, `sþmhen`, `sþmheo`, `sþmken`, `sþmven`, `ta`, `tfhee`, `tfhen`, `tfheo`, `tfheþ`, `tfhfe`, `tfhfn`, `tfhfo`, `tfhfþ`, `tfkee`, `tfken`, `tfkeo`, `tfkeþ`, `tfkfe`, `tfkfn`, `tfkfo`, `tfkfþ`, `tfvee`, `tfven`, `tfveo`, `tfveþ`, `tfvfe`, `tfvfn`, `tfvfo`, `tfvfþ`, `to`, `tp`, `v`, `x` | | **`parser`** | `ROOT`, `acl`, `acl:relcl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `case`, `cc`, `ccomp`, `compound:prt`, `conj`, `cop`, `dep`, `det`, `fixed`, `flat:name`, `mark`, `nmod`, `nmod:poss`, `nsubj`, `nummod`, `obj`, `obl`, `obl:arg`, `parataxis`, `punct`, `xcomp` | </details> ### Accuracy | Type | Score | | --- | --- | | `ENTS_F` | 92.06 | | `ENTS_P` | 91.93 | | `ENTS_R` | 92.18 | | `TRANSFORMER_LOSS` | 248325.98 | | `NER_LOSS` | 120059.07 |
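A minimal usage sketch, assuming the pipeline package from this repository has already been installed so that `spacy.load` can find it; the Icelandic example sentence is illustrative:

```python
import spacy

# Assumes the is_core_web_trf package has been installed from this repository;
# the exact install step is not documented on the card.
nlp = spacy.load("is_core_web_trf")

# Illustrative Icelandic sentence.
doc = nlp("Halldór Laxness fæddist í Reykjavík árið 1902.")
print([(ent.text, ent.label_) for ent in doc.ents])   # ner component
print([(tok.text, tok.tag_) for tok in doc])          # fine-grained tags
```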
Ashkanmh/bert-base-parsbert-uncased-finetuned
Ashkanmh
2021-09-08T20:56:15Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- tags: - generated_from_trainer model-index: - name: bert-base-parsbert-uncased-finetuned results: - task: name: Masked Language Modeling type: fill-mask --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-parsbert-uncased-finetuned This model is a fine-tuned version of [HooshvareLab/bert-base-parsbert-uncased](https://huggingface.co/HooshvareLab/bert-base-parsbert-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.2045 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.5596 | 1.0 | 515 | 3.2097 | ### Framework versions - Transformers 4.10.0 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
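A minimal usage sketch with the fill-mask pipeline; the Persian example sentence is illustrative and not taken from the (unknown) training data:

```python
from transformers import pipeline

# Sketch: masked-token prediction with the fine-tuned ParsBERT checkpoint.
unmasker = pipeline("fill-mask", model="Ashkanmh/bert-base-parsbert-uncased-finetuned")

# Illustrative Persian sentence containing a [MASK] token.
for prediction in unmasker("من به [MASK] علاقه دارم."):
    print(prediction["token_str"], round(prediction["score"], 3))
```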
huggingtweets/brad_buchsbaum
huggingtweets
2021-09-08T19:43:10Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1393736501838721031/DCd35uGN_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">bbuchsbaum</div> <div style="text-align: center; font-size: 14px;">@brad_buchsbaum</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from bbuchsbaum. | Data | bbuchsbaum | | --- | --- | | Tweets downloaded | 1346 | | Retweets | 125 | | Short tweets | 53 | | Tweets kept | 1168 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/uivlvhob/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @brad_buchsbaum's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/34xkida2) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/34xkida2/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/brad_buchsbaum') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
LeoCordoba/mt5-small-cc-news-es-titles
LeoCordoba
2021-09-08T17:03:30Z
14
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "summarization", "spanish", "es", "dataset:LeoCordoba/CC-NEWS-ES-titles", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:04Z
--- language: es tags: - summarization - mt5 - spanish license: apache-2.0 datasets: - LeoCordoba/CC-NEWS-ES-titles model-index: - name: mt5-small-ccnews-titles-es results: - task: name: Abstractive Text Summarization type: abstractive-text-summarization dataset: name: "CCNEWS-ES-titles" type: LeoCordoba/CC-NEWS-ES-titles metrics: - name: Validation ROGUE-1 type: rogue-1 value: 22.6623 - name: Validation ROGUE-2 type: rogue-2 value: 7.7894 - name: Validation ROGUE-L type: rogue-l value: 19.8015 - name: Validation ROGUE-Lsum type: rogue-lsum value: 19.8092 - name: Test ROGUE-1 type: rogue-1 value: 22.9263 - name: Test ROGUE-2 type: rogue-2 value: 7.9146 - name: Test ROGUE-L type: rogue-l value: 20.0272 - name: Test ROGUE-Lsum type: rogue-lsum value: 20.0387 widget: - text: "La chocotorta, el tradicional y práctico antojo dulce de los argentinos, fue elegida como el mejor postre del mundo por críticos de restaurants internacionales, a casi 40 años de su creación. El ránking Taste Atlas ubicó primero en su lista al postre insignia local de galletitas, queso crema y dulce de leche, por delante del helado de pistacho italiano y la tarta alemana de manzana. “Este postre argentino sin hornear fue influenciado por la cocina italiana y se inspiró en el famoso tiramisú italiano. Está elaborado con tres ingredientes básicos argentinos: galletas de chocolate, dulce de leche y queso crema”, explica la página web que exhorta a los turistas de todo el mundo a que prueben la chocotorta. En la votación, superó también a los waffles belgas y el zserbó húngaro. A nivel local le sigue el alfajor, con 4,2 puntos contra los 4,7 de la torta. En el texto que acompaña al listón dorado de “postre número uno“, los expertos enseñan además cómo se hacen las chocotortas, paso por paso. “Las galletas se ablandan en leche y se cubren con una combinación de queso crema y dulce de leche. Las formas de la chocotorta pueden variar, mientras que las galletas se pueden remojar con leche con chocolate, café o incluso licor de café”, detallan. Por último, adjudican su creación a una “campaña de márketing” diseñada para promover las galletitas icónicas que le dan su nombre. La chocotorta, infaltable en los cumpleaños argentinos, fue creada en 1982 por una creativa de las agencias más importantes del país, Marité Mabragaña." --- ## Hyperparameters { "max_target_length": 64, "model_name_or_path": "google/mt5-small", "num_train_epochs": 3, "seed": 7, "summary_column": "output_text", "text_column": "text", "encoder_max_length" : 512, "decoder_max_length" :36, "batch_size" : 128 } ## Usage ``` article = """ La chocotorta, el tradicional y práctico antojo dulce de los argentinos, fue elegida como el mejor postre del mundo por críticos de restaurants internacionales, a casi 40 años de su creación. El ránking Taste Atlas ubicó primero en su lista al postre insignia local de galletitas, queso crema y dulce de leche, por delante del helado de pistacho italiano y la tarta alemana de manzana. “Este postre argentino sin hornear fue influenciado por la cocina italiana y se inspiró en el famoso tiramisú italiano. Está elaborado con tres ingredientes básicos argentinos: galletas de chocolate, dulce de leche y queso crema”, explica la página web que exhorta a los turistas de todo el mundo a que prueben la chocotorta. En la votación, superó también a los waffles belgas y el zserbó húngaro. A nivel local le sigue el alfajor, con 4,2 puntos contra los 4,7 de la torta. 
En el texto que acompaña al listón dorado de “postre número uno", los expertos enseñan además cómo se hacen las chocotortas, paso por paso. “Las galletas se ablandan en leche y se cubren con una combinación de queso crema y dulce de leche. Las formas de la chocotorta pueden variar, mientras que las galletas se pueden remojar con leche con chocolate, café o incluso licor de café”, detallan. Por último, adjudican su creación a una “campaña de márketing” diseñada para promover las galletitas icónicas que le dan su nombre. La chocotorta, infaltable en los cumpleaños argentinos, fue creada en 1982 por una creativa de las agencias más importantes del país, Marité Mabragaña. """ from transformers import pipeline summarizer = pipeline("summarization", model="LeoCordoba/mt5-small-ccnews-titles-es") summarizer(article, min_length=5, max_length=64) ``` ## Results | metric | score | | --- | ----- | | eval_loss | 2.879085063934326 | | eval_rouge1 | 22.6623 | | eval_rouge2 | 7.7894 | | eval_rougeL | 19.8015, | | eval_rougeLsum | 19.8092 | | eval_gen_len | 17.1839 | | test_loss | 2.878429412841797 | | test_rouge1 | 22.9263 | | test_rouge2 | 7.9146 | | test_rougeL | 20.0272 | | test_rougeLsum | 20.0387 | | test_gen_len | 17.1696 |
sv/gpt2-nft-poetry
sv
2021-09-08T16:15:47Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - null model-index: - name: gpt2-nft-poetry results: - task: name: Causal Language Modeling type: text-generation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-nft-poetry This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.0243 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 282 | 4.3092 | | 4.5403 | 2.0 | 564 | 4.1283 | | 4.5403 | 3.0 | 846 | 4.0605 | | 4.039 | 4.0 | 1128 | 4.0321 | | 4.039 | 5.0 | 1410 | 4.0243 | ### Framework versions - Transformers 4.10.0 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
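A minimal generation sketch; the prompt and sampling settings are illustrative choices, not values used during training:

```python
from transformers import pipeline

# Sketch: sample continuations from the fine-tuned GPT-2 checkpoint.
generator = pipeline("text-generation", model="sv/gpt2-nft-poetry")

outputs = generator(
    "Upon the chain of blocks I dreamt",  # illustrative prompt
    max_length=60,
    do_sample=True,
    top_p=0.95,
    num_return_sequences=3,
)
for out in outputs:
    print(out["generated_text"], "\n---")
```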
Jeffrey/DialoGPT-small-Jeffrey
Jeffrey
2021-09-08T15:53:25Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
--- tags: - conversational ---
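A minimal chat sketch, assuming this checkpoint follows the usual DialoGPT turn format (the card itself gives no usage details):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the checkpoint uses the standard DialoGPT chat format.
tokenizer = AutoTokenizer.from_pretrained("Jeffrey/DialoGPT-small-Jeffrey")
model = AutoModelForCausalLM.from_pretrained("Jeffrey/DialoGPT-small-Jeffrey")

chat_history_ids = None
for step in range(3):
    user_input = input(">> User: ")
    new_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")
    # Append the new user turn to the running conversation.
    bot_input_ids = new_ids if chat_history_ids is None else torch.cat([chat_history_ids, new_ids], dim=-1)
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    print("Bot:", tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))
```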
charlecheng/distilbert-base-uncased-finetuned-ner
charlecheng
2021-09-08T03:51:22Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: distilbert-base-uncased-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 metrics: - name: Precision type: precision value: 0.9276454293628809 - name: Recall type: recall value: 0.9365700861393892 - name: F1 type: f1 value: 0.9320863950122468 - name: Accuracy type: accuracy value: 0.9840500738716699 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0607 - Precision: 0.9276 - Recall: 0.9366 - F1: 0.9321 - Accuracy: 0.9841 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.246 | 1.0 | 878 | 0.0696 | 0.9152 | 0.9215 | 0.9183 | 0.9812 | | 0.0518 | 2.0 | 1756 | 0.0606 | 0.9196 | 0.9342 | 0.9269 | 0.9831 | | 0.0309 | 3.0 | 2634 | 0.0607 | 0.9276 | 0.9366 | 0.9321 | 0.9841 | ### Framework versions - Transformers 4.10.0 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
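A minimal usage sketch with the token-classification pipeline; the example sentence is illustrative:

```python
from transformers import pipeline

# Sketch: CoNLL-2003-style NER with the fine-tuned DistilBERT checkpoint.
ner = pipeline(
    "token-classification",
    model="charlecheng/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)

print(ner("Hugging Face is based in New York City and Paris."))
```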
fihtrotuld/123
fihtrotuld
2021-09-08T01:35:59Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
```python
import requests

API_URL = "https://api-inference.huggingface.co/models/huggingface/prunebert-base-uncased-6-finepruned-w-distil-squad"
headers = {"Authorization": "Bearer api_UXqrzQBiZKXaWxstVwEKcYvHQpGSGiQGbr"}

def query(payload):
    # Send the question/context payload to the hosted Inference API.
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

output = query({
    "inputs": {
        "question": "What's my name?",
        "context": "My name is Clara and I live in Berkeley.",
    },
})
print(output)
```
bigscience/tr1-13B-tensorboard
bigscience
2021-09-07T21:39:49Z
0
8
null
[ "tensorboard", "region:us" ]
null
2022-03-02T23:29:05Z
These are tensorboard logs for https://github.com/bigscience-workshop/bigscience/tree/master/train/tr1-13B-base
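To browse the logs locally, one option (a sketch, assuming `huggingface_hub` and TensorBoard are installed) is to download the repository and point TensorBoard at it:

```python
from huggingface_hub import snapshot_download

# Download the full log repository from the Hub; the returned path is a local cache directory.
local_dir = snapshot_download(repo_id="bigscience/tr1-13B-tensorboard")
print("Logs downloaded to:", local_dir)

# Then, from a shell:
#   tensorboard --logdir <local_dir>
```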
nateraw/timm-adv_inception_v3
nateraw
2021-09-07T19:26:53Z
0
0
timm
[ "timm", "image-classification", "region:us" ]
image-classification
2022-03-02T23:29:05Z
--- tags: - image-classification - timm library_tag: timm --- # Model card for `timm-resnet50-beans` **TODO** **For now, try dragging and dropping this image into the inference widget. It should classify as angular_leaf_spot.** ![leaf_example](angular_leaf_spot_train.304.jpg)
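A tentative loading sketch, assuming the checkpoint resolves through timm's Hugging Face Hub integration (the `hf_hub:` prefix); the input size and random test tensor are illustrative:

```python
import timm
import torch

# Assumption: the weights load through timm's Hub integration via the hf_hub: prefix.
model = timm.create_model("hf_hub:nateraw/timm-adv_inception_v3", pretrained=True)
model.eval()

# Illustrative forward pass at the usual Inception-v3 input size.
with torch.no_grad():
    logits = model(torch.randn(1, 3, 299, 299))
print(logits.shape)
```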
kamalkraj/bioelectra-base-discriminator-pubmed
kamalkraj
2021-09-07T13:52:16Z
810
6
transformers
[ "transformers", "pytorch", "electra", "pretraining", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
## BioELECTRA:Pretrained Biomedical text Encoder using Discriminators Recent advancements in pretraining strategies in NLP have shown a significant improvement in the performance of models on various text mining tasks. In this paper, we introduce BioELECTRA, a biomedical domain-specific language encoder model that adapts ELECTRA (Clark et al., 2020) for the Biomedical domain. BioELECTRA outperforms the previous models and achieves state of the art (SOTA) on all the 13 datasets in BLURB benchmark and on all the 4 Clinical datasets from BLUE Benchmark across 7 NLP tasks. BioELECTRA pretrained on PubMed and PMC full text articles performs very well on Clinical datasets as well. BioELECTRA achieves new SOTA 86.34%(1.39% accuracy improvement) on MedNLI and 64% (2.98% accuracy improvement) on PubMedQA dataset. For a detailed description and experimental results, please refer to our paper [BioELECTRA:Pretrained Biomedical text Encoder using Discriminators](https://www.aclweb.org/anthology/2021.bionlp-1.16/). Cite our paper using below citation ``` @inproceedings{kanakarajan-etal-2021-bioelectra, title = "{B}io{ELECTRA}:Pretrained Biomedical text Encoder using Discriminators", author = "Kanakarajan, Kamal raj and Kundumani, Bhuvana and Sankarasubbu, Malaikannan", booktitle = "Proceedings of the 20th Workshop on Biomedical Language Processing", month = jun, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.bionlp-1.16", doi = "10.18653/v1/2021.bionlp-1.16", pages = "143--154", abstract = "Recent advancements in pretraining strategies in NLP have shown a significant improvement in the performance of models on various text mining tasks. We apply {`}replaced token detection{'} pretraining technique proposed by ELECTRA and pretrain a biomedical language model from scratch using biomedical text and vocabulary. We introduce BioELECTRA, a biomedical domain-specific language encoder model that adapts ELECTRA for the Biomedical domain. WE evaluate our model on the BLURB and BLUE biomedical NLP benchmarks. BioELECTRA outperforms the previous models and achieves state of the art (SOTA) on all the 13 datasets in BLURB benchmark and on all the 4 Clinical datasets from BLUE Benchmark across 7 different NLP tasks. BioELECTRA pretrained on PubMed and PMC full text articles performs very well on Clinical datasets as well. BioELECTRA achieves new SOTA 86.34{\%}(1.39{\%} accuracy improvement) on MedNLI and 64{\%} (2.98{\%} accuracy improvement) on PubMedQA dataset.", } ``` ## How to use the discriminator in `transformers` ```python from transformers import ElectraForPreTraining, ElectraTokenizerFast import torch discriminator = ElectraForPreTraining.from_pretrained("kamalkraj/bioelectra-base-discriminator-pubmed") tokenizer = ElectraTokenizerFast.from_pretrained("kamalkraj/bioelectra-base-discriminator-pubmed") sentence = "The quick brown fox jumps over the lazy dog" fake_sentence = "The quick brown fox fake over the lazy dog" fake_tokens = tokenizer.tokenize(fake_sentence) fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt") discriminator_outputs = discriminator(fake_inputs) predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2) [print("%7s" % token, end="") for token in fake_tokens] [print("%7s" % int(prediction), end="") for prediction in predictions[0].tolist()] ```
RJ3vans/CLNspanTagger
RJ3vans
2021-09-07T13:24:46Z
4
0
transformers
[ "transformers", "pytorch", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:04Z
This model identifies compound nouns in input sentences. Try the test sentence: I love apples [and] potatoes. Accuracy is best when you place square brackets around the coordinating conjunction. The model was derived using code adapted from an original program written by Dr. Le An Ha at the University of Wolverhampton.
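A minimal tagging sketch with the token-classification pipeline, using the test sentence suggested above; how the resulting span labels should be post-processed is not covered here:

```python
from transformers import pipeline

# Sketch: run the span tagger on the card's test sentence.
tagger = pipeline("token-classification", model="RJ3vans/CLNspanTagger")

# Square brackets around the coordinating conjunction, as recommended above.
for token in tagger("I love apples [and] potatoes."):
    print(token["word"], token["entity"])
```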
yannobla/Sunshine2
yannobla
2021-09-07T11:41:51Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("Hello I'm a [MASK] model.")
[{'sequence': "[CLS] hello i'm a fashion model. [SEP]", 'score': 0.1073106899857521, 'token': 4827, 'token_str': 'fashion'},
 {'sequence': "[CLS] hello i'm a role model. [SEP]", 'score': 0.08774490654468536, 'token': 2535, 'token_str': 'role'},
 {'sequence': "[CLS] hello i'm a new model. [SEP]", 'score': 0.05338378623127937, 'token': 2047, 'token_str': 'new'},
 {'sequence': "[CLS] hello i'm a super model. [SEP]", 'score': 0.04667217284440994, 'token': 3565, 'token_str': 'super'},
 {'sequence': "[CLS] hello i'm a fine model. [SEP]", 'score': 0.027095865458250046, 'token': 2986, 'token_str': 'fine'}]
```

```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
lincoln/mbart-mlsum-automatic-summarization
lincoln
2021-09-07T08:21:55Z
85
7
transformers
[ "transformers", "pytorch", "tf", "mbart", "text2text-generation", "summarization", "bart", "fr", "dataset:MLSUM", "arxiv:2004.14900", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:05Z
--- language: - fr license: mit datasets: - MLSUM pipeline_tag: "summarization" widget: - text: « La veille de l’ouverture, je vais faire venir un coach pour les salariés qui reprendront le travail. Cela va me coûter 300 euros, mais après des mois d’oisiveté obligatoire, la reprise n’est pas simple. Certains sont au chômage partiel depuis mars 2020 », raconte Alain Fontaine, propriétaire du restaurant Le Mesturet, dans le quartier de la Bourse, à Paris. Cette date d’ouverture, désormais, il la connaît. Emmanuel Macron a, en effet, donné le feu vert pour un premier accueil des clients en terrasse, mercredi 19 mai. M. Fontaine imagine même faire venir un orchestre ce jour-là pour fêter l’événement. Il lui reste toutefois à construire sa terrasse. Il pensait que les ouvriers passeraient samedi 1er mai pour l’installer, mais, finalement, le rendez-vous a été décalé. Pour l’instant, le tas de bois est entreposé dans la salle de restaurant qui n’a plus accueilli de convives depuis le 29 octobre 2020, quand le couperet de la fermeture administrative est tombé.M. Fontaine, président de l’Association française des maîtres restaurateurs, ne manquera pas de concurrents prêts à profiter de ce premier temps de réouverture des bars et restaurants. Même si le couvre-feu limite le service à 21 heures. D’autant que la Mairie de Paris vient d’annoncer le renouvellement des terrasses éphémères installées en 2020 et leur gratuité jusqu’à la fin de l’été. tags: - summarization - mbart - bart --- # Résumé automatique d'article de presses Ce modèles est basé sur le modèle [`facebook/mbart-large-50`](https://huggingface.co/facebook/mbart-large-50) et été fine-tuné en utilisant des articles de presse issus de la base de données MLSUM. L'hypothèse à été faite que les chapeaux des articles faisaient de bon résumés de référence. ## Entrainement Nous avons testé deux architecture de modèles (T5 et BART) avec des textes en entrée de 512 ou 1024 tokens. Finallement c'est le modèle BART avec 512 tokens qui à été retenu. Il a été entrainé sur 2 epochs (~700K articles) sur une Tesla V100 (32 heures d'entrainement). ## Résultats ![Score de novelty](assets/novelty.png) Nous avons comparé notre modèle (`mbart-large-512-full` sur le graphique) à deux références: * MBERT qui correspond aux performances du modèle entrainé par l'équipe à l'origine de la base d'articles MLSUM * Barthez qui est un autre modèle basé sur des articles de presses issus de la base de données OrangeSum On voit que le score de novelty (cf papier MLSUM) de notre modèle n'est pas encore comparable à ces deux références et encore moins à une production humaine néanmoins les résumés générés sont dans l'ensemble de bonne qualité. ## Utilisation ```python from transformers import AutoModelForSeq2SeqLM, AutoTokenizer from transformers import SummarizationPipeline model_name = 'lincoln/mbart-mlsum-automatic-summarization' loaded_tokenizer = AutoTokenizer.from_pretrained(model_name) loaded_model = AutoModelForSeq2SeqLM.from_pretrained(model_name) nlp = SummarizationPipeline(model=loaded_model, tokenizer=loaded_tokenizer) nlp(""" « La veille de l’ouverture, je vais faire venir un coach pour les salariés qui reprendront le travail. Cela va me coûter 300 euros, mais après des mois d’oisiveté obligatoire, la reprise n’est pas simple. Certains sont au chômage partiel depuis mars 2020 », raconte Alain Fontaine, propriétaire du restaurant Le Mesturet, dans le quartier de la Bourse, à Paris. Cette date d’ouverture, désormais, il la connaît. 
Emmanuel Macron a, en effet, donné le feu vert pour un premier accueil des clients en terrasse, mercredi 19 mai. M. Fontaine imagine même faire venir un orchestre ce jour-là pour fêter l’événement. Il lui reste toutefois à construire sa terrasse. Il pensait que les ouvriers passeraient samedi 1er mai pour l’installer, mais, finalement, le rendez-vous a été décalé. Pour l’instant, le tas de bois est entreposé dans la salle de restaurant qui n’a plus accueilli de convives depuis le 29 octobre 2020, quand le couperet de la fermeture administrative est tombé.M. Fontaine, président de l’Association française des maîtres restaurateurs, ne manquera pas de concurrents prêts à profiter de ce premier temps de réouverture des bars et restaurants. Même si le couvre-feu limite le service à 21 heures. D’autant que la Mairie de Paris vient d’annoncer le renouvellement des terrasses éphémères installées en 2020 et leur gratuité jusqu’à la fin de l’été. """) ``` ## Citation ```bibtex @article{scialom2020mlsum, title={MLSUM: The Multilingual Summarization Corpus}, author={Thomas Scialom and Paul-Alexis Dray and Sylvain Lamprier and Benjamin Piwowarski and Jacopo Staiano}, year={2020}, eprint={2004.14900}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
pritoms/gpt-neo-125M-finetuned-pgt
pritoms
2021-09-07T08:20:52Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt_neo", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - null model-index: - name: gpt-neo-125M-finetuned-pgt results: - task: name: Causal Language Modeling type: text-generation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt-neo-125M-finetuned-pgt This model is a fine-tuned version of [pritoms/gpt-neo-125M-finetuned-pgt](https://huggingface.co/pritoms/gpt-neo-125M-finetuned-pgt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.6026 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 26 | 1.5947 | | No log | 2.0 | 52 | 1.5963 | | No log | 3.0 | 78 | 1.6026 | ### Framework versions - Transformers 4.10.0 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
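## How to use

A minimal usage sketch with the standard `transformers` text-generation pipeline; the prompt and generation settings are illustrative:

```python
from transformers import pipeline

# Load the fine-tuned GPT-Neo checkpoint for causal text generation
generator = pipeline('text-generation', model='pritoms/gpt-neo-125M-finetuned-pgt')

# Illustrative prompt and generation settings
print(generator("The experiment shows that", max_length=50, num_return_sequences=1)[0]['generated_text'])
```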
MaryaAI/opus-mt-ar-en-finetuned-ar-to-en
MaryaAI
2021-09-07T07:26:24Z
251
0
transformers
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "generated_from_trainer", "dataset:opus_wikipedia", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
--- tags: - generated_from_trainer datasets: - opus_wikipedia model-index: - name: opus-mt-ar-en-finetuned-ar-to-en results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: opus_wikipedia type: opus_wikipedia args: ar-en --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opus-mt-ar-en-finetuned-ar-to-en This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ar-en](https://huggingface.co/Helsinki-NLP/opus-mt-ar-en) on the opus_wikipedia dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.10.0 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
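## How to use

A minimal usage sketch with the `transformers` translation pipeline; the Arabic example sentence is illustrative:

```python
from transformers import pipeline

# Marian-based Arabic-to-English translation
translator = pipeline('translation', model='MaryaAI/opus-mt-ar-en-finetuned-ar-to-en')

print(translator("مرحبا بالعالم")[0]['translation_text'])
```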
sv/gpt2-finetuned-nft-shakes-seuss-2
sv
2021-09-07T06:05:36Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - null model-index: - name: gpt2-finetuned-nft-shakes-seuss-2 results: - task: name: Causal Language Modeling type: text-generation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-finetuned-nft-shakes-seuss-2 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.9547 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 4.3454 | 1.0 | 1490 | 4.1027 | | 4.0534 | 2.0 | 2980 | 3.9857 | | 3.9384 | 3.0 | 4470 | 3.9547 | ### Framework versions - Transformers 4.10.0 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
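## How to use

A minimal usage sketch with the `transformers` text-generation pipeline; the prompt and sampling settings are illustrative:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='sv/gpt2-finetuned-nft-shakes-seuss-2')

# Sample a short continuation from an illustrative prompt
print(generator("Shall I compare thee", max_length=60, do_sample=True, top_p=0.95)[0]['generated_text'])
```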
espnet/xuankai_chang_librispeech_asr_train_asr_conformer7_hubert_960hr_large_raw_en_bpe5000_sp_26epoch
espnet
2021-09-07T03:05:41Z
2
0
espnet
[ "espnet", "audio", "automatic-speech-recognition", "en", "dataset:librispeech", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - espnet - audio - automatic-speech-recognition language: en datasets: - librispeech license: cc-by-4.0 inference: false --- # ESPnet2 ASR pretrained model ## `Xuankai Chang/xuankai_chang_librispeech_asr_train_asr_conformer7_hubert_960hr_large_raw_en_bpe5000_sp_26epoch, fs=16k, lang=en` This model was trained by Takashi Maekaku using librispeech recipe in [espnet](https://github.com/espnet/espnet/). ### Python API ```text See https://github.com/espnet/espnet_model_zoo ``` ### Evaluate in the recipe ```python # coming soon ``` ### Results ```bash # RESULTS ## Environments - date: `Fri Aug 6 11:44:39 JST 2021` - python version: `3.7.9 (default, Apr 23 2021, 13:48:31) [GCC 5.5.0 20171010]` - espnet version: `espnet 0.9.9` - pytorch version: `pytorch 1.7.0` - Git hash: `0f7558a716ab830d0c29da8785840124f358d47b` - Commit date: `Tue Jun 8 15:33:49 2021 -0400` ## asr_train_asr_conformer7_hubert_960hr_large_raw_en_bpe5000_sp ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_clean|2703|54402|98.5|1.3|0.2|0.2|1.7|22.1| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_other|2864|50948|96.8|2.8|0.4|0.3|3.4|33.7| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_clean|2620|52576|98.4|1.4|0.2|0.2|1.8|22.1| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_other|2939|52343|96.8|2.8|0.4|0.4|3.6|36.0| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_clean|2703|288456|99.6|0.2|0.2|0.2|0.6|22.1| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_other|2864|265951|98.8|0.6|0.6|0.3|1.5|33.7| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_clean|2620|281530|99.6|0.2|0.2|0.2|0.6|22.1| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_other|2939|272758|98.9|0.5|0.5|0.4|1.4|36.0| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_clean|2703|68010|98.2|1.3|0.5|0.4|2.2|22.1| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_other|2864|63110|96.0|2.8|1.2|0.6|4.6|33.7| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_clean|2620|65818|98.1|1.3|0.6|0.4|2.3|22.1| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_other|2939|65101|96.0|2.7|1.3|0.6|4.6|36.0| ``` ### Training config See full config in [`config.yaml`](./exp/asr_train_asr_conformer7_hubert_960hr_large_raw_en_bpe5000_sp/config.yaml) ```yaml config: conf/tuning/train_asr_conformer7_hubert_960hr_large.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_conformer7_hubert_960hr_large_raw_en_bpe5000_sp ngpu: 3 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: 4 dist_rank: 3 local_rank: 3 dist_master_addr: localhost dist_master_port: 33643 dist_launcher: null multiprocessing_distributed: true cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true ```
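### Inference sketch

A sketch of the Python API referenced above, assuming the checkpoint is available through `espnet_model_zoo` under the name given in the heading and that `speech.wav` is a 16 kHz mono recording:

```python
import soundfile
from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.asr_inference import Speech2Text

# Download and unpack the pretrained model, then build the inference object
d = ModelDownloader()
speech2text = Speech2Text(
    **d.download_and_unpack("Xuankai Chang/xuankai_chang_librispeech_asr_train_asr_conformer7_hubert_960hr_large_raw_en_bpe5000_sp_26epoch")
)

# Decode a single utterance and print the best hypothesis
speech, rate = soundfile.read("speech.wav")
nbests = speech2text(speech)
text, *_ = nbests[0]
print(text)
```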
huggingtweets/dynatronne
huggingtweets
2021-09-07T01:15:25Z
6
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/dynatronne/1630977321484/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1396079009604280325/W6petcWe_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">dt keith katze</div> <div style="text-align: center; font-size: 14px;">@dynatronne</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from dt keith katze. | Data | dt keith katze | | --- | --- | | Tweets downloaded | 3009 | | Retweets | 2428 | | Short tweets | 142 | | Tweets kept | 439 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/26uf3rn6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dynatronne's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3qxjo6s7) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3qxjo6s7/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/dynatronne') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/prageru
huggingtweets
2021-09-07T00:19:57Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/prageru/1630973993106/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1412803355164872704/XvTlBoPh_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">PragerU</div> <div style="text-align: center; font-size: 14px;">@prageru</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from PragerU. | Data | PragerU | | --- | --- | | Tweets downloaded | 3248 | | Retweets | 336 | | Short tweets | 492 | | Tweets kept | 2420 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3er68epp/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @prageru's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1lv63sll) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1lv63sll/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/prageru') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/itskillerdog
huggingtweets
2021-09-06T23:46:38Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/itskillerdog/1630971994166/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1355537154538000391/0mOGv6Mw_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">june party corner</div> <div style="text-align: center; font-size: 14px;">@itskillerdog</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from june party corner. | Data | june party corner | | --- | --- | | Tweets downloaded | 196 | | Retweets | 20 | | Short tweets | 30 | | Tweets kept | 146 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1u7twx27/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @itskillerdog's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1vg0bbs8) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1vg0bbs8/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/itskillerdog') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/liam_100000
huggingtweets
2021-09-06T23:32:16Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/liam_100000/1630971132171/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1426930394297819137/-zzMnfJo_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">LIAM</div> <div style="text-align: center; font-size: 14px;">@liam_100000</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from LIAM. | Data | LIAM | | --- | --- | | Tweets downloaded | 1960 | | Retweets | 135 | | Short tweets | 434 | | Tweets kept | 1391 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1sila7bw/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @liam_100000's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2bu2qvu3) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2bu2qvu3/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/liam_100000') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/formernumber
huggingtweets
2021-09-06T21:05:59Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/formernumber/1630962355855/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1430593525108903940/vrSks7ph_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">NaN</div> <div style="text-align: center; font-size: 14px;">@formernumber</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from NaN. | Data | NaN | | --- | --- | | Tweets downloaded | 3250 | | Retweets | 146 | | Short tweets | 554 | | Tweets kept | 2550 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1cmch3y4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @formernumber's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2iurxhit) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2iurxhit/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/formernumber') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
julien-c/dummy-for-flat
julien-c
2021-09-06T21:02:55Z
0
1
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
in the editor i only change this line Example of a hf.co repo containing signed commits. hello tabs
yseop/FNP_T5_D2T_complete
yseop
2021-09-06T20:54:21Z
5
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
# T5-base data to text model specialized for Finance NLG __complete version__ ---- ## Usage (HuggingFace Transformers) #### Call the model ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("yseop/FNP_T5_D2T_complete") model = AutoModelForSeq2SeqLM.from_pretrained("yseop/FNP_T5_D2T_complete") text = ["Group profit | valIs | € 115.7 million && € 115.7 million | dTime | in 2019"] ``` #### Choose a generation method ```python input_ids = tokenizer.encode(": {}".format(text), return_tensors="pt") p = 0.82 k = 90 outputs = model.generate(input_ids, do_sample=True, top_p=p, top_k=k, early_stopping=True) print(tokenizer.decode(outputs[0])) ``` ```python input_ids = tokenizer.encode(": {}".format(text), return_tensors="pt") outputs = model.generate(input_ids, max_length=200, num_beams=2, repetition_penalty=2.5, top_k=50, top_p=0.98, length_penalty=1.0, early_stopping=True) print(tokenizer.decode(outputs[0])) ``` **Created by:** [Yseop](https://www.yseop.com/) | Pioneer in Natural Language Generation (NLG) technology. Scaling human expertise through Natural Language Generation.
sv/gpt2-finetuned-nft-shakes
sv
2021-09-06T16:59:11Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - null model-index: - name: gpt2-finetuned-nft-shakes results: - task: name: Causal Language Modeling type: text-generation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-finetuned-nft-shakes This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.7566 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 306 | 3.9679 | | 4.2957 | 2.0 | 612 | 3.7979 | | 4.2957 | 3.0 | 918 | 3.7566 | ### Framework versions - Transformers 4.10.0 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
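## How to use

A minimal usage sketch without the pipeline wrapper, assuming the standard `transformers` causal-LM API; the prompt and sampling settings are illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sv/gpt2-finetuned-nft-shakes")
model = AutoModelForCausalLM.from_pretrained("sv/gpt2-finetuned-nft-shakes")

# Encode an illustrative prompt and sample a continuation
inputs = tokenizer("Once more unto the", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_length=50, do_sample=True, top_k=50)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```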
huggingtweets/cafe_orbitinnit
huggingtweets
2021-09-06T15:52:25Z
3
1
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/cafe_orbitinnit/1630943541910/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1429115399975497731/JZdA725e_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">✨たち Tommy’s an Orbit 🌙 たち✨</div> <div style="text-align: center; font-size: 14px;">@cafe_orbitinnit</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from ✨たち Tommy’s an Orbit 🌙 たち✨. | Data | ✨たち Tommy’s an Orbit 🌙 たち✨ | | --- | --- | | Tweets downloaded | 2242 | | Retweets | 1336 | | Short tweets | 323 | | Tweets kept | 583 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/qhrvba17/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cafe_orbitinnit's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2qnyhuxd) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2qnyhuxd/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/cafe_orbitinnit') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
BSC-LT/gpt2-large-bne
BSC-LT
2021-09-06T14:13:06Z
26
1
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "national library of spain", "spanish", "bne", "es", "dataset:bne", "arxiv:2107.07253", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
---
language:
- es
license: apache-2.0
tags:
- "national library of spain"
- "spanish"
- "bne"
datasets:
- "bne"
metrics:
- "ppl"
---

# GPT2-large trained with data from the National Library of Spain (BNE)

## Model Description

GPT2-large-bne is a transformer-based model for the Spanish language. It is based on the [GPT-2](http://www.persagen.com/files/misc/radford2019language.pdf) model and has been pre-trained on the largest Spanish corpus known to date, a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawls performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.

## Training corpora and preprocessing

The [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) crawls all .es domains once a year. The training corpus consists of 59TB of WARC files from these crawls, carried out from 2009 to 2019.

To obtain a high-quality training corpus, the raw data was preprocessed with a pipeline of operations including, among others, sentence splitting, language detection, filtering of malformed sentences and deduplication of repetitive content. Document boundaries were kept during this process. The result was 2TB of clean Spanish text. A further global deduplication across the corpus was then applied, yielding 570GB of text.

Some statistics of the corpus:

| Corpora | Number of documents | Number of tokens | Size (GB) |
|---------|---------------------|------------------|-----------|
| BNE | 201,080,084 | 135,733,450,668 | 570GB |

## Tokenization and pre-training

The training corpus was tokenized using the byte-level Byte-Pair Encoding (BPE) used in the original [GPT-2](http://www.persagen.com/files/misc/radford2019language.pdf) model, with a vocabulary size of 50,262 tokens. The GPT2-large-bne pre-training is an autoregressive language-model training that follows the approach of GPT-2. Training lasted a total of 10 days on 32 computing nodes, each with 4 NVIDIA V100 GPUs of 16GB VRAM.

## Evaluation and results

For evaluation details, visit our [GitHub repository](https://github.com/PlanTL-SANIDAD/lm-spanish).

## Citing

Check out our paper for all the details: https://arxiv.org/abs/2107.07253

```
@misc{gutierrezfandino2021spanish,
      title={Spanish Language Models},
      author={Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquín Silveira-Ocampo and Casimiro Pio Carrino and Aitor Gonzalez-Agirre and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Marta Villegas},
      year={2021},
      eprint={2107.07253},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
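## How to use

A minimal usage sketch with the `transformers` text-generation pipeline; the Spanish prompt and generation settings are illustrative:

```python
from transformers import pipeline, set_seed

generator = pipeline('text-generation', model='BSC-LT/gpt2-large-bne')
set_seed(42)

# Generate a short Spanish continuation from an illustrative prompt
print(generator("La Biblioteca Nacional de España es", max_length=40, num_return_sequences=1)[0]['generated_text'])
```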
huggingartists/max-korzh
huggingartists
2021-09-06T13:34:44Z
5
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/max-korzh", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/max-korzh tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/a1486b5b6f28eeec202b55e983e464c5.567x567x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Макс Корж (Max Korzh)</div> <a href="https://genius.com/artists/max-korzh"> <div style="text-align: center; font-size: 14px;">@max-korzh</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Макс Корж (Max Korzh). Dataset is available [here](https://huggingface.co/datasets/huggingartists/max-korzh). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/max-korzh") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/2lupo5gy/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Макс Корж (Max Korzh)'s lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1pm64gaa) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1pm64gaa/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/max-korzh') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/max-korzh") model = AutoModelWithLMHead.from_pretrained("huggingartists/max-korzh") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
lewtun/metnet-test-4
lewtun
2021-09-06T11:00:39Z
1
0
transformers
[ "transformers", "pytorch", "satflow", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- license: mit tags: - satflow --- # Model Card for MetNet ## Model description [More information needed] ## Intended uses & limitations [More information needed] ## How to use [More information needed] ## Limitations and bias [More information needed] ## Training data [More information needed] ## Training procedure [More information needed] ## Evaluation results [More information needed]
bayartsogt/mlub-bert-base-uncased-tr5meaning
bayartsogt
2021-09-05T23:54:10Z
0
1
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
|fold|accuracy|
|-|-|
| fold 0 | 0.974197247706422 |
| fold 1 | 0.9627293577981652 |
| fold 2 | 0.9724770642201835 |
| fold 3 | 0.9696100917431193 |
| fold 4 | 0.9684633027522935 |
| OOF Acc | 0.9694954128440367 |
mwesner/reformer-clm
mwesner
2021-09-05T13:44:41Z
5
0
transformers
[ "transformers", "pytorch", "reformer", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
---
model-index:
- name: reformer-clm
---

## reformer-clm

This causal language model was trained from scratch on the CNN/DailyMail dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7783

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 3.8321 | 1.0 | 18412 | 3.8074 |
| 3.4965 | 2.0 | 36824 | 3.4223 |
| 3.1927 | 3.0 | 55236 | 3.0815 |
| 3.046 | 4.0 | 73648 | 2.9270 |
| 2.9781 | 5.0 | 92060 | 2.8515 |
| 2.9398 | 6.0 | 110472 | 2.8082 |
| 2.9293 | 7.0 | 128884 | 2.7904 |
| 2.9212 | 8.0 | 147296 | 2.7817 |
| 2.9169 | 9.0 | 165708 | 2.7787 |
| 2.9197 | 10.0 | 184120 | 2.7783 |

### Framework versions

- Transformers 4.6.1
- Pytorch 1.9.0
- Datasets 1.2.1
- Tokenizers 0.10.3
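## How to use

A minimal usage sketch, assuming the checkpoint and its tokenizer load through the standard `transformers` text-generation pipeline (Reformer's fixed-length constraints are assumed to be handled by the saved config); the prompt is illustrative:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='mwesner/reformer-clm')

print(generator("The government announced", max_length=64)[0]['generated_text'])
```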
bayartsogt/mlub-bert-large-cased-tr5do30ep25s42
bayartsogt
2021-09-05T11:29:06Z
0
1
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
|fold|accuracy|
|-|-|
| fold 0 | 0.9730504587155964 |
| fold 1 | 0.9690366972477065 |
| fold 2 | 0.970756880733945 |
| fold 3 | 0.9684633027522935 |
| fold 4 | 0.9719036697247706 |
| OOF Acc | 0.9706422018348624 |
MaryaAI/opus-mt-en-ro-finetuned-en-to-ro
MaryaAI
2021-09-05T08:42:06Z
5
1
transformers
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "generated_from_trainer", "dataset:wmt16", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
--- tags: - generated_from_trainer datasets: - wmt16 metrics: - bleu model-index: - name: opus-mt-en-ro-finetuned-en-to-ro results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: wmt16 type: wmt16 args: ro-en metrics: - name: Bleu type: bleu value: 28.1599 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opus-mt-en-ro-finetuned-en-to-ro This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the wmt16 dataset. It achieves the following results on the evaluation set: - Loss: 1.2886 - Bleu: 28.1599 - Gen Len: 34.1236 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:| | 0.7437 | 1.0 | 38145 | 1.2886 | 28.1599 | 34.1236 | ### Framework versions - Transformers 4.10.0 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
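## How to use

A minimal usage sketch with the `transformers` translation pipeline; the English input sentence is illustrative:

```python
from transformers import pipeline

# Marian-based English-to-Romanian translation
translator = pipeline('translation', model='MaryaAI/opus-mt-en-ro-finetuned-en-to-ro')

print(translator("The committee approved the new regulation on Tuesday.")[0]['translation_text'])
```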