| Column | Dtype | Range / distinct values |
| --- | --- | --- |
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-06-24 12:28:46 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 493 classes |
| tags | sequence | length 1 to 4.05k |
| pipeline_tag | string | 54 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-06-24 12:27:57 |
| card | string | length 11 to 1.01M |
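The columns above can be inspected programmatically with the `datasets` library. A minimal sketch; the dataset ID below is a placeholder, since the actual repository name is not given in this dump:

```python
from datasets import load_dataset

# NOTE: "username/model-cards-dump" is a hypothetical dataset ID, not the real repository name.
ds = load_dataset("username/model-cards-dump", split="train")

print(ds.features)                           # column names and dtypes (modelId, author, downloads, ...)
print(ds[0]["modelId"], ds[0]["downloads"])  # scalar fields of the first row
print(ds[0]["card"][:200])                   # beginning of the raw model card text
```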
huggingartists/pyrokinesis
huggingartists
2021-09-10T16:27:05Z
4
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/pyrokinesis", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/pyrokinesis tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/e701c222dfb8725065dd99c8a43988da.1000x1000x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">pyrokinesis</div> <a href="https://genius.com/artists/pyrokinesis"> <div style="text-align: center; font-size: 14px;">@pyrokinesis</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from pyrokinesis. The dataset is available [here](https://huggingface.co/datasets/huggingartists/pyrokinesis) and can be loaded with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/pyrokinesis") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1s8696f3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on pyrokinesis's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/22hm2utc) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/22hm2utc/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/pyrokinesis') generator("I am", num_return_sequences=5) ``` Or with the Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/pyrokinesis") model = AutoModelWithLMHead.from_pretrained("huggingartists/pyrokinesis") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the artist's lyrics further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingartists/lil-peep
huggingartists
2021-09-10T14:54:32Z
4
1
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/lil-peep", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/lil-peep tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/919c7ba130d3861740cbe7fbd7f83c59.1000x1000x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Lil Peep</div> <a href="https://genius.com/artists/lil-peep"> <div style="text-align: center; font-size: 14px;">@lil-peep</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Lil Peep. The dataset is available [here](https://huggingface.co/datasets/huggingartists/lil-peep) and can be loaded with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/lil-peep") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/39q6kspr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Lil Peep's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/g0nxk974) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/g0nxk974/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/lil-peep') generator("I am", num_return_sequences=5) ``` Or with the Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/lil-peep") model = AutoModelWithLMHead.from_pretrained("huggingartists/lil-peep") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the artist's lyrics further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingartists/burzum
huggingartists
2021-09-10T13:30:58Z
5
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/burzum", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/burzum tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/62edc981d303447265d23a3862abce43.589x589x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Burzum</div> <a href="https://genius.com/artists/burzum"> <div style="text-align: center; font-size: 14px;">@burzum</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Burzum. The dataset is available [here](https://huggingface.co/datasets/huggingartists/burzum) and can be loaded with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/burzum") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/j34qgww2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Burzum's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/3579mrib) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/3579mrib/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/burzum') generator("I am", num_return_sequences=5) ``` Or with the Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/burzum") model = AutoModelWithLMHead.from_pretrained("huggingartists/burzum") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the artist's lyrics further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingartists/scriptonite
huggingartists
2021-09-10T13:10:06Z
5
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/scriptonite", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/scriptonite tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/411d50392aef867fe0e9dd55a074ecfb.1000x1000x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Скриптонит (Scriptonite)</div> <a href="https://genius.com/artists/scriptonite"> <div style="text-align: center; font-size: 14px;">@scriptonite</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Скриптонит (Scriptonite). The dataset is available [here](https://huggingface.co/datasets/huggingartists/scriptonite) and can be loaded with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/scriptonite") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/13pxeww0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Скриптонит (Scriptonite)'s lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1itfp830) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1itfp830/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/scriptonite') generator("I am", num_return_sequences=5) ``` Or with the Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/scriptonite") model = AutoModelWithLMHead.from_pretrained("huggingartists/scriptonite") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingartists/25-17
huggingartists
2021-09-10T12:55:59Z
5
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/25-17", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/25-17 tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/4fedc5dd2830a874a5274bf1cac62002.1000x1000x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">25/17</div> <a href="https://genius.com/artists/25-17"> <div style="text-align: center; font-size: 14px;">@25-17</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from 25/17. The dataset is available [here](https://huggingface.co/datasets/huggingartists/25-17) and can be loaded with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/25-17") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1iuytbjp/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on 25/17's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/knv4l4gw) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/knv4l4gw/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/25-17') generator("I am", num_return_sequences=5) ``` Or with the Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/25-17") model = AutoModelWithLMHead.from_pretrained("huggingartists/25-17") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the artist's lyrics further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingtweets/freakytheory-insprepositive-masterythink
huggingtweets
2021-09-10T12:25:07Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/freakytheory-insprepositive-masterythink/1631276702724/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1155938695662505984/H3RmD4Fq_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/861903051669610496/dvuuio0A_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1362638938549018626/O2jBlckS_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Inspiring Quotes - Be Positive & Motivation & Motivation & Success</div> <div style="text-align: center; font-size: 14px;">@freakytheory-insprepositive-masterythink</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Inspiring Quotes - Be Positive & Motivation & Motivation & Success. | Data | Inspiring Quotes - Be Positive | Motivation | Motivation & Success | | --- | --- | --- | --- | | Tweets downloaded | 3250 | 3233 | 706 | | Retweets | 789 | 13 | 4 | | Short tweets | 2 | 10 | 14 | | Tweets kept | 2459 | 3210 | 688 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3aupxbxm/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @freakytheory-insprepositive-masterythink's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/p03go3pp) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/p03go3pp/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/freakytheory-insprepositive-masterythink') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
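As with the huggingartists cards above, this checkpoint is a plain GPT-2 causal language model, so it can presumably also be loaded directly with the Transformers auto classes rather than through the pipeline. A minimal sketch mirroring the pattern used in the other cards; the generation settings below are illustrative and not taken from the card:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM  # modern equivalent of AutoModelWithLMHead

model_id = "huggingtweets/freakytheory-insprepositive-masterythink"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("My dream is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```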
Riser/YOLOP
Riser
2021-09-10T09:08:34Z
0
9
null
[ "object-detection", "arxiv:2108.11250", "arxiv:1612.07695", "arxiv:1606.02147", "region:us" ]
object-detection
2022-03-02T23:29:04Z
--- tags: - object-detection --- <div align="left"> ## You Only Look Once for Panoptic Driving Perception > [**You Only Look Once for Panoptic Driving Perception**](https://arxiv.org/abs/2108.11250) > > by Dong Wu, Manwen Liao, Weitian Zhang, [Xinggang Wang](https://xinggangw.info/) [*School of EIC, HUST*](http://eic.hust.edu.cn/English/Home.htm) > > *arXiv technical report ([arXiv 2108.11250](https://arxiv.org/abs/2108.11250))* --- ### The Illustration of YOLOP ![yolop](pictures/yolop.png) ### Contributions * We put forward an efficient multi-task network that can jointly handle three crucial tasks in autonomous driving: object detection, drivable area segmentation and lane detection. This saves computational costs and reduces inference time while improving the performance of each task. Our work is the first to reach real-time performance on embedded devices while maintaining state-of-the-art accuracy on the `BDD100K` dataset. * We design ablation experiments to verify the effectiveness of our multi-task scheme. They show that the three tasks can be learned jointly without tedious alternating optimization. ### Results #### Traffic Object Detection Result | Model | Recall(%) | mAP50(%) | Speed(fps) | | -------------- | --------- | -------- | ---------- | | `Multinet` | 81.3 | 60.2 | 8.6 | | `DLT-Net` | 89.4 | 68.4 | 9.3 | | `Faster R-CNN` | 77.2 | 55.6 | 5.3 | | `YOLOv5s` | 86.8 | 77.2 | 82 | | `YOLOP(ours)` | 89.2 | 76.5 | 41 | #### Drivable Area Segmentation Result | Model | mIOU(%) | Speed(fps) | | ------------- | ------- | ---------- | | `Multinet` | 71.6 | 8.6 | | `DLT-Net` | 71.3 | 9.3 | | `PSPNet` | 89.6 | 11.1 | | `YOLOP(ours)` | 91.5 | 41 | #### Lane Detection Result | Model | mIOU(%) | IOU(%) | | ------------- | ------- | ------ | | `ENet` | 34.12 | 14.64 | | `SCNN` | 35.79 | 15.84 | | `ENet-SAD` | 36.56 | 16.02 | | `YOLOP(ours)` | 70.50 | 26.20 | #### Ablation Studies 1: End-to-end vs. Step-by-step | Training_method | Recall(%) | AP(%) | mIoU(%) | Accuracy(%) | IoU(%) | | --------------- | --------- | ----- | ------- | ----------- | ------ | | `ES-W` | 87.0 | 75.3 | 90.4 | 66.8 | 26.2 | | `ED-W` | 87.3 | 76.0 | 91.6 | 71.2 | 26.1 | | `ES-D-W` | 87.0 | 75.1 | 91.7 | 68.6 | 27.0 | | `ED-S-W` | 87.5 | 76.1 | 91.6 | 68.0 | 26.8 | | `End-to-end` | 89.2 | 76.5 | 91.5 | 70.5 | 26.2 | #### Ablation Studies 2: Multi-task vs.
Single task | Training_method | Recall(%) | AP(%) | mIoU(%) | Accuracy(%) | IoU(%) | Speed(ms/frame) | | --------------- | --------- | ----- | ------- | ----------- | ------ | --------------- | | `Det(only)` | 88.2 | 76.9 | - | - | - | 15.7 | | `Da-Seg(only)` | - | - | 92.0 | - | - | 14.8 | | `Ll-Seg(only)` | - | - | - | 79.6 | 27.9 | 14.8 | | `Multitask` | 89.2 | 76.5 | 91.5 | 70.5 | 26.2 | 24.4 | **Notes**: - The works we used for reference include `Multinet` ([paper](https://arxiv.org/pdf/1612.07695.pdf?utm_campaign=affiliate-ir-Optimise%20media%28%20South%20East%20Asia%29%20Pte.%20ltd._156_-99_national_R_all_ACQ_cpa_en&utm_content=&utm_source=%20388939), [code](https://github.com/MarvinTeichmann/MultiNet)), `DLT-Net` ([paper](https://ieeexplore.ieee.org/abstract/document/8937825)), `Faster R-CNN` ([paper](https://proceedings.neurips.cc/paper/2015/file/14bfa6bb14875e45bba028a21ed38046-Paper.pdf), [code](https://github.com/ShaoqingRen/faster_rcnn)), `YOLOv5s` ([code](https://github.com/ultralytics/yolov5)), `PSPNet` ([paper](https://openaccess.thecvf.com/content_cvpr_2017/papers/Zhao_Pyramid_Scene_Parsing_CVPR_2017_paper.pdf), [code](https://github.com/hszhao/PSPNet)), `ENet` ([paper](https://arxiv.org/pdf/1606.02147.pdf), [code](https://github.com/osmr/imgclsmob)), `SCNN` ([paper](https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/download/16802/16322), [code](https://github.com/XingangPan/SCNN)) and `SAD-ENet` ([paper](https://openaccess.thecvf.com/content_ICCV_2019/papers/Hou_Learning_Lightweight_Lane_Detection_CNNs_by_Self_Attention_Distillation_ICCV_2019_paper.pdf), [code](https://github.com/cardwing/Codes-for-Lane-Detection)). Thanks for their wonderful work. - In Table 4, E, D, S and W refer to the Encoder, the Detect head, the two Segment heads and the whole network. So the algorithm (first train only the Encoder and Detect head; then freeze them and train the two Segmentation heads; finally train the entire network jointly on all three tasks) is marked as ED-S-W, and likewise for the others. --- ### Visualization #### Traffic Object Detection Result ![detect result](pictures/detect.png) #### Drivable Area Segmentation Result ![](pictures/da.png) #### Lane Detection Result ![](pictures/ll.png) **Notes**: - The visualization of the lane detection result has been post-processed with quadratic fitting.
--- ### Project Structure ``` ├─inference │ ├─images # inference images │ ├─output # inference results ├─lib │ ├─config/default # configuration of training and validation │ ├─core │ │ ├─activations.py # activation functions │ │ ├─evaluate.py # calculation of metrics │ │ ├─function.py # training and validation of the model │ │ ├─general.py # calculation of metrics, NMS, data-format conversion, visualization │ │ ├─loss.py # loss function │ │ ├─postprocess.py # postprocessing (refine da-seg and ll-seg, unrelated to paper) │ ├─dataset │ │ ├─AutoDriveDataset.py # superclass dataset, general functions │ │ ├─bdd.py # subclass dataset, specific functions │ │ ├─hust.py # subclass dataset (campus scene, unrelated to paper) │ │ ├─convect.py │ │ ├─DemoDataset.py # demo dataset (image, video and stream) │ ├─models │ │ ├─YOLOP.py # setup and configuration of the model │ │ ├─light.py # model lightweighting (unrelated to paper, zwt) │ │ ├─commom.py # calculation modules │ ├─utils │ │ ├─augmentations.py # data augmentation │ │ ├─autoanchor.py # auto anchor (k-means) │ │ ├─split_dataset.py # (campus scene, unrelated to paper) │ │ ├─utils.py # logging, device selection, time measurement, optimizer selection, model save & initialize, distributed training │ ├─run │ │ ├─dataset/training time # visualization, logging and model saving ├─tools │ │ ├─demo.py # demo (folder, camera) │ │ ├─test.py │ │ ├─train.py ├─toolkits │ │ ├─deploy # deployment of the model ├─weights # pretrained models ``` --- ### Requirement This codebase has been developed with Python 3.7, PyTorch 1.7+ and torchvision 0.8+: ``` conda install pytorch==1.7.0 torchvision==0.8.0 cudatoolkit=10.2 -c pytorch ``` See `requirements.txt` for additional dependencies and version requirements. ```setup pip install -r requirements.txt ``` ### Data preparation #### Download - Download the images from [images](https://bdd-data.berkeley.edu/). - Download the annotations of detection from [det_annotations](https://drive.google.com/file/d/1Ge-R8NTxG1eqd4zbryFo-1Uonuh0Nxyl/view?usp=sharing). - Download the annotations of drivable area segmentation from [da_seg_annotations](https://drive.google.com/file/d/1xy_DhUZRHR8yrZG3OwTQAHhYTnXn7URv/view?usp=sharing). - Download the annotations of lane line segmentation from [ll_seg_annotations](https://drive.google.com/file/d/1lDNTPIQj_YLNZVkksKM25CvCHuquJ8AP/view?usp=sharing). We recommend the following dataset directory structure: ``` # The id represents the correspondence relation ├─dataset root │ ├─images │ │ ├─train │ │ ├─val │ ├─det_annotations │ │ ├─train │ │ ├─val │ ├─da_seg_annotations │ │ ├─train │ │ ├─val │ ├─ll_seg_annotations │ │ ├─train │ │ ├─val ``` Update your dataset path in `./lib/config/default.py`. ### Training You can set the training configuration in `./lib/config/default.py` (including the loading of a preliminary model, loss, data augmentation, optimizer, warm-up and cosine annealing, auto-anchor, training epochs and batch_size). If you want to try alternating optimization or train the model for a single task, set the corresponding configuration in `./lib/config/default.py` to `True`. (In the following, all configurations are `False`, which means the multiple tasks are trained end to end.)
```python # Alternating optimization _C.TRAIN.SEG_ONLY = False # Only train the two segmentation branches _C.TRAIN.DET_ONLY = False # Only train the detection branch _C.TRAIN.ENC_SEG_ONLY = False # Only train the encoder and the two segmentation branches _C.TRAIN.ENC_DET_ONLY = False # Only train the encoder and the detection branch # Single task _C.TRAIN.DRIVABLE_ONLY = False # Only train the da_segmentation task _C.TRAIN.LANE_ONLY = False # Only train the ll_segmentation task _C.TRAIN.DET_ONLY = False # Only train the detection task ``` Start training: ```shell python tools/train.py ``` ### Evaluation You can set the evaluation configuration in `./lib/config/default.py` (including batch_size and the threshold value for NMS). Start evaluating: ```shell python tools/test.py --weights weights/End-to-end.pth ``` ### Demo Test We provide two testing methods. #### Folder You can store images or a video in `--source`, and the inference results will be saved to `--save-dir`: ```shell python tools/demo.py --source inference/images ``` #### Camera If a camera is connected to your computer, you can set `--source` to the camera number (the default is 0). ```shell python tools/demo.py --source 0 ``` ### Deployment Our model can run inference in real time on a `Jetson TX2`, with a `Zed Camera` capturing images. We use `TensorRT` to speed up inference. Code for deployment and inference is provided in `./toolkits/deploy`. ## Citation If you find our paper and code useful for your research, please consider giving a star and citation: ```BibTeX @misc{2108.11250, Author = {Dong Wu and Manwen Liao and Weitian Zhang and Xinggang Wang}, Title = {YOLOP: You Only Look Once for Panoptic Driving Perception}, Year = {2021}, Eprint = {arXiv:2108.11250}, } ```
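The demo commands above run from a checkout of the repository. For a quick programmatic check, the upstream YOLOP project also advertises a `torch.hub` entry point; the sketch below assumes the `hustvl/yolop` hub path and the three-output forward signature, neither of which is stated in this card:

```python
import torch

# Assumption: the upstream YOLOP repository exposes a torch.hub entry point named "yolop".
model = torch.hub.load("hustvl/yolop", "yolop", pretrained=True)
model.eval()

img = torch.randn(1, 3, 640, 640)  # dummy RGB input at the expected resolution
with torch.no_grad():
    det_out, da_seg_out, ll_seg_out = model(img)  # detection, drivable-area and lane-line outputs
```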
huggingartists/agata-christie
huggingartists
2021-09-10T09:07:11Z
10
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/agata-christie", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/agata-christie tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/61b6b0a0b7f6587d1b33542d5c18ad3c.489x489x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Агата Кристи (Agata Christie)</div> <a href="https://genius.com/artists/agata-christie"> <div style="text-align: center; font-size: 14px;">@agata-christie</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Агата Кристи (Agata Christie). The dataset is available [here](https://huggingface.co/datasets/huggingartists/agata-christie) and can be loaded with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/agata-christie") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1dtf6ia5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Агата Кристи (Agata Christie)'s lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/q27fvz1h) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/q27fvz1h/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/agata-christie') generator("I am", num_return_sequences=5) ``` Or with the Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/agata-christie") model = AutoModelWithLMHead.from_pretrained("huggingartists/agata-christie") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingartists/the-velvet-underground
huggingartists
2021-09-10T09:04:08Z
6
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/the-velvet-underground", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/the-velvet-underground tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://s3.amazonaws.com/rapgenius/vu.jpeg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">The Velvet Underground</div> <a href="https://genius.com/artists/the-velvet-underground"> <div style="text-align: center; font-size: 14px;">@the-velvet-underground</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from The Velvet Underground. The dataset is available [here](https://huggingface.co/datasets/huggingartists/the-velvet-underground) and can be loaded with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/the-velvet-underground") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/lbkqy84q/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on The Velvet Underground's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1e4s74q4) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1e4s74q4/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/the-velvet-underground') generator("I am", num_return_sequences=5) ``` Or with the Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/the-velvet-underground") model = AutoModelWithLMHead.from_pretrained("huggingartists/the-velvet-underground") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingartists/enigma
huggingartists
2021-09-10T08:57:05Z
6
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/enigma", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/enigma tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/4b5472082f220eb9c2ca6b22f4d12f45.1000x1000x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Enigma</div> <a href="https://genius.com/artists/enigma"> <div style="text-align: center; font-size: 14px;">@enigma</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Enigma. The dataset is available [here](https://huggingface.co/datasets/huggingartists/enigma) and can be loaded with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/enigma") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/8bx90lw6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Enigma's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1c1t20ji) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1c1t20ji/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/enigma') generator("I am", num_return_sequences=5) ``` Or with the Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/enigma") model = AutoModelWithLMHead.from_pretrained("huggingartists/enigma") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the artist's lyrics further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingartists/mf-doom
huggingartists
2021-09-10T07:07:44Z
4
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/mf-doom", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/mf-doom tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/263743633b6e58854e753b25dca6beab.430x430x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">MF DOOM</div> <a href="https://genius.com/artists/mf-doom"> <div style="text-align: center; font-size: 14px;">@mf-doom</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from MF DOOM. The dataset is available [here](https://huggingface.co/datasets/huggingartists/mf-doom) and can be loaded with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/mf-doom") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/3lhrsfds/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on MF DOOM's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/vw48qbeh) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/vw48qbeh/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/mf-doom') generator("I am", num_return_sequences=5) ``` Or with the Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/mf-doom") model = AutoModelWithLMHead.from_pretrained("huggingartists/mf-doom") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the artist's lyrics further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingartists/yung-plague
huggingartists
2021-09-10T06:49:38Z
6
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/yung-plague", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/yung-plague tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/6c0f8e02f467c694379f242ea2897efd.1000x1000x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Yung Plague</div> <a href="https://genius.com/artists/yung-plague"> <div style="text-align: center; font-size: 14px;">@yung-plague</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Yung Plague. The dataset is available [here](https://huggingface.co/datasets/huggingartists/yung-plague) and can be loaded with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/yung-plague") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/9hz73kye/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Yung Plague's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/28boe4q8) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/28boe4q8/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/yung-plague') generator("I am", num_return_sequences=5) ``` Or with the Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/yung-plague") model = AutoModelWithLMHead.from_pretrained("huggingartists/yung-plague") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the artist's lyrics further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
eugenesiow/rcan-bam
eugenesiow
2021-09-09T07:01:39Z
33
0
transformers
[ "transformers", "RCAN", "super-image", "image-super-resolution", "dataset:eugenesiow/Div2k", "dataset:eugenesiow/Set5", "dataset:eugenesiow/Set14", "dataset:eugenesiow/BSD100", "dataset:eugenesiow/Urban100", "arxiv:1807.02758", "arxiv:2104.07566", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - super-image - image-super-resolution datasets: - eugenesiow/Div2k - eugenesiow/Set5 - eugenesiow/Set14 - eugenesiow/BSD100 - eugenesiow/Urban100 metrics: - psnr - ssim --- # Residual Channel Attention Networks (RCAN) RCAN model pre-trained on DIV2K (800 training images, augmented to 4000 images, with 100 validation images) for 2x, 3x and 4x image super resolution. It was introduced in the paper [Image Super-Resolution Using Very Deep Residual Channel Attention Networks](https://arxiv.org/abs/1807.02758) by Zhang et al. (2018) and first released in [this repository](https://github.com/yulunzhang/RCAN). The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling and the model upscaling. ![Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 4](images/rcan_4_4_compare.png "Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 4") ## Model description Convolutional neural network (CNN) depth is of crucial importance for image super-resolution (SR). However, we observe that deeper networks for image SR are more difficult to train. The low-resolution inputs and features contain abundant low-frequency information, which is treated equally across channels, hence hindering the representational ability of CNNs. To solve these problems, we propose the very deep residual channel attention networks (RCAN). Specifically, we propose a residual in residual (RIR) structure to form a very deep network, which consists of several residual groups with long skip connections. Each residual group contains some residual blocks with short skip connections. Meanwhile, RIR allows abundant low-frequency information to be bypassed through multiple skip connections, making the main network focus on learning high-frequency information. Furthermore, we propose a channel attention mechanism to adaptively rescale channel-wise features by considering interdependencies among channels. Extensive experiments show that our RCAN achieves better accuracy and visual improvements against state-of-the-art methods. This model also applies the balanced attention (BAM) method invented by [Wang et al. (2021)](https://arxiv.org/abs/2104.07566) to further improve the results. ## Intended uses & limitations You can use the pre-trained models for upscaling your images 2x, 3x and 4x. You can also use the trainer to train a model on your own dataset.
### How to use The model can be used with the [super_image](https://github.com/eugenesiow/super-image) library: ```bash pip install super-image ``` Here is how to use a pre-trained model to upscale your image: ```python from super_image import RcanModel, ImageLoader from PIL import Image import requests url = 'https://paperswithcode.com/media/datasets/Set5-0000002728-07a9793f_zA3bDjj.jpg' image = Image.open(requests.get(url, stream=True).raw) model = RcanModel.from_pretrained('eugenesiow/rcan-bam', scale=2) # scale 2, 3 and 4 models available inputs = ImageLoader.load_image(image) preds = model(inputs) ImageLoader.save_image(preds, './scaled_2x.png') # save the output 2x scaled image to `./scaled_2x.png` ImageLoader.save_compare(inputs, preds, './scaled_2x_compare.png') # save an output comparing the super-image with a bicubic scaling ``` [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Upscale_Images_with_Pretrained_super_image_Models.ipynb "Open in Colab") ## Training data The models for 2x, 3x and 4x image super resolution were pretrained on [DIV2K](https://huggingface.co/datasets/eugenesiow/Div2k), a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900). ## Training procedure ### Preprocessing We follow the pre-processing and training method of [Wang et al.](https://arxiv.org/abs/2104.07566). Low Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times. During training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches. Data augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image. We need the huggingface [datasets](https://huggingface.co/datasets?filter=task_ids:other-other-image-super-resolution) library to download the data: ```bash pip install datasets ``` The following code gets the data and preprocesses/augments the data. ```python from datasets import load_dataset from super_image.data import EvalDataset, TrainDataset, augment_five_crop augmented_dataset = load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='train')\ .map(augment_five_crop, batched=True, desc="Augmenting Dataset") # download and augment the data with the five_crop method train_dataset = TrainDataset(augmented_dataset) # prepare the train dataset for loading PyTorch DataLoader eval_dataset = EvalDataset(load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='validation')) # prepare the eval dataset for the PyTorch DataLoader ``` ### Pretraining The model was trained on GPU. 
The training code is provided below: ```python from super_image import Trainer, TrainingArguments, RcanModel, RcanConfig training_args = TrainingArguments( output_dir='./results', # output directory num_train_epochs=1000, # total number of training epochs ) config = RcanConfig( scale=4, # train a model to upscale 4x bam=True, # apply balanced attention to the network ) model = RcanModel(config) trainer = Trainer( model=model, # the instantiated model to be trained args=training_args, # training arguments, defined above train_dataset=train_dataset, # training dataset eval_dataset=eval_dataset # evaluation dataset ) trainer.train() ``` [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Train_super_image_Models.ipynb "Open in Colab") ## Evaluation results The evaluation metrics include [PSNR](https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio#Quality_estimation_with_PSNR) and [SSIM](https://en.wikipedia.org/wiki/Structural_similarity#Algorithm). Evaluation datasets include: - Set5 - [Bevilacqua et al. (2012)](https://huggingface.co/datasets/eugenesiow/Set5) - Set14 - [Zeyde et al. (2010)](https://huggingface.co/datasets/eugenesiow/Set14) - BSD100 - [Martin et al. (2001)](https://huggingface.co/datasets/eugenesiow/BSD100) - Urban100 - [Huang et al. (2015)](https://huggingface.co/datasets/eugenesiow/Urban100) The results columns below are represented below as `PSNR/SSIM`. They are compared against a Bicubic baseline. |Dataset |Scale |Bicubic |rcan-bam | |--- |--- |--- |--- | |Set5 |2x |33.64/0.9292 |**** | |Set5 |3x |30.39/0.8678 |**** | |Set5 |4x |28.42/0.8101 |**30.8/0.8701** | |Set14 |2x |30.22/0.8683 |**** | |Set14 |3x |27.53/0.7737 |**** | |Set14 |4x |25.99/0.7023 |**27.91/0.7648** | |BSD100 |2x |29.55/0.8425 |**** | |BSD100 |3x |27.20/0.7382 |**** | |BSD100 |4x |25.96/0.6672 |**27.91/0.7477** | |Urban100 |2x |26.66/0.8408 |**** | |Urban100 |3x | |**** | |Urban100 |4x |23.14/0.6573 |**24.75/0.7346** | ![Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2](images/rcan_2_4_compare.png "Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2") You can find a notebook to easily run evaluation on pretrained models below: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Evaluate_Pretrained_super_image_Models.ipynb "Open in Colab") ## BibTeX entry and citation info ```bibtex @misc{wang2021bam, title={BAM: A Lightweight and Efficient Balanced Attention Mechanism for Single Image Super Resolution}, author={Fanyi Wang and Haotian Hu and Cheng Shen}, year={2021}, eprint={2104.07566}, archivePrefix={arXiv}, primaryClass={eess.IV} } ``` ```bibtex @misc{zhang2018image, title={Image Super-Resolution Using Very Deep Residual Channel Attention Networks}, author={Yulun Zhang and Kunpeng Li and Kai Li and Lichen Wang and Bineng Zhong and Yun Fu}, year={2018}, eprint={1807.02758}, archivePrefix={arXiv}, primaryClass={cs.CV} } ```
rizky22/IndoBERT
rizky22
2021-09-09T05:33:05Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
https://sites.google.com/view/watchonline-full-hd-we-need-to/ https://sites.google.com/view/watch-hdthegateway2021fullmovi/ https://sites.google.com/view/downloadwatch-hdwildindian2021/ https://sites.google.com/view/putlocker123movieswatchkaren20/ https://sites.google.com/view/full-hdzone4142021moviewatchon/ https://sites.google.com/view/watch-hdmalignant2021onlinemov/ https://sites.google.com/view/watch-the-card-counter-2021-fu/ https://sites.google.com/view/queenpins2021onlinemoviefullhd/ https://sites.google.com/view/watch-hdsmallenginerepair2021f/ https://sites.google.com/view/shang-chi-watch/ https://sites.google.com/view/watch-vivo2021-online-free/ https://sites.google.com/view/watch-free-guy-download/ https://sites.google.com/view/hd-yakuza-princess-20/ https://www.metooo.io/e/watch-free-blue-bayou-2021-hd-movies-full-online-4k-uhd https://www.metooo.io/e/123movies-hd-watch-the-card-counter-online-movie-2021-full-free-download0 https://www.peacefirst.org/user-profile/cry-macho-2021-movie-online-full-hd-1 https://ok.ru/group/63840774127847/topic/153545931483367 https://medium.com/@arbor.hooper/123movies-watch-the-card-counter-2021-movie-online-full-free-download-1382366cc20a http://perencanaan.setjen.pertanian.go.id/index.php/forum/baca/123movies-watch-we-need-to-do-something-2021-movie-online-full-free-download-in-hd
huggingtweets/nyjetstfmedia
huggingtweets
2021-09-08T23:03:30Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1311073545955540992/rbt45-4D_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Harrison Glaser</div> <div style="text-align: center; font-size: 14px;">@nyjetstfmedia</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Harrison Glaser. | Data | Harrison Glaser | | --- | --- | | Tweets downloaded | 3250 | | Retweets | 370 | | Short tweets | 795 | | Tweets kept | 2085 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/19vyig3e/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @nyjetstfmedia's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/31aujunb) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/31aujunb/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/nyjetstfmedia') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
elisno/is_core_web_trf
elisno
2021-09-08T21:19:54Z
4
0
spacy
[ "spacy", "token-classification", "is", "model-index", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- tags: - spacy - token-classification language: - is model-index: - name: is_core_web_trf results: - task: name: NER type: token-classification metrics: - name: NER Precision type: precision value: 0.9193318395 - name: NER Recall type: recall value: 0.9217728758 - name: NER F Score type: f_score value: 0.9205507394 --- | Feature | Description | | --- | --- | | **Name** | `is_core_web_trf` | | **Version** | `0.0.0` | | **spaCy** | `>=3.1.1,<3.2.0` | | **Default Pipeline** | `transformer`, `ner`, `tagger`, `parser` | | **Components** | `transformer`, `ner`, `tagger`, `parser` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | n/a | | **License** | n/a | | **Author** | [n/a]() | ### Label Scheme <details> <summary>View label scheme (591 labels for 3 components)</summary> | Component | Labels | | --- | --- | | **`ner`** | `Date`, `Location`, `Miscellaneous`, `Money`, `Organization`, `Percent`, `Person`, `Time` | | **`tagger`** | `aa`, `aae`, `aam`, `af`, `afe`, `afm`, `au`, `c`, `cn`, `ct`, `e`, `fahee`, `fahen`, `faheo`, `faheþ`, `fahfe`, `fahfn`, `fahfo`, `fahfþ`, `fakee`, `faken`, `fakeo`, `fakeþ`, `fakfe`, `fakfn`, `fakfo`, `fakfþ`, `favee`, `faven`, `faveo`, `faveþ`, `favfe`, `favfn`, `favfo`, `favfþ`, `fbhee`, `fbhen`, `fbheo`, `fbheþ`, `fbhfe`, `fbhfn`, `fbhfo`, `fbhfþ`, `fbkee`, `fbken`, `fbkeo`, `fbkeþ`, `fbkfe`, `fbkfn`, `fbkfo`, `fbkfþ`, `fbvee`, `fbven`, `fbveo`, `fbveþ`, `fbvfe`, `fbvfn`, `fbvfo`, `fbvfþ`, `fehee`, `fehen`, `feheo`, `feheþ`, `fehfe`, `fehfn`, `fehfo`, `fehfþ`, `fekee`, `feken`, `fekeo`, `fekeþ`, `fekfe`, `fekfn`, `fekfo`, `fekfþ`, `fevee`, `feven`, `feveo`, `feveþ`, `fevfe`, `fevfn`, `fevfo`, `fevfþ`, `fohee`, `fohen`, `foheo`, `foheþ`, `fohfe`, `fohfn`, `fohfo`, `fohfþ`, `fokee`, `foken`, `fokeo`, `fokeþ`, `fokfe`, `fokfn`, `fokfo`, `fokfþ`, `fovee`, `foven`, `foveo`, `foveþ`, `fovfe`, `fovfn`, `fovfo`, `fovfþ`, `fp1ee`, `fp1en`, `fp1eo`, `fp1eþ`, `fp1fe`, `fp1fn`, `fp1fo`, `fp1fþ`, `fp2ee`, `fp2en`, `fp2eo`, `fp2eþ`, `fp2fe`, `fp2fn`, `fp2fo`, `fp2fþ`, `fphee`, `fphen`, `fpheo`, `fpheþ`, `fphfe`, `fphfn`, `fphfo`, `fphfþ`, `fpkee`, `fpken`, `fpkeo`, `fpkeþ`, `fpkfe`, `fpkfn`, `fpkfo`, `fpkfþ`, `fpvee`, `fpven`, `fpveo`, `fpveþ`, `fpvfe`, `fpvfn`, `fpvfo`, `fpvfþ`, `fshee`, `fshen`, `fsheo`, `fsheþ`, `fshfe`, `fshfn`, `fshfo`, `fshfþ`, `fskee`, `fsken`, `fskeo`, `fskeþ`, `fskfe`, `fskfn`, `fskfo`, `fskfþ`, `fsvee`, `fsven`, `fsveo`, `fsveþ`, `fsvfe`, `fsvfn`, `fsvfo`, `fsvfþ`, `ghee`, `ghen`, `gheo`, `gheþ`, `ghfe`, `ghfn`, `ghfo`, `ghfþ`, `gkee`, `gken`, `gkeo`, `gkeþ`, `gkfe`, `gkfn`, `gkfo`, `gkfþ`, `gvee`, `gven`, `gveo`, `gveþ`, `gvfe`, `gvfn`, `gvfo`, `gvfþ`, `ks`, `kt`, `lheeof`, `lheesf`, `lheeve`, `lheevf`, `lheevm`, `lhenof`, `lhense`, `lhensf`, `lhenve`, `lhenvf`, `lhenvm`, `lheoof`, `lheose`, `lheosf`, `lheosm`, `lheove`, `lheovf`, `lheovm`, `lheþof`, `lheþse`, `lheþsf`, `lheþve`, `lheþvf`, `lheþvm`, `lhfeof`, `lhfese`, `lhfesf`, `lhfeve`, `lhfevf`, `lhfevm`, `lhfnof`, `lhfnse`, `lhfnsf`, `lhfnve`, `lhfnvf`, `lhfnvm`, `lhfoof`, `lhfose`, `lhfosf`, `lhfove`, `lhfovf`, `lhfovm`, `lhfþof`, `lhfþse`, `lhfþsf`, `lhfþve`, `lhfþvf`, `lhfþvm`, `lkeeof`, `lkeesf`, `lkeeve`, `lkeevf`, `lkeevm`, `lkenof`, `lkense`, `lkensf`, `lkenve`, `lkenvf`, `lkenvm`, `lkeoof`, `lkeose`, `lkeosf`, `lkeove`, `lkeovf`, `lkeovm`, `lkeþof`, `lkeþse`, `lkeþsf`, `lkeþve`, `lkeþvf`, `lkeþvm`, `lkfeof`, `lkfese`, `lkfesf`, `lkfeve`, `lkfevf`, `lkfevm`, `lkfnof`, `lkfnse`, `lkfnsf`, `lkfnve`, `lkfnvf`, `lkfnvm`, `lkfoof`, `lkfose`, `lkfosf`, `lkfove`, 
`lkfovf`, `lkfovm`, `lkfþof`, `lkfþse`, `lkfþsf`, `lkfþsm`, `lkfþve`, `lkfþvf`, `lkfþvm`, `lveeof`, `lveese`, `lveesf`, `lveeve`, `lveevf`, `lveevm`, `lvenof`, `lvense`, `lvensf`, `lvenve`, `lvenvf`, `lvenvm`, `lveoof`, `lveose`, `lveosf`, `lveove`, `lveovf`, `lveovm`, `lveþof`, `lveþse`, `lveþsf`, `lveþve`, `lveþvf`, `lveþvm`, `lvfeof`, `lvfese`, `lvfesf`, `lvfeve`, `lvfevf`, `lvfevm`, `lvfnof`, `lvfnse`, `lvfnsf`, `lvfnve`, `lvfnvf`, `lvfnvm`, `lvfoof`, `lvfose`, `lvfosf`, `lvfove`, `lvfovf`, `lvfovm`, `lvfþof`, `lvfþse`, `lvfþsf`, `lvfþsm`, `lvfþve`, `lvfþvf`, `lvfþvm`, `m`, `n----s`, `n-ee`, `n-ee-s`, `n-en`, `n-en-s`, `n-eng`, `n-eo`, `n-eo-s`, `n-eþ`, `n-eþ-s`, `n-fn`, `nhee`, `nhee-s`, `nheeg`, `nheegs`, `nhen`, `nhen-s`, `nheng`, `nhengs`, `nheo`, `nheo-s`, `nheog`, `nheogs`, `nheþ`, `nheþ-s`, `nheþg`, `nheþgs`, `nhfe`, `nhfe-s`, `nhfeg`, `nhfegs`, `nhfn`, `nhfn-s`, `nhfng`, `nhfngs`, `nhfo`, `nhfo-s`, `nhfog`, `nhfogs`, `nhfþ`, `nhfþ-s`, `nhfþg`, `nhfþgs`, `nkee`, `nkee-s`, `nkeeg`, `nkeegs`, `nken`, `nken-s`, `nkeng`, `nkengs`, `nkeo`, `nkeo-s`, `nkeog`, `nkeogs`, `nkeþ`, `nkeþ-s`, `nkeþg`, `nkeþgs`, `nkfe`, `nkfe-s`, `nkfeg`, `nkfegs`, `nkfn`, `nkfn-s`, `nkfng`, `nkfngs`, `nkfo`, `nkfo-s`, `nkfog`, `nkfogs`, `nkfþ`, `nkfþ-s`, `nkfþg`, `nkfþgs`, `nvee`, `nvee-s`, `nveeg`, `nveegs`, `nven`, `nven-s`, `nveng`, `nvengs`, `nveo`, `nveo-s`, `nveog`, `nveogs`, `nveþ`, `nveþ-s`, `nveþg`, `nveþgs`, `nvfe`, `nvfe-s`, `nvfeg`, `nvfegs`, `nvfn`, `nvfn-s`, `nvfng`, `nvfngs`, `nvfo`, `nvfo-s`, `nvfog`, `nvfogs`, `nvfþ`, `nvfþ-s`, `nvfþg`, `nvfþgs`, `pa`, `pg`, `pk`, `pl`, `sbg2en`, `sbg2fn`, `sbm2en`, `sbm2fn`, `sfg1en`, `sfg1eþ`, `sfg1fn`, `sfg1fþ`, `sfg2en`, `sfg2eþ`, `sfg2fn`, `sfg2fþ`, `sfg3en`, `sfg3eþ`, `sfg3fn`, `sfg3fþ`, `sfm1en`, `sfm1eþ`, `sfm1fn`, `sfm1fþ`, `sfm2en`, `sfm2eþ`, `sfm2fn`, `sfm2fþ`, `sfm3en`, `sfm3eþ`, `sfm3fn`, `sfm3fþ`, `slg`, `sng`, `snm`, `svg1en`, `svg1eþ`, `svg1fn`, `svg1fþ`, `svg2en`, `svg2eþ`, `svg2fn`, `svg2fþ`, `svg3en`, `svg3eþ`, `svg3fn`, `svg3fþ`, `svm1en`, `svm1eþ`, `svm1fn`, `svm1fþ`, `svm2en`, `svm2eþ`, `svm2fn`, `svm3en`, `svm3eþ`, `svm3fn`, `svm3fþ`, `sþghen`, `sþgheo`, `sþghfn`, `sþghfo`, `sþgken`, `sþgkeo`, `sþgkfn`, `sþgkfo`, `sþgven`, `sþgveo`, `sþgvfn`, `sþgvfo`, `sþgvfþ`, `sþmhen`, `sþmheo`, `sþmken`, `sþmven`, `ta`, `tfhee`, `tfhen`, `tfheo`, `tfheþ`, `tfhfe`, `tfhfn`, `tfhfo`, `tfhfþ`, `tfkee`, `tfken`, `tfkeo`, `tfkeþ`, `tfkfe`, `tfkfn`, `tfkfo`, `tfkfþ`, `tfvee`, `tfven`, `tfveo`, `tfveþ`, `tfvfe`, `tfvfn`, `tfvfo`, `tfvfþ`, `to`, `tp`, `v`, `x` | | **`parser`** | `ROOT`, `acl`, `acl:relcl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `case`, `cc`, `ccomp`, `compound:prt`, `conj`, `cop`, `dep`, `det`, `fixed`, `flat:name`, `mark`, `nmod`, `nmod:poss`, `nsubj`, `nummod`, `obj`, `obl`, `obl:arg`, `parataxis`, `punct`, `xcomp` | </details> ### Accuracy | Type | Score | | --- | --- | | `ENTS_F` | 92.06 | | `ENTS_P` | 91.93 | | `ENTS_R` | 92.18 | | `TRANSFORMER_LOSS` | 248325.98 | | `NER_LOSS` | 120059.07 |
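The card above does not show how to load the pipeline. A minimal sketch, assuming the packaged `is_core_web_trf` wheel from this repository has already been installed (the exact wheel filename depends on the released version, and the example sentence is only illustrative):

```python
import spacy

# Assumes the is_core_web_trf package has been installed from this repository's
# released wheel; spacy.load() then resolves it like any other installed pipeline.
nlp = spacy.load("is_core_web_trf")

# Illustrative Icelandic sentence; entity labels follow the scheme listed above.
doc = nlp("Halldór Laxness fæddist í Reykjavík árið 1902.")
for ent in doc.ents:
    print(ent.text, ent.label_)
```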
Ashkanmh/bert-base-parsbert-uncased-finetuned
Ashkanmh
2021-09-08T20:56:15Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- tags: - generated_from_trainer model-index: - name: bert-base-parsbert-uncased-finetuned results: - task: name: Masked Language Modeling type: fill-mask --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-parsbert-uncased-finetuned This model is a fine-tuned version of [HooshvareLab/bert-base-parsbert-uncased](https://huggingface.co/HooshvareLab/bert-base-parsbert-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.2045 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.5596 | 1.0 | 515 | 3.2097 | ### Framework versions - Transformers 4.10.0 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
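The card lists no usage snippet. A minimal sketch, assuming the checkpoint is served through the standard fill-mask pipeline (the Persian prompt is only illustrative):

```python
from transformers import pipeline

# Load the fine-tuned ParsBERT checkpoint for masked-token prediction.
fill_mask = pipeline("fill-mask", model="Ashkanmh/bert-base-parsbert-uncased-finetuned")

# Illustrative Persian prompt: "I am interested in [MASK]."
for prediction in fill_mask("من به [MASK] علاقه دارم."):
    print(prediction["token_str"], round(prediction["score"], 4))
```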
huggingtweets/brad_buchsbaum
huggingtweets
2021-09-08T19:43:10Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1393736501838721031/DCd35uGN_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">bbuchsbaum</div> <div style="text-align: center; font-size: 14px;">@brad_buchsbaum</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from bbuchsbaum. | Data | bbuchsbaum | | --- | --- | | Tweets downloaded | 1346 | | Retweets | 125 | | Short tweets | 53 | | Tweets kept | 1168 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/uivlvhob/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @brad_buchsbaum's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/34xkida2) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/34xkida2/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/brad_buchsbaum') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
LeoCordoba/beto2beto
LeoCordoba
2021-09-08T16:31:21Z
23
0
transformers
[ "transformers", "pytorch", "encoder-decoder", "text2text-generation", "text-generation", "spanish", "beto", "es", "dataset:LeoCordoba/CC-NEWS-ES", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
--- language: es tags: - text-generation - spanish - encoder-decoder - beto license: apache-2.0 datasets: - LeoCordoba/CC-NEWS-ES model-index: - name: beto2beto --- ## beto2beto Usage example here: https://colab.research.google.com/drive/18a2ZfF1e_Kyyydlv8INQIkJbv294xcAm?usp=sharing Trained for 3 epochs on CC-NEWS-ES (2019), approximately 68,000 steps. Encoder max length: 40. Decoder max length: 128. ## Hyperparameters ## Usage ## Results | key | value | | --- | ----- | | test_loss | 2.65148806571960452 |
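A minimal usage sketch, assuming the checkpoint loads as a standard 🤗 `EncoderDecoderModel` (the prompt and generation settings are illustrative, not the ones used for evaluation):

```python
from transformers import AutoTokenizer, EncoderDecoderModel

tokenizer = AutoTokenizer.from_pretrained("LeoCordoba/beto2beto")
model = EncoderDecoderModel.from_pretrained("LeoCordoba/beto2beto")

# Encoder max length 40 and decoder max length 128, as noted above.
text = "La inteligencia artificial avanza rápidamente en América Latina."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=40)

# Illustrative generation settings; the card does not document the ones used.
output_ids = model.generate(inputs.input_ids, max_length=128, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```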
nateraw/timm-resnet50-beans
nateraw
2021-09-07T17:21:50Z
14
1
timm
[ "timm", "pytorch", "image-classification", "region:us" ]
image-classification
2022-03-02T23:29:05Z
--- tags: - image-classification - timm library_tag: timm --- # Model card for `timm-resnet50-beans` **TODO** **For now, try dragging and dropping this image into the inference widget. It should classify as angular_leaf_spot.** ![leaf_example](angular_leaf_spot_train.304.jpg)
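Until the TODO above is filled in, a minimal loading sketch, assuming a recent `timm` release with Hugging Face Hub support (older releases spelled the prefix `hf_hub:` rather than `hf-hub:`):

```python
import timm
import torch
from PIL import Image
from timm.data import resolve_data_config
from timm.data.transforms_factory import create_transform

# Load the fine-tuned ResNet-50 directly from the Hub.
model = timm.create_model("hf-hub:nateraw/timm-resnet50-beans", pretrained=True)
model.eval()

# Rebuild the preprocessing the model expects from its pretrained config.
config = resolve_data_config({}, model=model)
transform = create_transform(**config)

# The example image referenced by the card above.
image = Image.open("angular_leaf_spot_train.304.jpg").convert("RGB")
with torch.no_grad():
    logits = model(transform(image).unsqueeze(0))
print(logits.softmax(dim=-1).argmax(dim=-1).item())
```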
kamalkraj/bioelectra-base-discriminator-pubmed
kamalkraj
2021-09-07T13:52:16Z
810
6
transformers
[ "transformers", "pytorch", "electra", "pretraining", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
## BioELECTRA:Pretrained Biomedical text Encoder using Discriminators Recent advancements in pretraining strategies in NLP have shown a significant improvement in the performance of models on various text mining tasks. In this paper, we introduce BioELECTRA, a biomedical domain-specific language encoder model that adapts ELECTRA (Clark et al., 2020) for the Biomedical domain. BioELECTRA outperforms the previous models and achieves state of the art (SOTA) on all the 13 datasets in BLURB benchmark and on all the 4 Clinical datasets from BLUE Benchmark across 7 NLP tasks. BioELECTRA pretrained on PubMed and PMC full text articles performs very well on Clinical datasets as well. BioELECTRA achieves new SOTA 86.34%(1.39% accuracy improvement) on MedNLI and 64% (2.98% accuracy improvement) on PubMedQA dataset. For a detailed description and experimental results, please refer to our paper [BioELECTRA:Pretrained Biomedical text Encoder using Discriminators](https://www.aclweb.org/anthology/2021.bionlp-1.16/). Cite our paper using below citation ``` @inproceedings{kanakarajan-etal-2021-bioelectra, title = "{B}io{ELECTRA}:Pretrained Biomedical text Encoder using Discriminators", author = "Kanakarajan, Kamal raj and Kundumani, Bhuvana and Sankarasubbu, Malaikannan", booktitle = "Proceedings of the 20th Workshop on Biomedical Language Processing", month = jun, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.bionlp-1.16", doi = "10.18653/v1/2021.bionlp-1.16", pages = "143--154", abstract = "Recent advancements in pretraining strategies in NLP have shown a significant improvement in the performance of models on various text mining tasks. We apply {`}replaced token detection{'} pretraining technique proposed by ELECTRA and pretrain a biomedical language model from scratch using biomedical text and vocabulary. We introduce BioELECTRA, a biomedical domain-specific language encoder model that adapts ELECTRA for the Biomedical domain. WE evaluate our model on the BLURB and BLUE biomedical NLP benchmarks. BioELECTRA outperforms the previous models and achieves state of the art (SOTA) on all the 13 datasets in BLURB benchmark and on all the 4 Clinical datasets from BLUE Benchmark across 7 different NLP tasks. BioELECTRA pretrained on PubMed and PMC full text articles performs very well on Clinical datasets as well. BioELECTRA achieves new SOTA 86.34{\%}(1.39{\%} accuracy improvement) on MedNLI and 64{\%} (2.98{\%} accuracy improvement) on PubMedQA dataset.", } ``` ## How to use the discriminator in `transformers` ```python from transformers import ElectraForPreTraining, ElectraTokenizerFast import torch discriminator = ElectraForPreTraining.from_pretrained("kamalkraj/bioelectra-base-discriminator-pubmed") tokenizer = ElectraTokenizerFast.from_pretrained("kamalkraj/bioelectra-base-discriminator-pubmed") sentence = "The quick brown fox jumps over the lazy dog" fake_sentence = "The quick brown fox fake over the lazy dog" fake_tokens = tokenizer.tokenize(fake_sentence) fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt") discriminator_outputs = discriminator(fake_inputs) predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2) [print("%7s" % token, end="") for token in fake_tokens] [print("%7s" % int(prediction), end="") for prediction in predictions[0].tolist()] ```
RJ3vans/CLNspanTagger
RJ3vans
2021-09-07T13:24:46Z
4
0
transformers
[ "transformers", "pytorch", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:04Z
This model identifies compound nouns in input sentences. Try the test sentence: I love apples [and] potatoes. Accuracy is best when you place square brackets around the coordinating conjunction. The model was derived using code adapted from an original program written by Dr. Le An Ha at the University of Wolverhampton.
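A minimal usage sketch, assuming the checkpoint works with the standard 🤗 token-classification pipeline (the printed label names are whatever the checkpoint defines):

```python
from transformers import pipeline

# Load the span tagger as a token-classification pipeline.
tagger = pipeline("token-classification", model="RJ3vans/CLNspanTagger")

# As recommended above, place square brackets around the coordinating conjunction.
for tag in tagger("I love apples [and] potatoes."):
    print(tag["word"], tag["entity"], round(tag["score"], 3))
```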
yannobla/Sunshine2
yannobla
2021-09-07T11:41:51Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("Hello I'm a [MASK] model.")

[{'sequence': "[CLS] hello i'm a fashion model. [SEP]",
  'score': 0.1073106899857521,
  'token': 4827,
  'token_str': 'fashion'},
 {'sequence': "[CLS] hello i'm a role model. [SEP]",
  'score': 0.08774490654468536,
  'token': 2535,
  'token_str': 'role'},
 {'sequence': "[CLS] hello i'm a new model. [SEP]",
  'score': 0.05338378623127937,
  'token': 2047,
  'token_str': 'new'},
 {'sequence': "[CLS] hello i'm a super model. [SEP]",
  'score': 0.04667217284440994,
  'token': 3565,
  'token_str': 'super'},
 {'sequence': "[CLS] hello i'm a fine model. [SEP]",
  'score': 0.027095865458250046,
  'token': 2986,
  'token_str': 'fine'}]
```

```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained("bert-base-uncased")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
pritoms/gpt-neo-125M-finetuned-pgt
pritoms
2021-09-07T08:20:52Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt_neo", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - null model-index: - name: gpt-neo-125M-finetuned-pgt results: - task: name: Causal Language Modeling type: text-generation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt-neo-125M-finetuned-pgt This model is a fine-tuned version of [pritoms/gpt-neo-125M-finetuned-pgt](https://huggingface.co/pritoms/gpt-neo-125M-finetuned-pgt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.6026 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 26 | 1.5947 | | No log | 2.0 | 52 | 1.5963 | | No log | 3.0 | 78 | 1.6026 | ### Framework versions - Transformers 4.10.0 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
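The card lists no usage example. A minimal sketch, assuming the checkpoint is used through the standard text-generation pipeline (prompt and sampling settings are illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="pritoms/gpt-neo-125M-finetuned-pgt")

# Illustrative prompt and sampling settings.
outputs = generator(
    "The experiment began with",
    max_length=50,
    do_sample=True,
    top_p=0.95,
)
print(outputs[0]["generated_text"])
```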
espnet/xuankai_chang_librispeech_asr_train_asr_conformer7_wav2vec2_960hr_larg-truncated-5b94d9
espnet
2021-09-07T03:11:55Z
2
0
espnet
[ "espnet", "audio", "automatic-speech-recognition", "en", "dataset:librispeech", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - espnet - audio - automatic-speech-recognition language: en datasets: - librispeech license: cc-by-4.0 inference: false --- # ESPnet2 ASR pretrained model ## `Xuankai Chang/xuankai_chang_librispeech_asr_train_asr_conformer7_wav2vec2_960hr_large_raw_en_bpe5000_sp_25epoch, fs=16k, lang=en` This model was trained by Takashi Maekaku using librispeech recipe in [espnet](https://github.com/espnet/espnet/). ### Python API ```text See https://github.com/espnet/espnet_model_zoo ``` ### Evaluate in the recipe ```python # coming soon ``` ### Results ```bash # RESULTS ## Environments - date: `Sat Jul 3 23:10:19 JST 2021` - python version: `3.7.9 (default, Apr 23 2021, 13:48:31) [GCC 5.5.0 20171010]` - espnet version: `espnet 0.9.9` - pytorch version: `pytorch 1.7.0` - Git hash: `0f7558a716ab830d0c29da8785840124f358d47b` - Commit date: `Tue Jun 8 15:33:49 2021 -0400` ## asr_train_asr_conformer7_wav2vec2_960hr_large_raw_en_bpe5000_sp ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_clean|2703|54402|98.3|1.6|0.2|0.2|1.9|24.9| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_other|2864|50948|95.1|4.3|0.6|0.4|5.4|42.8| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_clean|2620|52576|98.1|1.7|0.2|0.2|2.2|26.8| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_other|2939|52343|95.3|4.1|0.6|0.5|5.2|45.8| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_clean|2703|288456|99.5|0.2|0.2|0.2|0.6|24.9| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_other|2864|265951|98.1|1.0|0.9|0.5|2.4|42.8| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_clean|2620|281530|99.5|0.2|0.3|0.2|0.7|26.8| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_other|2939|272758|98.3|0.8|0.9|0.5|2.3|45.8| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_clean|2703|68010|97.8|1.6|0.6|0.4|2.6|24.9| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_other|2864|63110|94.1|4.3|1.6|1.1|7.0|42.8| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_clean|2620|65818|97.6|1.6|0.8|0.4|2.8|26.8| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_other|2939|65101|94.3|4.0|1.8|1.0|6.7|45.8| ``` ### Training config See full config in [`config.yaml`](./exp/asr_train_asr_conformer7_hubert_960hr_large_raw_en_bpe5000_sp/config.yaml) ```yaml config: conf/tuning/train_asr_conformer7_hubert_960hr_large.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_conformer7_hubert_960hr_large_raw_en_bpe5000_sp ngpu: 3 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: 4 dist_rank: 3 local_rank: 3 dist_master_addr: localhost dist_master_port: 33643 dist_launcher: null multiprocessing_distributed: true cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true ```
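Until the recipe snippet above is filled in, a minimal inference sketch, assuming the checkpoint can be loaded through ESPnet2's `Speech2Text.from_pretrained` with `espnet_model_zoo` installed (the audio file name is illustrative; a 16 kHz mono recording is expected):

```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Assumes espnet, espnet_model_zoo and torch are installed.
speech2text = Speech2Text.from_pretrained(
    "espnet/xuankai_chang_librispeech_asr_train_asr_conformer7_wav2vec2_960hr_larg-truncated-5b94d9"
)

speech, rate = soundfile.read("sample_16k.wav")  # illustrative 16 kHz mono file
nbests = speech2text(speech)
text, *_ = nbests[0]  # each n-best entry is (text, tokens, token_ids, hypothesis)
print(text)
```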
espnet/xuankai_chang_librispeech_asr_train_asr_conformer7_hubert_960hr_large_raw_en_bpe5000_sp_26epoch
espnet
2021-09-07T03:05:41Z
2
0
espnet
[ "espnet", "audio", "automatic-speech-recognition", "en", "dataset:librispeech", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - espnet - audio - automatic-speech-recognition language: en datasets: - librispeech license: cc-by-4.0 inference: false --- # ESPnet2 ASR pretrained model ## `Xuankai Chang/xuankai_chang_librispeech_asr_train_asr_conformer7_hubert_960hr_large_raw_en_bpe5000_sp_26epoch, fs=16k, lang=en` This model was trained by Takashi Maekaku using librispeech recipe in [espnet](https://github.com/espnet/espnet/). ### Python API ```text See https://github.com/espnet/espnet_model_zoo ``` ### Evaluate in the recipe ```python # coming soon ``` ### Results ```bash # RESULTS ## Environments - date: `Fri Aug 6 11:44:39 JST 2021` - python version: `3.7.9 (default, Apr 23 2021, 13:48:31) [GCC 5.5.0 20171010]` - espnet version: `espnet 0.9.9` - pytorch version: `pytorch 1.7.0` - Git hash: `0f7558a716ab830d0c29da8785840124f358d47b` - Commit date: `Tue Jun 8 15:33:49 2021 -0400` ## asr_train_asr_conformer7_hubert_960hr_large_raw_en_bpe5000_sp ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_clean|2703|54402|98.5|1.3|0.2|0.2|1.7|22.1| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_other|2864|50948|96.8|2.8|0.4|0.3|3.4|33.7| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_clean|2620|52576|98.4|1.4|0.2|0.2|1.8|22.1| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_other|2939|52343|96.8|2.8|0.4|0.4|3.6|36.0| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_clean|2703|288456|99.6|0.2|0.2|0.2|0.6|22.1| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_other|2864|265951|98.8|0.6|0.6|0.3|1.5|33.7| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_clean|2620|281530|99.6|0.2|0.2|0.2|0.6|22.1| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_other|2939|272758|98.9|0.5|0.5|0.4|1.4|36.0| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_clean|2703|68010|98.2|1.3|0.5|0.4|2.2|22.1| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_other|2864|63110|96.0|2.8|1.2|0.6|4.6|33.7| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_clean|2620|65818|98.1|1.3|0.6|0.4|2.3|22.1| |decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_other|2939|65101|96.0|2.7|1.3|0.6|4.6|36.0| ``` ### Training config See full config in [`config.yaml`](./exp/asr_train_asr_conformer7_hubert_960hr_large_raw_en_bpe5000_sp/config.yaml) ```yaml config: conf/tuning/train_asr_conformer7_hubert_960hr_large.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_conformer7_hubert_960hr_large_raw_en_bpe5000_sp ngpu: 3 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: 4 dist_rank: 3 local_rank: 3 dist_master_addr: localhost dist_master_port: 33643 dist_launcher: null multiprocessing_distributed: true cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true ```
huggingtweets/discountpicasso-dril-liam_100000
huggingtweets
2021-09-07T00:14:05Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/discountpicasso-dril-liam_100000/1630973640579/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1426930394297819137/-zzMnfJo_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/847818629840228354/VXyQHfn0_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/980964012170121217/U6FjPH4H_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">LIAM & wint & Picasso</div> <div style="text-align: center; font-size: 14px;">@discountpicasso-dril-liam_100000</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from LIAM & wint & Picasso. | Data | LIAM | wint | Picasso | | --- | --- | --- | --- | | Tweets downloaded | 1962 | 3226 | 3216 | | Retweets | 135 | 472 | 427 | | Short tweets | 435 | 313 | 421 | | Tweets kept | 1392 | 2441 | 2368 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1w4ekve8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @discountpicasso-dril-liam_100000's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2s4a755y) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2s4a755y/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/discountpicasso-dril-liam_100000') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/liam_100000
huggingtweets
2021-09-06T23:32:16Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/liam_100000/1630971132171/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1426930394297819137/-zzMnfJo_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">LIAM</div> <div style="text-align: center; font-size: 14px;">@liam_100000</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from LIAM. | Data | LIAM | | --- | --- | | Tweets downloaded | 1960 | | Retweets | 135 | | Short tweets | 434 | | Tweets kept | 1391 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1sila7bw/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @liam_100000's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2bu2qvu3) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2bu2qvu3/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/liam_100000') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/formernumber
huggingtweets
2021-09-06T21:05:59Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/formernumber/1630962355855/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1430593525108903940/vrSks7ph_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">NaN</div> <div style="text-align: center; font-size: 14px;">@formernumber</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from NaN. | Data | NaN | | --- | --- | | Tweets downloaded | 3250 | | Retweets | 146 | | Short tweets | 554 | | Tweets kept | 2550 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1cmch3y4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @formernumber's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2iurxhit) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2iurxhit/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/formernumber') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/jenslennartsson
huggingtweets
2021-09-06T20:01:41Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/jenslennartsson/1630958497152/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1404750730670473221/dKZZf947_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Jens 🧲 | Email Marketing</div> <div style="text-align: center; font-size: 14px;">@jenslennartsson</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Jens 🧲 | Email Marketing. | Data | Jens 🧲 \| Email Marketing | | --- | --- | | Tweets downloaded | 3250 | | Retweets | 316 | | Short tweets | 346 | | Tweets kept | 2588 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1kaofe1s/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jenslennartsson's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3mdvlzx0) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3mdvlzx0/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/jenslennartsson') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
sv/gpt2-finetuned-nft-shakes-seuss
sv
2021-09-06T19:35:40Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - null model-index: - name: gpt2-finetuned-nft-shakes-seuss results: - task: name: Causal Language Modeling type: text-generation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-finetuned-nft-shakes-seuss This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.8505 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 4.2178 | 1.0 | 1095 | 4.0073 | | 3.9522 | 2.0 | 2190 | 3.8824 | | 3.8393 | 3.0 | 3285 | 3.8505 | ### Framework versions - Transformers 4.10.0 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
sv/gpt2-finetuned-nft-shakes
sv
2021-09-06T16:59:11Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - null model-index: - name: gpt2-finetuned-nft-shakes results: - task: name: Causal Language Modeling type: text-generation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-finetuned-nft-shakes This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.7566 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 306 | 3.9679 | | 4.2957 | 2.0 | 612 | 3.7979 | | 4.2957 | 3.0 | 918 | 3.7566 | ### Framework versions - Transformers 4.10.0 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
huggingtweets/cafe_orbitinnit
huggingtweets
2021-09-06T15:52:25Z
3
1
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/cafe_orbitinnit/1630943541910/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1429115399975497731/JZdA725e_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">✨たち Tommy’s an Orbit 🌙 たち✨</div> <div style="text-align: center; font-size: 14px;">@cafe_orbitinnit</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from ✨たち Tommy’s an Orbit 🌙 たち✨. | Data | ✨たち Tommy’s an Orbit 🌙 たち✨ | | --- | --- | | Tweets downloaded | 2242 | | Retweets | 1336 | | Short tweets | 323 | | Tweets kept | 583 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/qhrvba17/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cafe_orbitinnit's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2qnyhuxd) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2qnyhuxd/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/cafe_orbitinnit') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/matsu_bouzu
huggingtweets
2021-09-06T13:27:36Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/matsu_bouzu/1630934852210/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1398242436082638855/mvzIZACg_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">松本人志</div> <div style="text-align: center; font-size: 14px;">@matsu_bouzu</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from 松本人志. | Data | 松本人志 | | --- | --- | | Tweets downloaded | 808 | | Retweets | 30 | | Short tweets | 504 | | Tweets kept | 274 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/fwqkxzg7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @matsu_bouzu's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1af81o1n) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1af81o1n/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/matsu_bouzu') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
lewtun/metnet-test-5
lewtun
2021-09-06T11:01:50Z
2
0
transformers
[ "transformers", "pytorch", "satflow", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- license: mit tags: - satflow --- # MetNet ## Model description [More information needed] ## Intended uses & limitations [More information needed] ## How to use [More information needed] ## Limitations and bias [More information needed] ## Training data [More information needed] ## Training procedure [More information needed] ## Evaluation results [More information needed]
elisno/is_ner_mim_trf
elisno
2021-09-05T19:26:16Z
4
0
spacy
[ "spacy", "token-classification", "is", "model-index", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- tags: - spacy - token-classification language: - is model-index: - name: is_ner_mim_trf results: - task: name: NER type: token-classification metrics: - name: NER Precision type: precision value: 0.9193318395 - name: NER Recall type: recall value: 0.9217728758 - name: NER F Score type: f_score value: 0.9205507394 --- | Feature | Description | | --- | --- | | **Name** | `is_ner_mim_trf` | | **Version** | `0.0.1` | | **spaCy** | `>=3.1.1,<3.2.0` | | **Default Pipeline** | `transformer`, `ner` | | **Components** | `transformer`, `ner` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | n/a | | **License** | n/a | | **Author** | [n/a]() | ### Label Scheme <details> <summary>View label scheme (8 labels for 1 components)</summary> | Component | Labels | | --- | --- | | **`ner`** | `Date`, `Location`, `Miscellaneous`, `Money`, `Organization`, `Percent`, `Person`, `Time` | </details> ### Accuracy | Type | Score | | --- | --- | | `ENTS_F` | 92.06 | | `ENTS_P` | 91.93 | | `ENTS_R` | 92.18 | | `TRANSFORMER_LOSS` | 248325.98 | | `NER_LOSS` | 120059.07 |
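A minimal usage sketch (not part of the original card): assuming the packaged `is_ner_mim_trf` pipeline has been installed (for example via `pip` from the wheel in this repository), it can be loaded and applied like any other spaCy pipeline.

```python
import spacy

# Load the installed pipeline package (assumes it was installed beforehand).
nlp = spacy.load("is_ner_mim_trf")

doc = nlp("Jón Sigurðsson fæddist á Hrafnseyri 17. júní 1811.")
for ent in doc.ents:
    # Entity labels follow the scheme above (Person, Location, Date, ...).
    print(ent.text, ent.label_)
```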
mwesner/reformer-clm
mwesner
2021-09-05T13:44:41Z
5
0
transformers
[ "transformers", "pytorch", "reformer", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- model-index: - name: reformer-clm --- ## reformer-clm This causal language model was trained from scratch on the CNN/DailyMail dataset. It achieves the following results on the evaluation set: - Loss: 2.7783 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 500 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:------:|:---------------:| | 3.8321 | 1.0 | 18412 | 3.8074 | | 3.4965 | 2.0 | 36824 | 3.4223 | | 3.1927 | 3.0 | 55236 | 3.0815 | | 3.046 | 4.0 | 73648 | 2.9270 | | 2.9781 | 5.0 | 92060 | 2.8515 | | 2.9398 | 6.0 | 110472 | 2.8082 | | 2.9293 | 7.0 | 128884 | 2.7904 | | 2.9212 | 8.0 | 147296 | 2.7817 | | 2.9169 | 9.0 | 165708 | 2.7787 | | 2.9197 | 10.0 | 184120 | 2.7783 | ### Framework versions - Transformers 4.6.1 - Pytorch 1.9.0 - Datasets 1.2.1 - Tokenizers 0.10.3
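Since the card lists no usage snippet, here is a hedged sketch of loading the checkpoint for generation; it assumes the repository ships a tokenizer compatible with the Reformer architecture.

```python
from transformers import pipeline

# Sketch only: assumes a compatible tokenizer is available in the repository.
generator = pipeline("text-generation", model="mwesner/reformer-clm")
print(generator("The city council said on Tuesday", max_length=50)[0]["generated_text"])
```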
bayartsogt/mlub-bert-large-uncased-tr5do20ep25s42
bayartsogt
2021-09-05T11:26:54Z
0
1
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
|fold|accuracy| |-|-| | fold 0 | 0.9753440366972477 | | fold 1 | 0.9678899082568807 | | fold 2 | 0.9747706422018348 | | fold 3 | 0.9690366972477065 | | fold 4 | 0.9759174311926605 | | OOF Acc | 0.9725917431192661 |
MaryaAI/opus-mt-en-ro-finetuned-en-to-ro
MaryaAI
2021-09-05T08:42:06Z
5
1
transformers
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "generated_from_trainer", "dataset:wmt16", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
--- tags: - generated_from_trainer datasets: - wmt16 metrics: - bleu model-index: - name: opus-mt-en-ro-finetuned-en-to-ro results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: wmt16 type: wmt16 args: ro-en metrics: - name: Bleu type: bleu value: 28.1599 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opus-mt-en-ro-finetuned-en-to-ro This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the wmt16 dataset. It achieves the following results on the evaluation set: - Loss: 1.2886 - Bleu: 28.1599 - Gen Len: 34.1236 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:| | 0.7437 | 1.0 | 38145 | 1.2886 | 28.1599 | 34.1236 | ### Framework versions - Transformers 4.10.0 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
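The card does not include a usage example; a minimal sketch (not part of the original card) with the standard translation pipeline would look like this.

```python
from transformers import pipeline

# English -> Romanian translation with the fine-tuned MarianMT checkpoint.
translator = pipeline("translation_en_to_ro", model="MaryaAI/opus-mt-en-ro-finetuned-en-to-ro")
print(translator("The committee approved the new regulation.")[0]["translation_text"])
```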
castorini/bpr-nq-ctx-encoder
castorini
2021-09-05T00:57:58Z
4
0
transformers
[ "transformers", "pytorch", "dpr", "arxiv:2106.00882", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
This model is converted from the original BPR [repo](https://github.com/studio-ousia/bpr) and fitted into Pyserini: > Ikuya Yamada, Akari Asai, and Hannaneh Hajishirzi. 2021. Efficient passage retrieval with hashing for open-domain question answering. arXiv:2106.00882.
castorini/bpr-nq-question-encoder
castorini
2021-09-05T00:53:16Z
8
0
transformers
[ "transformers", "pytorch", "dpr", "feature-extraction", "arxiv:2106.00882", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
This model is converted from the original BPR [repo](https://github.com/studio-ousia/bpr) and fitted into Pyserini: > Ikuya Yamada, Akari Asai, and Hannaneh Hajishirzi. 2021. Efficient passage retrieval with hashing for open-domain question answering. arXiv:2106.00882.
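As a rough illustration (not from the original card, and best treated as an assumption), the converted weights are expected to load with the standard DPR question-encoder classes for feature extraction; for end-to-end binary-hash retrieval, Pyserini is the intended entry point.

```python
import torch
from transformers import AutoTokenizer, DPRQuestionEncoder

# Assumption: the converted BPR checkpoint is compatible with the DPR classes
# and ships a tokenizer in this repository.
tokenizer = AutoTokenizer.from_pretrained("castorini/bpr-nq-question-encoder")
model = DPRQuestionEncoder.from_pretrained("castorini/bpr-nq-question-encoder")

inputs = tokenizer("who wrote the declaration of independence?", return_tensors="pt")
with torch.no_grad():
    query_embedding = model(**inputs).pooler_output  # dense query representation
print(query_embedding.shape)
```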
devin132/w2v-timit-ft-4001
devin132
2021-09-04T22:35:42Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
# Fine-tuned Wav2Vec2 on TIMIT - checkpoint 4001
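A hedged transcription sketch (not part of the original card); it assumes the repository includes the matching processor/vocabulary and that the input audio is 16 kHz mono.

```python
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# "speech.wav" is a placeholder path; any 16 kHz mono recording works.
processor = Wav2Vec2Processor.from_pretrained("devin132/w2v-timit-ft-4001")
model = Wav2Vec2ForCTC.from_pretrained("devin132/w2v-timit-ft-4001")

speech, _ = librosa.load("speech.wav", sr=16000, mono=True)
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```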
recobo/chemical-bert-uncased-tsdae
recobo
2021-09-04T21:17:19Z
14
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity license: apache-2.0 tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # recobo/chemical-bert-uncased-tsdae ```python from sentence_transformers import SentenceTransformer model_name = 'recobo/chemical-bert-uncased-tsdae' model = SentenceTransformer(model_name) ```
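Building on the snippet above, a short usage sketch (not part of the original card) that embeds two chemistry-flavoured sentences and compares them; `util.cos_sim` requires a reasonably recent sentence-transformers release.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("recobo/chemical-bert-uncased-tsdae")
embeddings = model.encode([
    "Sodium chloride dissolves readily in water.",
    "NaCl is highly soluble in aqueous solution.",
])
# Cosine similarity between the two sentence embeddings.
print(util.cos_sim(embeddings[0], embeddings[1]))
```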
bshlgrs/autonlp-classification_with_all_labellers-9532137
bshlgrs
2021-09-04T21:03:27Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autonlp", "en", "dataset:bshlgrs/autonlp-data-classification_with_all_labellers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - bshlgrs/autonlp-data-classification_with_all_labellers --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 9532137 ## Validation Metrics - Loss: 0.34556105732917786 - Accuracy: 0.8749890724713699 - Macro F1: 0.5243623959669343 - Micro F1: 0.8749890724713699 - Weighted F1: 0.8638030768409057 - Macro Precision: 0.5016762404900895 - Micro Precision: 0.8749890724713699 - Weighted Precision: 0.8547962562614184 - Macro Recall: 0.5529674694200845 - Micro Recall: 0.8749890724713699 - Weighted Recall: 0.8749890724713699 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/bshlgrs/autonlp-classification_with_all_labellers-9532137 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("bshlgrs/autonlp-classification_with_all_labellers-9532137", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("bshlgrs/autonlp-classification_with_all_labellers-9532137", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
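As a small continuation of the Python example above (not part of the original card), the returned logits can be mapped to a class label via the model config:

```python
import torch

# `outputs` and `model` come from the snippet above.
probs = torch.softmax(outputs.logits, dim=-1)[0]
predicted_id = int(torch.argmax(probs))
print(model.config.id2label[predicted_id], round(float(probs[predicted_id]), 4))
```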
doyoungkim/bert-base-uncased-finetuned-sst2-sst2-membership
doyoungkim
2021-09-04T20:10:24Z
8
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model_index: name: bert-base-uncased-finetuned-sst2-sst2-membership --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-sst2-sst2-membership This model is a fine-tuned version of [ikevin98/bert-base-uncased-finetuned-sst2](https://huggingface.co/ikevin98/bert-base-uncased-finetuned-sst2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3100 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5125 | 1.0 | 3813 | 1.3100 | 1.0 | ### Framework versions - Transformers 4.9.2 - Pytorch 1.8.1 - Datasets 1.11.0 - Tokenizers 0.10.1
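Since the card documents neither the label set nor the intended use, the following is only a generic sketch of running the classifier; interpret the returned labels (e.g. `LABEL_0`/`LABEL_1`) with care.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="doyoungkim/bert-base-uncased-finetuned-sst2-sst2-membership",
)
print(classifier("a gorgeous, witty, seductive movie"))
```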
superb/wav2vec2-large-superb-ic
superb
2021-09-04T19:52:29Z
5
0
transformers
[ "transformers", "pytorch", "wav2vec2", "audio-classification", "speech", "audio", "en", "dataset:superb", "arxiv:2105.01051", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2022-03-02T23:29:05Z
--- language: en datasets: - superb tags: - speech - audio - wav2vec2 license: apache-2.0 --- # Wav2Vec2-Large for Intent Classification ## Model description This is a ported version of [S3PRL's Wav2Vec2 for the SUPERB Intent Classification task](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream/fluent_commands). The base model is [wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60), which is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. For more information refer to [SUPERB: Speech processing Universal PERformance Benchmark](https://arxiv.org/abs/2105.01051) ## Task and dataset description Intent Classification (IC) classifies utterances into predefined classes to determine the intent of speakers. SUPERB uses the [Fluent Speech Commands](https://fluent.ai/fluent-speech-commands-a-dataset-for-spoken-language-understanding-research/) dataset, where each utterance is tagged with three intent labels: **action**, **object**, and **location**. For the original model's training and evaluation instructions refer to the [S3PRL downstream task README](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#ic-intent-classification---fluent-speech-commands). ## Usage examples You can use the model directly like so: ```python import torch import librosa from datasets import load_dataset from transformers import Wav2Vec2ForSequenceClassification, Wav2Vec2FeatureExtractor def map_to_array(example): speech, _ = librosa.load(example["file"], sr=16000, mono=True) example["speech"] = speech return example # load a demo dataset and read audio files dataset = load_dataset("anton-l/superb_demo", "ic", split="test") dataset = dataset.map(map_to_array) model = Wav2Vec2ForSequenceClassification.from_pretrained("superb/wav2vec2-large-superb-ic") feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("superb/wav2vec2-large-superb-ic") # compute attention masks and normalize the waveform if needed inputs = feature_extractor(dataset[:4]["speech"], sampling_rate=16000, padding=True, return_tensors="pt") logits = model(**inputs).logits action_ids = torch.argmax(logits[:, :6], dim=-1).tolist() action_labels = [model.config.id2label[_id] for _id in action_ids] object_ids = torch.argmax(logits[:, 6:20], dim=-1).tolist() object_labels = [model.config.id2label[_id + 6] for _id in object_ids] location_ids = torch.argmax(logits[:, 20:24], dim=-1).tolist() location_labels = [model.config.id2label[_id + 20] for _id in location_ids] ``` ## Eval results The evaluation metric is accuracy. | | **s3prl** | **transformers** | |--------|-----------|------------------| |**test**| `0.9528` | `N/A` | ### BibTeX entry and citation info ```bibtex @article{yang2021superb, title={SUPERB: Speech processing Universal PERformance Benchmark}, author={Yang, Shu-wen and Chi, Po-Han and Chuang, Yung-Sung and Lai, Cheng-I Jeff and Lakhotia, Kushal and Lin, Yist Y and Liu, Andy T and Shi, Jiatong and Chang, Xuankai and Lin, Guan-Ting and others}, journal={arXiv preprint arXiv:2105.01051}, year={2021} } ```
pritoms/distilgpt2-finetuned-pgt
pritoms
2021-09-04T11:16:01Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - null model-index: - name: distilgpt2-finetuned-pgt results: - task: name: Causal Language Modeling type: text-generation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-finetuned-pgt This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 5.0132 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 31 | 5.0513 | | No log | 2.0 | 62 | 5.0175 | | No log | 3.0 | 93 | 5.0132 | ### Framework versions - Transformers 4.10.0 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
sontn122/xlm-roberta-large-finetuned-squad
sontn122
2021-09-04T08:01:37Z
22
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "question-answering", "generated_from_trainer", "dataset:squad", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - squad model-index: - name: xlm-roberta-large-finetuned-squad results: - task: name: Question Answering type: question-answering dataset: name: squad type: squad args: default --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-large-finetuned-squad This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.0350 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.6093 | 1.0 | 620 | 1.0023 | | 0.849 | 2.0 | 1240 | 0.9449 | | 0.6693 | 3.0 | 1860 | 1.0350 | ### Framework versions - Transformers 4.10.0 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
xiaj/test
xiaj
2021-09-04T05:38:09Z
0
0
null
[ "translation", "ru", "en", "dataset:wmt19", "license:apache-2.0", "region:us" ]
translation
2022-03-02T23:29:05Z
--- language: - ru - en tags: - translation license: apache-2.0 datasets: - wmt19 metrics: - bleu - sacrebleu ---
mrm8488/spanish-t5-small-sqac-for-qa
mrm8488
2021-09-03T10:22:10Z
132
4
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "QA", "Q&A", "es", "dataset:BSC-TeMU/SQAC", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: es tags: - QA - Q&A datasets: - BSC-TeMU/SQAC widget: - text: "question: ¿Cuál es el nombre que se le da a la unidad morfológica y funcional de los seres vivos? context: La célula (del latín cellula, diminutivo de cella, ‘celda’) es la unidad morfológica y funcional de todo ser vivo. De hecho, la célula es el elemento de menor tamaño que puede considerarse vivo.\u200b De este modo, puede clasificarse a los organismos vivos según el número de células que posean: si solo tienen una, se les denomina unicelulares (como pueden ser los protozoos o las bacterias, organismos microscópicos); si poseen más, se les llama pluricelulares. En estos últimos el número de células es variable: de unos pocos cientos, como en algunos nematodos, a cientos de billones (1014), como en el caso del ser humano. Las células suelen poseer un tamaño de 10 µm y una masa de 1 ng, si bien existen células mucho mayores." --- # Spanish T5 (small) fine-tuned on **SQAC** for Spanish **QA** 📖❓ [spanish-T5-small](https://huggingface.co/flax-community/spanish-t5-small) fine-tuned on [SQAC](https://huggingface.co/datasets/BSC-TeMU/SQAC) for the **Q&A** downstream task. ## Details of Spanish T5 (small) A T5 (small)-like architecture trained from scratch on [large_spanish_corpus](https://huggingface.co/datasets/large_spanish_corpus) for **HuggingFace/Flax/Jax Week**. ## Details of the dataset 📚 This dataset contains 6,247 contexts and 18,817 questions with their answers, 1 to 5 for each fragment. The sources of the contexts are: * Encyclopedic articles from [Wikipedia in Spanish](https://es.wikipedia.org/), used under [CC-by-sa licence](https://creativecommons.org/licenses/by-sa/3.0/legalcode). * News from [Wikinews in Spanish](https://es.wikinews.org/), used under [CC-by licence](https://creativecommons.org/licenses/by/2.5/). * Text from the Spanish corpus [AnCora](http://clic.ub.edu/corpus/en), which is a mix of different newswire and literature sources, used under [CC-by licence](https://creativecommons.org/licenses/by/4.0/legalcode). This dataset can be used to build extractive QA systems. ## Results on test dataset 📝 | Metric | # Value | | ------ | --------- | | **BLEU** | **41.94** | ## Model in Action 🚀 ```python from transformers import T5ForConditionalGeneration, AutoTokenizer import torch device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') ckpt = 'mrm8488/spanish-t5-small-sqac-for-qa' tokenizer = AutoTokenizer.from_pretrained(ckpt) model = T5ForConditionalGeneration.from_pretrained(ckpt).to(device) def get_answer(question, context): input_text = 'question: %s context: %s' % (question, context) features = tokenizer([input_text], padding='max_length', truncation=True, max_length=512, return_tensors='pt') output = model.generate(input_ids=features['input_ids'].to(device), attention_mask=features['attention_mask'].to(device)) return tokenizer.decode(output[0], skip_special_tokens=True) context = ''' La ex codirectora del grupo de investigación de IA ética de Google, Margaret Mitchell, quien fue despedida en febrero después de una controversia sobre un artículo crítico del que fue coautora, se unirá a HuggingFace para ayudar a que los algoritmos de IA sean más justos. ''' question = '¿Qué hará Margaret Mitchell en HuggingFace?' print(get_answer(question, context)) # ayudar a que los algoritmos de ia sean más justos ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) with the support of [Narrativa](https://www.narrativa.com/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
amank22/hi_ud_hi_ewt
amank22
2021-09-03T09:43:35Z
4
0
spacy
[ "spacy", "token-classification", "hi", "model-index", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- tags: - spacy - token-classification language: - hi model-index: - name: hi_ud_hi_ewt results: - task: name: POS type: token-classification metrics: - name: POS Accuracy type: accuracy value: 0.9539693129 - task: name: SENTER type: token-classification metrics: - name: SENTER Precision type: precision value: 0.9902617164 - name: SENTER Recall type: recall value: 0.9807112719 - name: SENTER F Score type: f_score value: 0.9854633555 - task: name: UNLABELED_DEPENDENCIES type: token-classification metrics: - name: Unlabeled Dependencies Accuracy type: accuracy value: 0.9198922358 - task: name: LABELED_DEPENDENCIES type: token-classification metrics: - name: Labeled Dependencies Accuracy type: accuracy value: 0.9198922358 ---
tau/splinter-large-qass
tau
2021-09-03T08:47:23Z
7
0
transformers
[ "transformers", "pytorch", "splinter", "question-answering", "SplinterModel", "en", "arxiv:2108.05857", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: en tags: - splinter - SplinterModel license: apache-2.0 --- # Splinter large model, (with pretrained QASS-layer weights) Splinter-large is the pretrained model discussed in the paper [Few-Shot Question Answering by Pretraining Span Selection](https://aclanthology.org/2021.acl-long.239/) (at ACL 2021). Its original repository can be found [here](https://github.com/oriram/splinter). The model is case-sensitive. Note (1): This model **does** contain the pretrained weights for the QASS layer (see paper for details). For the model **without** those weights, see [tau/splinter-large](https://huggingface.co/tau/splinter-large). Note (2): Splinter-large was trained after the paper was released, so the results are not reported. However, this model outperforms the base model by large margins. For example, on SQuAD, the model is able to reach 80% F1 given only 128 examples, whereas the base model obtains only ~73%). See the results for Splinter-large in the Appendix of [this paper](https://arxiv.org/pdf/2108.05857.pdf). ## Model description Splinter is a model that is pretrained in a self-supervised fashion for few-shot question answering. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the Recurring Span Selection (RSS) objective, which emulates the span selection process involved in extractive question answering. Given a text, clusters of recurring spans (n-grams that appear more than once in the text) are first identified. For each such cluster, all of its instances but one are replaced with a special `[QUESTION]` token, and the model should select the correct (i.e., unmasked) span for each masked one. The model also defines the Question-Aware Span selection (QASS) layer, which selects spans conditioned on a specific question (in order to perform multiple predictions). ## Intended uses & limitations The prime use for this model is few-shot extractive QA. ## Pretraining The model was pretrained on a v3-32 TPU for 2.4M steps. The training data is based on **Wikipedia** and **BookCorpus**. See the paper for more details. ### BibTeX entry and citation info ```bibtex @inproceedings{ram-etal-2021-shot, title = "Few-Shot Question Answering by Pretraining Span Selection", author = "Ram, Ori and Kirstain, Yuval and Berant, Jonathan and Globerson, Amir and Levy, Omer", booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.acl-long.239", doi = "10.18653/v1/2021.acl-long.239", pages = "3066--3079", } ```
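The card names few-shot extractive QA as the prime use but gives no snippet; below is a hedged sketch using the Splinter classes available in recent transformers releases (the question/context pair is illustrative only).

```python
import torch
from transformers import AutoTokenizer, SplinterForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("tau/splinter-large-qass")
model = SplinterForQuestionAnswering.from_pretrained("tau/splinter-large-qass")

question = "Who wrote Hamlet?"
context = "Hamlet is a tragedy written by William Shakespeare sometime between 1599 and 1601."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Take the most likely start/end positions and decode the answer span.
start = int(torch.argmax(outputs.start_logits))
end = int(torch.argmax(outputs.end_logits))
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```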
tau/splinter-base-qass
tau
2021-09-03T08:47:00Z
2,111
1
transformers
[ "transformers", "pytorch", "splinter", "question-answering", "SplinterModel", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: en tags: - splinter - SplinterModel license: apache-2.0 --- # Splinter base model (with pretrained QASS-layer weights) Splinter-base is the pretrained model discussed in the paper [Few-Shot Question Answering by Pretraining Span Selection](https://aclanthology.org/2021.acl-long.239/) (at ACL 2021). Its original repository can be found [here](https://github.com/oriram/splinter). The model is case-sensitive. Note: This model **does** contain the pretrained weights for the QASS layer (see paper for details). For the model **without** those weights, see [tau/splinter-base](https://huggingface.co/tau/splinter-base). ## Model description Splinter is a model that is pretrained in a self-supervised fashion for few-shot question answering. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the Recurring Span Selection (RSS) objective, which emulates the span selection process involved in extractive question answering. Given a text, clusters of recurring spans (n-grams that appear more than once in the text) are first identified. For each such cluster, all of its instances but one are replaced with a special `[QUESTION]` token, and the model should select the correct (i.e., unmasked) span for each masked one. The model also defines the Question-Aware Span selection (QASS) layer, which selects spans conditioned on a specific question (in order to perform multiple predictions). ## Intended uses & limitations The prime use for this model is few-shot extractive QA. ## Pretraining The model was pretrained on a v3-8 TPU for 2.4M steps. The training data is based on **Wikipedia** and **BookCorpus**. See the paper for more details. ### BibTeX entry and citation info ```bibtex @inproceedings{ram-etal-2021-shot, title = "Few-Shot Question Answering by Pretraining Span Selection", author = "Ram, Ori and Kirstain, Yuval and Berant, Jonathan and Globerson, Amir and Levy, Omer", booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.acl-long.239", doi = "10.18653/v1/2021.acl-long.239", pages = "3066--3079", } ```
xhyi/PT_GPTNEO1300_Delish_v6
xhyi
2021-09-02T22:29:48Z
3
0
transformers
[ "transformers", "pytorch", "gpt_neo", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
# Delish v6 (GPT-Neo 1.3B) This model is from the DelishBot project.
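The card gives no usage details; since the checkpoint is a GPT-Neo 1.3B causal LM, a minimal (assumed) way to drive it is the standard text-generation pipeline.

```python
from transformers import pipeline

# Sketch only: the prompt and sampling settings are arbitrary.
generator = pipeline("text-generation", model="xhyi/PT_GPTNEO1300_Delish_v6")
print(generator("Tonight's special is", max_length=40, do_sample=True)[0]["generated_text"])
```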
superb/wav2vec2-base-superb-ic
superb
2021-09-02T22:03:59Z
674
0
transformers
[ "transformers", "pytorch", "wav2vec2", "audio-classification", "speech", "audio", "en", "dataset:superb", "arxiv:2105.01051", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2022-03-02T23:29:05Z
--- language: en datasets: - superb tags: - speech - audio - wav2vec2 license: apache-2.0 --- # Wav2Vec2-Base for Intent Classification ## Model description This is a ported version of [S3PRL's Wav2Vec2 for the SUPERB Intent Classification task](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream/fluent_commands). The base model is [wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base), which is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. For more information refer to [SUPERB: Speech processing Universal PERformance Benchmark](https://arxiv.org/abs/2105.01051) ## Task and dataset description Intent Classification (IC) classifies utterances into predefined classes to determine the intent of speakers. SUPERB uses the [Fluent Speech Commands](https://fluent.ai/fluent-speech-commands-a-dataset-for-spoken-language-understanding-research/) dataset, where each utterance is tagged with three intent labels: **action**, **object**, and **location**. For the original model's training and evaluation instructions refer to the [S3PRL downstream task README](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#ic-intent-classification---fluent-speech-commands). ## Usage examples You can use the model directly like so: ```python import torch import librosa from datasets import load_dataset from transformers import Wav2Vec2ForSequenceClassification, Wav2Vec2FeatureExtractor def map_to_array(example): speech, _ = librosa.load(example["file"], sr=16000, mono=True) example["speech"] = speech return example # load a demo dataset and read audio files dataset = load_dataset("anton-l/superb_demo", "ic", split="test") dataset = dataset.map(map_to_array) model = Wav2Vec2ForSequenceClassification.from_pretrained("superb/wav2vec2-base-superb-ic") feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("superb/wav2vec2-base-superb-ic") # compute attention masks and normalize the waveform if needed inputs = feature_extractor(dataset[:4]["speech"], sampling_rate=16000, padding=True, return_tensors="pt") logits = model(**inputs).logits action_ids = torch.argmax(logits[:, :6], dim=-1).tolist() action_labels = [model.config.id2label[_id] for _id in action_ids] object_ids = torch.argmax(logits[:, 6:20], dim=-1).tolist() object_labels = [model.config.id2label[_id + 6] for _id in object_ids] location_ids = torch.argmax(logits[:, 20:24], dim=-1).tolist() location_labels = [model.config.id2label[_id + 20] for _id in location_ids] ``` ## Eval results The evaluation metric is accuracy. | | **s3prl** | **transformers** | |--------|-----------|------------------| |**test**| `0.9235` | `N/A` | ### BibTeX entry and citation info ```bibtex @article{yang2021superb, title={SUPERB: Speech processing Universal PERformance Benchmark}, author={Yang, Shu-wen and Chi, Po-Han and Chuang, Yung-Sung and Lai, Cheng-I Jeff and Lakhotia, Kushal and Lin, Yist Y and Liu, Andy T and Shi, Jiatong and Chang, Xuankai and Lin, Guan-Ting and others}, journal={arXiv preprint arXiv:2105.01051}, year={2021} } ```
huggingartists/lil-nas-x
huggingartists
2021-09-02T20:06:24Z
8
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/lil-nas-x", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/lil-nas-x tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/f50e1ac333da1f744f98eec38e44dd29.640x640x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Lil Nas X</div> <a href="https://genius.com/artists/lil-nas-x"> <div style="text-align: center; font-size: 14px;">@lil-nas-x</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Lil Nas X. Dataset is available [here](https://huggingface.co/datasets/huggingartists/lil-nas-x). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/lil-nas-x") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/n5s2tj7p/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Lil Nas X's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/334lnf7p) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/334lnf7p/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/lil-nas-x') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/lil-nas-x") model = AutoModelWithLMHead.from_pretrained("huggingartists/lil-nas-x") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingartists/dua-lipa
huggingartists
2021-09-02T19:51:50Z
5
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/dua-lipa", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/dua-lipa tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/dd37b530cf20f2ce699f91e02a476a8a.847x847x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Dua Lipa</div> <a href="https://genius.com/artists/dua-lipa"> <div style="text-align: center; font-size: 14px;">@dua-lipa</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Dua Lipa. Dataset is available [here](https://huggingface.co/datasets/huggingartists/dua-lipa). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/dua-lipa") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/2wxz1liw/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Dua Lipa's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/3uj930yj) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/3uj930yj/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/dua-lipa') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/dua-lipa") model = AutoModelWithLMHead.from_pretrained("huggingartists/dua-lipa") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
vymn/vymn
vymn
2021-09-02T14:03:29Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
<pre> ---------------------------------------- <span>developing brains!!</span> ---------------------------------------- _---~~(~~-_. _{ ) ) , ) -~~- ( ,-' )_ ( `-,_..`., )-- '_,) ( ` _) ( -~( -_ `, } (_- _ ~_-~~~~`, ,' ) `~ -^( __;-,((())) ~~~~ {_ -_(()) `\ } { } vymn mohvmd svlih. </pre> I'm an Android frontend developer and AI researcher. I work with the [Flutter](https://flutter.dev/) framework, [Kotlin](https://kotlinlang.org/), [Java](https://www.java.com/), [Python](https://python.org/), [PHP](https://www.php.net/), and more. From time to time I do some backend work. I can also work with some AI frameworks and platforms. <!-- ### Check out my social medias: --> <!-- - 💬 [reddit](https://www.reddit.com/user/vymn2862) - 🔗 [LinkedIn](https://www.linkedin.com/in/vymn-mohvmd-b38829206/) --> <!-- ![zendy199x's github stats](https://github-readme-stats.vercel.app/api?username=vymn&theme=merko&show_icons=true) --> <div><img align="center" src="https://github-readme-stats.vercel.app/api/top-langs/?username=vymn&layout=compact&hide=html" alt="vymn" /></div> <br /> <br /> <div><img align="center" src="https://github-readme-stats.vercel.app/api?username=vymn&show_icons=true" alt="vymn" /></div>
mnaylor/psychbert-cased
mnaylor
2021-09-02T13:57:46Z
14
7
transformers
[ "transformers", "jax", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# PsychBERT This domain-adapted language model is pretrained from the `bert-base-cased` checkpoint on masked language modeling, using a dataset of ~40,000 PubMed papers in the domain of psychology, psychiatry, mental health, and behavioral health, as well as a dataset of roughly 200,000 social media conversations about mental health. This work is submitted as an entry for BIBM 2021. **Note**: the token-prediction widget on this page does not work with Flax models. In order to use the model, please pull it into a Python session as follows: ```python from transformers import FlaxAutoModelForMaskedLM, AutoModelForMaskedLM # load as a flax model flax_lm = FlaxAutoModelForMaskedLM.from_pretrained('mnaylor/psychbert-cased') # load as a pytorch model # requires flax to be installed in your environment pytorch_lm = AutoModelForMaskedLM.from_pretrained('mnaylor/psychbert-cased', from_flax=True) ``` Authors: Vedant Vajre, Mitch Naylor, Uday Kamath, Amarda Shehu
SaulLu/test-add-new-model
SaulLu
2021-09-02T12:47:36Z
6
0
transformers
[ "transformers", "pytorch", "bart", "feature-extraction", "arxiv:2107.06955", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:04Z
# HTLM Pretraining Dataset: 23 TB of simplified HTML extracted from Common Crawl dumps Paper: [HTLM: Hyper-Text Pre-Training and Prompting of Language Models](https://arxiv.org/abs/2107.06955) Authors: Armen Aghajanyan, Dmytro Okhonko, Mike Lewis, Mandar Joshi, Hu Xu, Gargi Ghosh, Luke Zettlemoyer Disclaimer: The team releasing HTLM did not write a model card for this model, so this model card has been written by the Hugging Face team. ## Abstract We introduce HTLM, a hyper-text language model trained on a large-scale web crawl. Modeling hyper-text has a number of advantages: (1) it is easily gathered at scale, (2) it provides rich document-level and end-task-adjacent supervision (e.g. class and id attributes often encode document category information), and (3) it allows for new structured prompting that follows the established semantics of HTML (e.g. to do zero-shot summarization by infilling title tags for a webpage that contains the input text). We show that pretraining with a BART-style denoising loss directly on simplified HTML provides highly effective transfer for a wide range of end tasks and supervision levels. HTLM matches or exceeds the performance of comparably sized text-only LMs for zero-shot prompting and fine-tuning for classification benchmarks, while also setting new state-of-the-art performance levels for zero-shot summarization. We also find that hyper-text prompts provide more value to HTLM, in terms of data efficiency, than plain text prompts do for existing LMs, and that HTLM is highly effective at auto-prompting itself, by simply generating the most likely hyper-text formatting for any available training data. We will release all code and models to support future HTLM research. ## Usage For the moment, you can use it as is for a classic mask-filling task (see the snippet below) or fine-tune it on a downstream task. ```python from transformers import BartTokenizer, BartForConditionalGeneration TXT = "My friends are <mask> but they eat too many carbs." model_name = "SaulLu/test-add-new-model" tokenizer = BartTokenizer.from_pretrained(model_name) model = BartForConditionalGeneration.from_pretrained(model_name) input_ids = tokenizer([TXT], return_tensors='pt')['input_ids'] logits = model(input_ids).logits masked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero().item() probs = logits[0, masked_index].softmax(dim=0) values, predictions = probs.topk(5) tokenizer.decode(predictions).split() ```
flax-community/gpt2-small-indonesian
flax-community
2021-09-02T12:26:52Z
168
5
transformers
[ "transformers", "pytorch", "jax", "tensorboard", "gpt2", "text-generation", "id", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: id widget: - text: "Sewindu sudah kita tak berjumpa, rinduku padamu sudah tak terkira." --- # GPT2-small-indonesian This is a pretrained model on Indonesian language using a causal language modeling (CLM) objective, which was first introduced in [this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and first released at [this page](https://openai.com/blog/better-language-models/). This model was trained using HuggingFace's Flax framework and is part of the [JAX/Flax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104) organized by [HuggingFace](https://huggingface.co). All training was done on a TPUv3-8 VM sponsored by the Google Cloud team. The demo can be found [here](https://huggingface.co/spaces/flax-community/gpt2-indonesian). ## How to use You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='flax-community/gpt2-small-indonesian') >>> set_seed(42) >>> generator("Sewindu sudah kita tak berjumpa,", max_length=30, num_return_sequences=5) [{'generated_text': 'Sewindu sudah kita tak berjumpa, dua dekade lalu, saya hanya bertemu sekali. Entah mengapa, saya lebih nyaman berbicara dalam bahasa Indonesia, bahasa Indonesia'}, {'generated_text': 'Sewindu sudah kita tak berjumpa, tapi dalam dua hari ini, kita bisa saja bertemu.”\ “Kau tau, bagaimana dulu kita bertemu?” aku'}, {'generated_text': 'Sewindu sudah kita tak berjumpa, banyak kisah yang tersimpan. Tak mudah tuk kembali ke pelukan, di mana kini kita berada, sebuah tempat yang jauh'}, {'generated_text': 'Sewindu sudah kita tak berjumpa, sejak aku lulus kampus di Bandung, aku sempat mencari kabar tentangmu. Ah, masih ada tempat di hatiku,'}, {'generated_text': 'Sewindu sudah kita tak berjumpa, tapi Tuhan masih saja menyukarkan doa kita masing-masing.\ Tuhan akan memberi lebih dari apa yang kita'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import GPT2Tokenizer, GPT2Model tokenizer = GPT2Tokenizer.from_pretrained('flax-community/gpt2-small-indonesian') model = GPT2Model.from_pretrained('flax-community/gpt2-small-indonesian') text = "Ubah dengan teks apa saja." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import GPT2Tokenizer, TFGPT2Model tokenizer = GPT2Tokenizer.from_pretrained('flax-community/gpt2-small-indonesian') model = TFGPT2Model.from_pretrained('flax-community/gpt2-small-indonesian') text = "Ubah dengan teks apa saja." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Limitations and bias The training data used for this model are Indonesian websites of [OSCAR](https://oscar-corpus.com/), [mc4](https://huggingface.co/datasets/mc4) and [Wikipedia](https://huggingface.co/datasets/wikipedia). The datasets contain a lot of unfiltered content from the internet, which is far from neutral. While we have done some filtering on the dataset (see the **Training data** section), the filtering is by no means a thorough mitigation of biased content that is eventually used by the training data. These biases might also affect models that are fine-tuned using this model. 
As the openAI team themselves point out in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases): > Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases > that require the generated text to be true. > Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we > do not recommend that they be deployed into systems that interact with humans > unless the deployers first carry > out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, > race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with > similar levels of caution around use cases that are sensitive to biases around human attributes. We have done a basic bias analysis that you can find in this [notebook](https://huggingface.co/flax-community/gpt2-small-indonesian/blob/main/bias_analysis/gpt2_medium_indonesian_bias_analysis.ipynb), performed on [Indonesian GPT2 medium](https://huggingface.co/flax-community/gpt2-medium-indonesian), based on the bias analysis for [Polish GPT2](https://huggingface.co/flax-community/papuGaPT2) with modifications. ### Gender bias We generated 50 texts starting with prompts "She/He works as". After doing some preprocessing (lowercase and stopwords removal) we obtain texts that are used to generate word clouds of female/male professions. The most salient terms for male professions are: driver, sopir (driver), ojek, tukang, online. ![gender bias - male](https://huggingface.co/flax-community/gpt2-small-indonesian/raw/main/bias_analysis/wordcloud_male.png) The most salient terms for female professions are: pegawai (employee), konsultan (consultant), asisten (assistant). ![gender bias - female](https://huggingface.co/flax-community/gpt2-small-indonesian/raw/main/bias_analysis/wordcloud_female.png) ### Ethnicity bias We generated 1,200 texts to assess bias across ethnicity and gender vectors. We will create prompts with the following scheme: * Person - we will assess 5 ethnicities: Sunda, Batak, Minahasa, Dayak, Asmat, Neutral (no ethnicity) * Topic - we will use 5 different topics: * random act: *entered home* * said: *said* * works as: *works as* * intent: *let [person] ...* * define: *is* Sample of generated prompt: "seorang perempuan sunda masuk ke rumah..." (a Sundanese woman enters the house...) We used a [model](https://huggingface.co/Hate-speech-CNERG/dehatebert-mono-indonesian) trained on Indonesian hate speech corpus ([dataset 1](https://github.com/okkyibrohim/id-multi-label-hate-speech-and-abusive-language-detection), [dataset 2](https://github.com/ialfina/id-hatespeech-detection)) to obtain the probability that each generated text contains hate speech. To avoid leakage, we removed the first word identifying the ethnicity and gender from the generated text before running the hate speech detector. The following chart demonstrates the intensity of hate speech associated with the generated texts with outlier scores removed. Some ethnicities score higher than the neutral baseline. ![bias analysis - ethnicities](https://huggingface.co/flax-community/gpt2-small-indonesian/raw/main/bias_analysis/bias_ethnicity.png) ### Religion bias With the same methodology above, we generated 1,400 texts to assess bias across religion and gender vectors. 
We will assess 6 religions: Islam, Protestan (Protestant), Katolik (Catholic), Buddha (Buddhism), Hindu (Hinduism), and Khonghucu (Confucianism) with Neutral (no religion) as a baseline. The following chart demonstrates the intensity of hate speech associated with the generated texts with outlier scores removed. Some religions score higher than the neutral baseline. ![bias analysis - ethnicities](https://huggingface.co/flax-community/gpt2-small-indonesian/raw/main/bias_analysis/bias_religion.png) ## Training data The model was trained on a combined dataset of [OSCAR](https://oscar-corpus.com/), [mc4](https://huggingface.co/datasets/mc4) and Wikipedia for the Indonesian language. We have filtered and reduced the mc4 dataset so that we end up with 29 GB of data in total. The mc4 dataset was cleaned using [this filtering script](https://github.com/Wikidepia/indonesian_datasets/blob/master/dump/mc4/cleanup.py) and we also only included links that have been cited by the Indonesian Wikipedia. ## Training procedure The model was trained on a TPUv3-8 VM provided by the Google Cloud team. The training duration was `4d 14h 50m 47s`. ### Evaluation results The model achieves the following results without any fine-tuning (zero-shot): | dataset | train loss | eval loss | eval perplexity | | ---------- | ---------- | -------------- | ---------- | | ID OSCAR+mc4+wikipedia (29GB) | 3.046 | 2.926 | 18.66 | ### Tracking The training process was tracked in [TensorBoard](https://huggingface.co/flax-community/gpt2-small-indonesian/tensorboard) and [Weights and Biases](https://wandb.ai/wandb/hf-flax-gpt2-indonesian?workspace=user-cahya). ## Team members - Akmal ([@Wikidepia](https://huggingface.co/Wikidepia)) - alvinwatner ([@alvinwatner](https://huggingface.co/alvinwatner)) - Cahya Wirawan ([@cahya](https://huggingface.co/cahya)) - Galuh Sahid ([@Galuh](https://huggingface.co/Galuh)) - Muhammad Agung Hambali ([@AyameRushia](https://huggingface.co/AyameRushia)) - Muhammad Fhadli ([@muhammadfhadli](https://huggingface.co/muhammadfhadli)) - Samsul Rahmadani ([@munggok](https://huggingface.co/munggok)) ## Future work We would like to pre-train further the models with larger and cleaner datasets and fine-tune it to specific domains if we can get the necessary hardware resources.
Wikidepia/IndoT5-large
Wikidepia
2021-09-02T11:57:48Z
6
1
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "id", "dataset:allenai/c4", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
---
language:
- id
datasets:
- allenai/c4
---

**NOTE**: This model might be broken :/

# Indonesian T5 Large

T5 (Text-to-Text Transfer Transformer) model pretrained on Indonesian mC4 with [extra filtering](https://github.com/Wikidepia/indonesian_datasets/tree/master/dump/mc4). This model is pre-trained only and needs to be fine-tuned before it can be used for specific tasks.

## Pretraining Details

Trained for 500K steps following [`google/t5-v1_1-large`](https://huggingface.co/google/t5-v1_1-large).

## Model Performance

TBD

## Limitations and bias

Because this model was pretrained on a large-scale web corpus, it can produce biased, unethical, or harmful output that reflects biases present in its training data. Please keep this risk in mind and use the model only in applications where such output cannot cause harm.

## Acknowledgement

Thanks to the TensorFlow Research Cloud for providing TPU v3-8s.
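Since the card above ships no usage snippet, here is a minimal, hedged sketch of how a pretrained T5 checkpoint like this one is typically loaded with 🤗 Transformers before fine-tuning; the Indonesian input sentence is illustrative only, and (per the note above) the checkpoint itself may be broken.

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Load the pretrained Indonesian T5 checkpoint; fine-tuning is still required for downstream tasks
tokenizer = T5Tokenizer.from_pretrained("Wikidepia/IndoT5-large")
model = T5ForConditionalGeneration.from_pretrained("Wikidepia/IndoT5-large")

# Illustrative forward pass only; outputs of a pretrain-only checkpoint are not meaningful
inputs = tokenizer("Teks masukan apa saja.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```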
DataikuNLP/paraphrase-multilingual-MiniLM-L12-v2
DataikuNLP
2021-09-02T08:31:10Z
393
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "arxiv:1908.10084", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:04Z
--- pipeline_tag: sentence-similarity license: apache-2.0 tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # DataikuNLP/paraphrase-multilingual-MiniLM-L12-v2 **This model is a copy of [this model repository](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) from sentence-transformers at the specific commit `d66eff4d8a8598f264f166af8db67f7797164651`.** This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2') model = AutoModel.from_pretrained('sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, max pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors This model was trained by [sentence-transformers](https://www.sbert.net/). 
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084): ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "http://arxiv.org/abs/1908.10084", } ```
DataikuNLP/TinyBERT_General_4L_312D
DataikuNLP
2021-09-02T08:09:47Z
96
1
transformers
[ "transformers", "pytorch", "jax", "bert", "arxiv:1909.10351", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04Z
TinyBERT: Distilling BERT for Natural Language Understanding ======== **This model is a copy of [this model repository](https://huggingface.co/huawei-noah/TinyBERT_General_4L_312D) from Huawei Noah at the specific commit `34707a33cd59a94ecde241ac209bf35103691b43`.** TinyBERT is 7.5x smaller and 9.4x faster on inference than BERT-base and achieves competitive performances in the tasks of natural language understanding. It performs a novel transformer distillation at both the pre-training and task-specific learning stages. In general distillation, we use the original BERT-base without fine-tuning as the teacher and a large-scale text corpus as the learning data. By performing the Transformer distillation on the text from general domain, we obtain a general TinyBERT which provides a good initialization for the task-specific distillation. We here provide the general TinyBERT for your tasks at hand. For more details about the techniques of TinyBERT, refer to our paper: [TinyBERT: Distilling BERT for Natural Language Understanding](https://arxiv.org/abs/1909.10351) Citation ======== If you find TinyBERT useful in your research, please cite the following paper: ``` @article{jiao2019tinybert, title={Tinybert: Distilling bert for natural language understanding}, author={Jiao, Xiaoqi and Yin, Yichun and Shang, Lifeng and Jiang, Xin and Chen, Xiao and Li, Linlin and Wang, Fang and Liu, Qun}, journal={arXiv preprint arXiv:1909.10351}, year={2019} } ```
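The card above gives no loading example; a minimal sketch of pulling this checkpoint through 🤗 Transformers for feature extraction might look as follows (the example sentence is illustrative only):

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Load the general TinyBERT checkpoint (4 layers, 312-dimensional hidden states)
tokenizer = AutoTokenizer.from_pretrained("DataikuNLP/TinyBERT_General_4L_312D")
model = AutoModel.from_pretrained("DataikuNLP/TinyBERT_General_4L_312D")

inputs = tokenizer("TinyBERT is a compact, distilled BERT model.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, 312)
```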
DataikuNLP/paraphrase-MiniLM-L6-v2
DataikuNLP
2021-09-02T08:05:59Z
57
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "arxiv:1908.10084", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:04Z
--- pipeline_tag: sentence-similarity license: apache-2.0 tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # DataikuNLP/paraphrase-MiniLM-L6-v2 **This model is a copy of [this model repository](https://huggingface.co/sentence-transformers/paraphrase-MiniLM-L6-v2/) from sentence-transformers at the specific commit `c4dfcde8a3e3e17e85cd4f0ec1925a266187f48e`.** This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/paraphrase-MiniLM-L6-v2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-MiniLM-L6-v2') model = AutoModel.from_pretrained('sentence-transformers/paraphrase-MiniLM-L6-v2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, max pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/paraphrase-MiniLM-L6-v2) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors This model was trained by [sentence-transformers](https://www.sbert.net/). 
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084): ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "http://arxiv.org/abs/1908.10084", } ```
xhyi/distilLED4_09_01_2021_v6_2
xhyi
2021-09-02T06:28:25Z
4
0
transformers
[ "transformers", "pytorch", "led", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
| Step | Training Loss | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|------|---------------|-----------------|------------------|---------------|-----------------|
| 100 | 3.049500 | 2.605496 | 0.172300 | 0.186900 | 0.151200 |
| 200 | 3.019400 | 2.567277 | 0.165100 | 0.189400 | 0.145000 |
| 300 | 3.014400 | 2.538830 | 0.157000 | 0.179200 | 0.134200 |
| 400 | 2.867200 | 2.490068 | 0.163600 | 0.177100 | 0.136200 |
| 500 | 2.723700 | 2.465870 | 0.168400 | 0.195700 | 0.152300 |
| 600 | 2.925400 | 2.452575 | 0.169500 | 0.210100 | 0.159400 |
| 700 | 2.878900 | 2.440204 | 0.173400 | 0.198000 | 0.155800 |
| 800 | 3.156500 | 2.423908 | 0.172900 | 0.196300 | 0.152800 |

+ 440 steps before, total = 1240 steps
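The card only reports training metrics. As a hedged sketch (the model/tokenizer classes are assumed from the `led` tag, and the 8192/512 length limits are taken from the sibling distilLED checkpoint's card below), summarization inference would look roughly like this:

```python
from transformers import LEDTokenizer, LEDForConditionalGeneration

# Load the distilled LED checkpoint for long-document summarization
tokenizer = LEDTokenizer.from_pretrained("xhyi/distilLED4_09_01_2021_v6_2")
model = LEDForConditionalGeneration.from_pretrained("xhyi/distilLED4_09_01_2021_v6_2")

article = "Replace with the long document to summarize."  # placeholder text
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=8192)
summary_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=512)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```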
xhyi/distilLED3_08_31_2021_v5
xhyi
2021-09-02T01:44:58Z
5
0
transformers
[ "transformers", "pytorch", "led", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
| Training Loss | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|---------------|-----------------|------------------|---------------|-----------------|
| 2.880900 | 2.715085 | 0.121400 | 0.142300 | 0.117100 |

+200 steps, total = 440 steps

Tokenization: max article: 8192, max abstract: 512
xhyi/distilLED1_08_31_2021_v3
xhyi
2021-09-02T01:41:23Z
4
0
transformers
[ "transformers", "pytorch", "led", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
| Step | Training Loss | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|------|---------------|-----------------|------------------|---------------|-----------------|
| 240 | 2.513600 | 3.049892 | 0.082800 | 0.102600 | 0.085700 |

240 steps
DataikuNLP/average_word_embeddings_glove.6B.300d
DataikuNLP
2021-09-01T15:57:24Z
0
1
sentence-transformers
[ "sentence-transformers", "feature-extraction", "sentence-similarity", "arxiv:1908.10084", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:04Z
--- pipeline_tag: sentence-similarity license: apache-2.0 tags: - sentence-transformers - feature-extraction - sentence-similarity --- # average_word_embeddings_glove.6B.300d **This model is a copy of [this model repository](https://huggingface.co/sentence-transformers/average_word_embeddings_glove.6B.300d) from sentence-transformers at the specific commit `5d2b7d1c127036ae98b9d487eca4d48744edc709`.** This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 300 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/average_word_embeddings_glove.6B.300d') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/average_word_embeddings_glove.6B.300d) ## Full Model Architecture ``` SentenceTransformer( (0): WordEmbeddings( (emb_layer): Embedding(400001, 300) ) (1): Pooling({'word_embedding_dimension': 300, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors This model was trained by [sentence-transformers](https://www.sbert.net/). If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084): ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "http://arxiv.org/abs/1908.10084", } ```
espnet/byan_librispeech_asr_train_asr_conformer_raw_bpe_batch_bins30000000_ac-truncated-68a97b
espnet
2021-09-01T15:54:31Z
0
0
espnet
[ "espnet", "audio", "automatic-speech-recognition", "en", "dataset:librispeech", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - espnet - audio - automatic-speech-recognition language: en datasets: - librispeech license: cc-by-4.0 --- ## ESPnet2 ASR pretrained model ### `byan/librispeech_asr_train_asr_conformer_raw_bpe_batch_bins30000000_accum_grad3_optim_conflr0.001_sp` ♻️ Imported from https://huggingface.co/ This model was trained by byan using librispeech/asr1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
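The demo snippet above is still marked "coming soon"; as a hedged sketch, ESPnet2 ASR inference usually follows the `espnet_model_zoo` pattern below (the WAV path is a placeholder, and the model name string is taken from the heading above):

```python
import soundfile
from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.asr_inference import Speech2Text

# Download the packed model and build the inference wrapper
d = ModelDownloader()
speech2text = Speech2Text(
    **d.download_and_unpack(
        "byan/librispeech_asr_train_asr_conformer_raw_bpe_batch_bins30000000_accum_grad3_optim_conflr0.001_sp"
    )
)

# Placeholder: a 16 kHz mono recording
speech, rate = soundfile.read("speech.wav")
nbests = speech2text(speech)
text, *_ = nbests[0]  # best hypothesis
print(text)
```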
espnet/Yushi_Ueda_ksponspeech_asr_train_asr_conformer8_n_fft512_hop_length256-truncated-eb42e5
espnet
2021-09-01T15:53:00Z
3
1
espnet
[ "espnet", "audio", "automatic-speech-recognition", "kr", "dataset:ksponspeech", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - espnet - audio - automatic-speech-recognition language: kr datasets: - ksponspeech license: cc-by-4.0 --- ## ESPnet2 ASR pretrained model ### `Yushi Ueda/ksponspeech_asr_train_asr_conformer8_n_fft512_hop_length256_raw_kr_bpe2309_valid.acc.best` ♻️ Imported from https://zenodo.org/record/5154341/ This model was trained by Yushi Ueda using ksponspeech/asr1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
espnet/jv_openslr35
espnet
2021-09-01T15:49:59Z
0
0
espnet
[ "espnet", "audio", "automatic-speech-recognition", "jv", "dataset:jv_openslr35", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - espnet - audio - automatic-speech-recognition language: jv datasets: - jv_openslr35 license: cc-by-4.0 --- ## ESPnet2 ASR pretrained model ### `jv_openslr35` ♻️ Imported from https://zenodo.org/record/5090139/ This model was trained by jv_openslr35 using jv_openslr35/asr1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
Thitaree/distilbert-base-uncased-finetuned-squad
Thitaree
2021-09-01T15:33:24Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: distilbert-base-uncased-finetuned-squad results: - task: name: Question Answering type: question-answering dataset: name: squad type: squad args: plain_text --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.10.0 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
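The card leaves the usage sections empty; a minimal sketch of running this checkpoint in a question-answering pipeline (the question/context pair is illustrative) could be:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint into a question-answering pipeline
qa = pipeline("question-answering", model="Thitaree/distilbert-base-uncased-finetuned-squad")

result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```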
patrickvonplaten/wav2vec2_tiny_random_robust
patrickvonplaten
2021-09-01T14:48:17Z
86
0
transformers
[ "transformers", "pytorch", "wav2vec2", "feature-extraction", "automatic-speech-recognition", "en", "dataset:librispeech_asr", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: en datasets: - librispeech_asr tags: - automatic-speech-recognition license: apache-2.0 --- ## Test model To test this model run the following code: ```python from datasets import load_dataset from transformers import Wav2Vec2ForCTC import torchaudio import torch ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") model = Wav2Vec2ForCTC.from_pretrained("patrickvonplaten/wav2vec2_tiny_random_robust") def load_audio(batch): batch["samples"], _ = torchaudio.load(batch["file"]) return batch ds = ds.map(load_audio) input_values = torch.nn.utils.rnn.pad_sequence([torch.tensor(x[0]) for x in ds["samples"][:10]], batch_first=True) # forward logits = model(input_values).logits pred_ids = torch.argmax(logits, dim=-1) # dummy loss dummy_labels = pred_ids.clone() dummy_labels[dummy_labels == model.config.pad_token_id] = 1 # can't have CTC blank token in label dummy_labels = dummy_labels[:, -(dummy_labels.shape[1] // 4):] # make sure labels are shorter to avoid "inf" loss (can still happen though...) loss = model(input_values, labels=dummy_labels).loss ```
DataikuNLP/paraphrase-albert-small-v2
DataikuNLP
2021-09-01T13:30:27Z
19
2
sentence-transformers
[ "sentence-transformers", "pytorch", "albert", "feature-extraction", "sentence-similarity", "transformers", "arxiv:1908.10084", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:04Z
--- pipeline_tag: sentence-similarity license: apache-2.0 tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # DataikuNLP/paraphrase-albert-small-v2 **This model is a copy of [this model repository](https://huggingface.co/sentence-transformers/paraphrase-albert-small-v2/) from sentence-transformers at the specific commit `1eb1996223dd90a4c25be2fc52f6f336419a0d52`.** This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/paraphrase-albert-small-v2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-albert-small-v2') model = AutoModel.from_pretrained('sentence-transformers/paraphrase-albert-small-v2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, max pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/paraphrase-albert-small-v2) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 100, 'do_lower_case': False}) with Transformer model: AlbertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors This model was trained by [sentence-transformers](https://www.sbert.net/). 
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084): ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "http://arxiv.org/abs/1908.10084", } ```
recobo/chemical-bert-uncased-squad2
recobo
2021-09-01T08:44:18Z
6
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
```python from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline model_name = "recobo/chemical-bert-uncased-squad2" # a) Get predictions nlp = pipeline('question-answering', model=model_name, tokenizer=model_name) QA_input = { 'question': 'Why is model conversion important?', 'context': 'The option to convert models between pytorch and transformers gives freedom to the user and let people easily switch between frameworks.' } res = nlp(QA_input) # b) Load model & tokenizer model = AutoModelForQuestionAnswering.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) ```
eugenesiow/awsrn-bam
eugenesiow
2021-09-01T08:02:58Z
1,599
1
transformers
[ "transformers", "AWSRN", "super-image", "image-super-resolution", "dataset:eugenesiow/Div2k", "dataset:eugenesiow/Set5", "dataset:eugenesiow/Set14", "dataset:eugenesiow/BSD100", "dataset:eugenesiow/Urban100", "arxiv:1904.02358", "arxiv:2104.07566", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - super-image - image-super-resolution datasets: - eugenesiow/Div2k - eugenesiow/Set5 - eugenesiow/Set14 - eugenesiow/BSD100 - eugenesiow/Urban100 metrics: - pnsr - ssim --- # Lightweight Image Super-Resolution with Adaptive Weighted Learning Network (AWSRN) AWSRN model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper [Lightweight Image Super-Resolution with Adaptive Weighted Learning Network](https://arxiv.org/abs/1904.02358) by Wang et al. (2019) and first released in [this repository](https://github.com/ChaofWang/AWSRN). The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling and model upscaling. ![Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 4](images/awsrn_4_4_compare.png "Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 4") ## Model description Deep learning has been successfully applied to the single-image super-resolution (SISR) task with great performance in recent years. However, most convolutional neural network based SR models require heavy computation, which limit their real-world applications. In this work, a lightweight SR network, named Adaptive Weighted Super-Resolution Network (AWSRN), is proposed for SISR to address this issue. A novel local fusion block (LFB) is designed in AWSRN for efficient residual learning, which consists of stacked adaptive weighted residual units (AWRU) and a local residual fusion unit (LRFU). Moreover, an adaptive weighted multi-scale (AWMS) module is proposed to make full use of features in reconstruction layer. AWMS consists of several different scale convolutions, and the redundancy scale branch can be removed according to the contribution of adaptive weights in AWMS for lightweight network. The experimental results on the commonly used datasets show that the proposed lightweight AWSRN achieves superior performance on ×2, ×3, ×4, and ×8 scale factors to state-of-the-art methods with similar parameters and computational overhead. This model also applies the balanced attention (BAM) method invented by [Wang et al. (2021)](https://arxiv.org/abs/2104.07566) to further improve the results. ## Intended uses & limitations You can use the pre-trained models for upscaling your images 2x, 3x and 4x. You can also use the trainer to train a model on your own dataset. 
### How to use The model can be used with the [super_image](https://github.com/eugenesiow/super-image) library: ```bash pip install super-image ``` Here is how to use a pre-trained model to upscale your image: ```python from super_image import AwsrnModel, ImageLoader from PIL import Image import requests url = 'https://paperswithcode.com/media/datasets/Set5-0000002728-07a9793f_zA3bDjj.jpg' image = Image.open(requests.get(url, stream=True).raw) model = AwsrnModel.from_pretrained('eugenesiow/awsrn-bam', scale=2) # scale 2, 3 and 4 models available inputs = ImageLoader.load_image(image) preds = model(inputs) ImageLoader.save_image(preds, './scaled_2x.png') # save the output 2x scaled image to `./scaled_2x.png` ImageLoader.save_compare(inputs, preds, './scaled_2x_compare.png') # save an output comparing the super-image with a bicubic scaling ``` [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Upscale_Images_with_Pretrained_super_image_Models.ipynb "Open in Colab") ## Training data The models for 2x, 3x and 4x image super resolution were pretrained on [DIV2K](https://huggingface.co/datasets/eugenesiow/Div2k), a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900). ## Training procedure ### Preprocessing We follow the pre-processing and training method of [Wang et al.](https://arxiv.org/abs/2104.07566). Low Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times. During training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches. Data augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image. We need the huggingface [datasets](https://huggingface.co/datasets?filter=task_ids:other-other-image-super-resolution) library to download the data: ```bash pip install datasets ``` The following code gets the data and preprocesses/augments the data. ```python from datasets import load_dataset from super_image.data import EvalDataset, TrainDataset, augment_five_crop augmented_dataset = load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='train')\ .map(augment_five_crop, batched=True, desc="Augmenting Dataset") # download and augment the data with the five_crop method train_dataset = TrainDataset(augmented_dataset) # prepare the train dataset for loading PyTorch DataLoader eval_dataset = EvalDataset(load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='validation')) # prepare the eval dataset for the PyTorch DataLoader ``` ### Pretraining The model was trained on GPU. 
The training code is provided below: ```python from super_image import Trainer, TrainingArguments, AwsrnModel, AwsrnConfig training_args = TrainingArguments( output_dir='./results', # output directory num_train_epochs=1000, # total number of training epochs ) config = AwsrnConfig( scale=4, # train a model to upscale 4x bam=True, # apply balanced attention to the network ) model = AwsrnModel(config) trainer = Trainer( model=model, # the instantiated model to be trained args=training_args, # training arguments, defined above train_dataset=train_dataset, # training dataset eval_dataset=eval_dataset # evaluation dataset ) trainer.train() ``` [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Train_super_image_Models.ipynb "Open in Colab") ## Evaluation results The evaluation metrics include [PSNR](https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio#Quality_estimation_with_PSNR) and [SSIM](https://en.wikipedia.org/wiki/Structural_similarity#Algorithm). Evaluation datasets include: - Set5 - [Bevilacqua et al. (2012)](https://huggingface.co/datasets/eugenesiow/Set5) - Set14 - [Zeyde et al. (2010)](https://huggingface.co/datasets/eugenesiow/Set14) - BSD100 - [Martin et al. (2001)](https://huggingface.co/datasets/eugenesiow/BSD100) - Urban100 - [Huang et al. (2015)](https://huggingface.co/datasets/eugenesiow/Urban100) The results columns below are represented below as `PSNR/SSIM`. They are compared against a Bicubic baseline. |Dataset |Scale |Bicubic |awsrn-bam | |--- |--- |--- |--- | |Set5 |2x |33.64/0.9292 |**37.99/0.9606** | |Set5 |3x |30.39/0.8678 |**35.05/0.9403** | |Set5 |4x |28.42/0.8101 |**32.13/0.8947** | |Set14 |2x |30.22/0.8683 |**33.66/0.918** | |Set14 |3x |27.53/0.7737 |**31.01/0.8581** | |Set14 |4x |25.99/0.7023 |**28.75/0.7851** | |BSD100 |2x |29.55/0.8425 |**33.76/0.9253** | |BSD100 |3x |27.20/0.7382 |**29.63/0.8188** | |BSD100 |4x |25.96/0.6672 |**28.51/0.7647** | |Urban100 |2x |26.66/0.8408 |**31.95/0.9265** | |Urban100 |3x | |**29.14/0.871** | |Urban100 |4x |23.14/0.6573 |**26.03/0.7838** | ![Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2](images/awsrn_2_4_compare.png "Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2") You can find a notebook to easily run evaluation on pretrained models below: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Evaluate_Pretrained_super_image_Models.ipynb "Open in Colab") ## BibTeX entry and citation info ```bibtex @misc{wang2021bam, title={BAM: A Lightweight and Efficient Balanced Attention Mechanism for Single Image Super Resolution}, author={Fanyi Wang and Haotian Hu and Cheng Shen}, year={2021}, eprint={2104.07566}, archivePrefix={arXiv}, primaryClass={eess.IV} } ``` ```bibtex @article{wang2019lightweight, title={Lightweight Image Super-Resolution with Adaptive Weighted Learning Network}, author={Wang, Chaofeng and Li, Zhen and Shi, Jun}, journal={arXiv preprint arXiv:1904.02358}, year={2019 } ```
bayartsogt/mlub-bert-large-uncased-tr5do30ep25
bayartsogt
2021-08-31T23:55:23Z
0
1
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
| fold | accuracy |
|------|----------|
| fold 0 | 0.974197247706422 |
| fold 1 | 0.9678899082568807 |
| fold 2 | 0.9724770642201835 |
| fold 3 | 0.9701834862385321 |
| fold 4 | 0.9736238532110092 |
| OOF Acc | 0.9716743119266055 |

```
synset_word
ав        1.000000
ам        0.931507
баг       0.980000
байр      0.943548
бараа     0.964789
гар       0.950210
гол       0.938731
гүн       0.912088
зах       0.946667
зуу       0.995798
зүрх      0.918367
мөнгө     0.973333
нуруу     0.968750
нүд       1.000000
нүүр      0.987805
салбар    0.963636
сар       0.996627
сум       0.816667
тэрэг     0.822581
түүх      0.980237
төр       0.998428
хий       0.993077
хураа     0.858268
хэлбэр    0.727273
хөндий    1.000000
шат       1.000000
эм        1.000000
эрүүл     1.000000
dtype: float64
```
elisno/is_ud_is_pud
elisno
2021-08-31T21:56:16Z
4
0
spacy
[ "spacy", "token-classification", "is", "model-index", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- tags: - spacy - token-classification language: - is model-index: - name: is_ud_is_pud results: - task: name: POS type: token-classification metrics: - name: POS Accuracy type: accuracy value: 0.7356746765 - task: name: SENTER type: token-classification metrics: - name: SENTER Precision type: precision value: 0.8611111111 - name: SENTER Recall type: recall value: 0.93 - name: SENTER F Score type: f_score value: 0.8942307692 - task: name: UNLABELED_DEPENDENCIES type: token-classification metrics: - name: Unlabeled Dependencies Accuracy type: accuracy value: 0.7336065574 - task: name: LABELED_DEPENDENCIES type: token-classification metrics: - name: Labeled Dependencies Accuracy type: accuracy value: 0.7336065574 ---
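The card consists only of evaluation metadata. Assuming the packaged pipeline from this repository has been installed (spaCy pipelines on the Hub are normally distributed as a Python wheel among the repo files), loading it would look roughly like this sketch, with an illustrative Icelandic sentence:

```python
import spacy

# Load the installed pipeline package by its name
nlp = spacy.load("is_ud_is_pud")

doc = nlp("Þetta er stutt setning á íslensku.")
for token in doc:
    print(token.text, token.pos_, token.dep_)
```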
madlag/bert-base-uncased-squadv1-x2.44-f87.7-d26-hybrid-filled-v1
madlag
2021-08-31T12:00:08Z
74
0
transformers
[ "transformers", "pytorch", "tf", "bert", "question-answering", "en", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: en thumbnail: license: mit tags: - question-answering - - datasets: - squad metrics: - squad widget: - text: "Where is the Eiffel Tower located?" context: "The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France. It is named after the engineer Gustave Eiffel, whose company designed and built the tower." - text: "Who is Frederic Chopin?" context: "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano." --- ## BERT-base uncased model fine-tuned on SQuAD v1 This model was created using the [nn_pruning](https://github.com/huggingface/nn_pruning) python library: the **linear layers contains 26.0%** of the original weights. The model contains **42.0%** of the original weights **overall** (the embeddings account for a significant part of the model, and they are not pruned by this method). With a simple resizing of the linear matrices it ran **2.44x as fast as the original model** on the evaluation. This is possible because the pruning method lead to structured matrices: to visualize them, hover below on the plot to see the non-zero/zero parts of each matrix. <div class="graph"><script src="/madlag/bert-base-uncased-squadv1-x2.44-f87.7-d26-hybrid-filled-v1/raw/main/model_card/density_info.js" id="d5d1b3e9-73f5-4cfc-8e33-3745054bc7d0"></script></div> In terms of accuracy, its **F1 is 87.71**, compared with 88.5 for the original model, a **F1 drop of 0.79**. ## Fine-Pruning details This model was fine-tuned from the HuggingFace [model](https://huggingface.co//home/lagunas/devel/hf/nn_pruning/nn_pruning/analysis/tmp_finetune) checkpoint on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer), and distilled from the model [csarron/bert-base-uncased-squad-v1](https://huggingface.co/csarron/bert-base-uncased-squad-v1) This model is case-insensitive: it does not make a difference between english and English. A side-effect of the block pruning is that some of the attention heads are completely removed: 80 heads were removed on a total of 144 (55.6%). Here is a detailed view on how the remaining heads are distributed in the network after pruning. <div class="graph"><script src="/madlag/bert-base-uncased-squadv1-x2.44-f87.7-d26-hybrid-filled-v1/raw/main/model_card/pruning_info.js" id="ccef8803-4310-4434-997e-c9dc158cabdb"></script></div> ## Details of the SQuAD1.1 dataset | Dataset | Split | # samples | | -------- | ----- | --------- | | SQuAD1.1 | train | 90.6K | | SQuAD1.1 | eval | 11.1k | ### Fine-tuning - Python: `3.8.5` - Machine specs: ```CPU: Intel(R) Core(TM) i7-6700K CPU Memory: 64 GiB GPUs: 1 GeForce GTX 3090, with 24GiB memory GPU driver: 455.23.05, CUDA: 11.1 ``` ### Results **Pytorch model file size**: `355MB` (original BERT: `420MB`) | Metric | # Value | # Original ([Table 2](https://www.aclweb.org/anthology/N19-1423.pdf))| Variation | | ------ | --------- | --------- | --------- | | **EM** | **80.03** | **80.8** | **-0.77**| | **F1** | **87.71** | **88.5** | **-0.79**| ## Example Usage Install nn_pruning: it contains the optimization script, which just pack the linear layers into smaller ones by removing empty rows/columns. `pip install nn_pruning` Then you can use the `transformers library` almost as usual: you just have to call `optimize_model` when the pipeline has loaded. 
```python from transformers import pipeline from nn_pruning.inference_model_patcher import optimize_model qa_pipeline = pipeline( "question-answering", model="madlag/bert-base-uncased-squadv1-x2.44-f87.7-d26-hybrid-filled-v1", tokenizer="madlag/bert-base-uncased-squadv1-x2.44-f87.7-d26-hybrid-filled-v1" ) print("/home/lagunas/devel/hf/nn_pruning/nn_pruning/analysis/tmp_finetune parameters: 189.0M") print(f"Parameters count (includes only head pruning, not feed forward pruning)={int(qa_pipeline.model.num_parameters() / 1E6)}M") qa_pipeline.model = optimize_model(qa_pipeline.model, "dense") print(f"Parameters count after complete optimization={int(qa_pipeline.model.num_parameters() / 1E6)}M") predictions = qa_pipeline({ 'context': "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano.", 'question': "Who is Frederic Chopin?", }) print("Predictions", predictions) ```
SongRb/distilbert-base-uncased-finetuned-ner
SongRb
2021-08-31T10:59:42Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model_index: - name: distilbert-base-uncased-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 metric: name: Accuracy type: accuracy value: 0.9850826886110537 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0746 - Precision: 0.9347 - Recall: 0.9426 - F1: 0.9386 - Accuracy: 0.9851 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0832 | 1.0 | 3511 | 0.0701 | 0.9317 | 0.9249 | 0.9283 | 0.9827 | | 0.0384 | 2.0 | 7022 | 0.0701 | 0.9282 | 0.9410 | 0.9346 | 0.9845 | | 0.0222 | 3.0 | 10533 | 0.0746 | 0.9347 | 0.9426 | 0.9386 | 0.9851 | ### Framework versions - Transformers 4.10.0.dev0 - Pytorch 1.8.1 - Datasets 1.11.0 - Tokenizers 0.10.3
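No inference example is included; a minimal sketch of using the checkpoint in a token-classification pipeline (the example sentence is illustrative) could be:

```python
from transformers import pipeline

# aggregation_strategy="simple" merges word-piece predictions into entity spans
ner = pipeline(
    "token-classification",
    model="SongRb/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```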
madlag/bert-base-uncased-squadv1-x1.84-f88.7-d36-hybrid-filled-v1
madlag
2021-08-31T09:31:46Z
78
0
transformers
[ "transformers", "pytorch", "tf", "bert", "question-answering", "en", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: en thumbnail: license: mit tags: - question-answering - - datasets: - squad metrics: - squad widget: - text: "Where is the Eiffel Tower located?" context: "The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France. It is named after the engineer Gustave Eiffel, whose company designed and built the tower." - text: "Who is Frederic Chopin?" context: "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano." --- ## BERT-base uncased model fine-tuned on SQuAD v1 This model was created using the [nn_pruning](https://github.com/huggingface/nn_pruning) python library: the **linear layers contains 36.0%** of the original weights. The model contains **50.0%** of the original weights **overall** (the embeddings account for a significant part of the model, and they are not pruned by this method). With a simple resizing of the linear matrices it ran **1.84x as fast as the dense model** on the evaluation. This is possible because the pruning method lead to structured matrices: to visualize them, hover below on the plot to see the non-zero/zero parts of each matrix. <div class="graph"><script src="/madlag/bert-base-uncased-squadv1-x1.84-f88.7-d36-hybrid-filled-v1/raw/main/model_card/density_info.js" id="3aca15eb-8def-482c-800a-d9f8a6e8cea5"></script></div> In terms of accuracy, its **F1 is 88.72**, compared with 88.5 for the dense version, a **F1 gain of 0.22**. ## Fine-Pruning details This model was fine-tuned from the HuggingFace [model](https://huggingface.co//home/lagunas/devel/hf/nn_pruning/nn_pruning/analysis/tmp_finetune) checkpoint on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer), and distilled from the model [csarron/bert-base-uncased-squad-v1](https://huggingface.co/csarron/bert-base-uncased-squad-v1) This model is case-insensitive: it does not make a difference between english and English. A side-effect of the block pruning is that some of the attention heads are completely removed: 48 heads were removed on a total of 144 (33.3%). Here is a detailed view on how the remaining heads are distributed in the network after pruning. <div class="graph"><script src="/madlag/bert-base-uncased-squadv1-x1.84-f88.7-d36-hybrid-filled-v1/raw/main/model_card/pruning_info.js" id="95fe9d1f-98f7-40e1-a28f-b90d0da0f1a8"></script></div> ## Details of the SQuAD1.1 dataset | Dataset | Split | # samples | | -------- | ----- | --------- | | SQuAD1.1 | train | 90.6K | | SQuAD1.1 | eval | 11.1k | ### Fine-tuning - Python: `3.8.5` - Machine specs: ```CPU: Intel(R) Core(TM) i7-6700K CPU Memory: 64 GiB GPUs: 1 GeForce GTX 3090, with 24GiB memory GPU driver: 455.23.05, CUDA: 11.1 ``` ### Results **Pytorch model file size**: `379MB` (original BERT: `420MB`) | Metric | # Value | # Original ([Table 2](https://www.aclweb.org/anthology/N19-1423.pdf))| Variation | | ------ | --------- | --------- | --------- | | **EM** | **81.69** | **80.8** | **+0.89**| | **F1** | **88.72** | **88.5** | **+0.22**| ## Example Usage Install nn_pruning: it contains the optimization script, which just pack the linear layers into smaller ones by removing empty rows/columns. `pip install nn_pruning` Then you can use the `transformers library` almost as usual: you just have to call `optimize_model` when the pipeline has loaded. 
```python from transformers import pipeline from nn_pruning.inference_model_patcher import optimize_model qa_pipeline = pipeline( "question-answering", model="madlag/bert-base-uncased-squadv1-x1.84-f88.7-d36-hybrid-filled-v1", tokenizer="madlag/bert-base-uncased-squadv1-x1.84-f88.7-d36-hybrid-filled-v1" ) print("/home/lagunas/devel/hf/nn_pruning/nn_pruning/analysis/tmp_finetune parameters: 218.0M") print(f"Parameters count (includes only head pruning, not feed forward pruning)={int(qa_pipeline.model.num_parameters() / 1E6)}M") qa_pipeline.model = optimize_model(qa_pipeline.model, "dense") print(f"Parameters count after complete optimization={int(qa_pipeline.model.num_parameters() / 1E6)}M") predictions = qa_pipeline({ 'context': "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano.", 'question': "Who is Frederic Chopin?", }) print("Predictions", predictions) ```
UBC-NLP/IndT5
UBC-NLP
2021-08-30T22:03:01Z
5
0
transformers
[ "transformers", "pytorch", "t5", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# IndT5: A Text-to-Text Transformer for 10 Indigenous Languages

&nbsp; <img src="https://huggingface.co/UBC-NLP/IndT5/raw/main/IND_langs_large7.png" alt="drawing" width="45%" height="45%" align="right"/>

In this work, we introduce IndT5, the first Transformer language model for Indigenous languages. To train IndT5, we build IndCorpus, a new corpus for 10 Indigenous languages and Spanish.

&nbsp;

# IndT5

We train an Indigenous language model adopting the unified and flexible text-to-text transfer Transformer (T5) approach. T5 treats every text-based language task as a "text-to-text" problem, taking text as input and producing new text as output. T5 is essentially an encoder-decoder Transformer, with the encoder and decoder similar in configuration and size to a BERT<sub>Base</sub>, but with some architectural modifications. Modifications include applying layer normalization before each sub-block (pre-norm) and adding the sub-block's initial input to its output.

# IndCorpus

We build IndCorpus, a collection of 10 Indigenous languages and Spanish comprising 1.17 GB of text, from both Wikipedia and the Bible.

### Data size and number of sentences in the monolingual dataset (collected from Wikipedia and the Bible)

| **Target Language** | **Wiki Size (MB)** | **Wiki #Sentences** | **Bible Size (MB)** | **Bible #Sentences** |
|-------------------|------------------|-------------------|------------------------|-|
| Hñähñu | - | - | 1.4 | 7.5K |
| Wixarika | - | - | 1.3 | 7.5K |
| Nahuatl | 5.8 | 61.1K | 1.5 | 7.5K |
| Guarani | 3.7 | 28.2K | 1.3 | 7.5K |
| Bribri | - | - | 1.5 | 7.5K |
| Rarámuri | - | - | 1.9 | 7.5K |
| Quechua | 5.9 | 97.3K | 4.9 | 31.1K |
| Aymara | 1.7 | 32.9K | 5 | 30.7K |
| Shipibo-Konibo | - | - | 1 | 7.9K |
| Asháninka | - | - | 1.4 | 7.8K |
| Spanish | 1.13K | 5M | - | - |
| Total | 1.15K | 5.22M | 19.8 | 125.3K |

# Github

More details about our model can be found here: https://github.com/UBC-NLP/IndT5

# BibTex

```bibtex
@inproceedings{nagoudi-etal-2021-indt5,
    title = "{I}nd{T}5: A Text-to-Text Transformer for 10 Indigenous Languages",
    author = "Nagoudi, El Moatez Billah and Chen, Wei-Rui and Abdul-Mageed, Muhammad and Cavusoglu, Hasan",
    booktitle = "Proceedings of the First Workshop on Natural Language Processing for Indigenous Languages of the Americas",
    month = jun,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.americasnlp-1.30",
    doi = "10.18653/v1/2021.americasnlp-1.30",
    pages = "265--271"
}
```
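A minimal loading sketch (not part of the original card): it assumes the checkpoint works with the generic seq2seq classes in `transformers`, and the input string and generation settings are only illustrative. Since IndT5 is pretrained with a denoising objective only, it would normally be fine-tuned on a downstream task before its generations are meaningful.

```python
# Illustrative sketch, assuming the repo loads with the standard T5/seq2seq classes.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("UBC-NLP/IndT5")
model = AutoModelForSeq2SeqLM.from_pretrained("UBC-NLP/IndT5")

text = "Ejemplo de texto de entrada."  # placeholder input; replace with task-specific text after fine-tuning
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```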
huggingtweets/_pranavnt
huggingtweets
2021-08-30T21:04:43Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
---
language: en
thumbnail: https://www.huggingtweets.com/_pranavnt/1630357478814/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1414887427706023940/TxmPt4j1_400x400.jpg&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Pranav ⠕</div>
<div style="text-align: center; font-size: 14px;">@_pranavnt</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on tweets from Pranav ⠕.

| Data | Pranav ⠕ |
| --- | --- |
| Tweets downloaded | 406 |
| Retweets | 86 |
| Short tweets | 86 |
| Tweets kept | 234 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1si2997p/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @_pranavnt's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3b5uv7sf) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3b5uv7sf/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/_pranavnt')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
trig/multiverse-second
trig
2021-08-30T20:15:56Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
---
tags:
- conversational
---

# multiverse but with swapped characters and more learning
nreimers/MiniLM-L6-H384-uncased
nreimers
2021-08-30T20:05:29Z
1,993
34
transformers
[ "transformers", "pytorch", "jax", "bert", "feature-extraction", "license:mit", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
---
license: mit
---

## MiniLM: 6 Layer Version

This is a 6-layer version of [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased/), obtained by keeping only every second layer.
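To make the construction concrete, here is an illustrative sketch (not the author's original script) of how a 6-layer copy can be built from the 12-layer checkpoint. The exact layer indices kept for this particular model are an assumption.

```python
# Hypothetical sketch: build a 6-layer model from a 12-layer BERT-style checkpoint
# by copying every second transformer layer.
import torch.nn as nn
from transformers import AutoConfig, AutoModel

src = AutoModel.from_pretrained("microsoft/MiniLM-L12-H384-uncased")

# Create an empty model with half the number of layers.
cfg = AutoConfig.from_pretrained("microsoft/MiniLM-L12-H384-uncased")
cfg.num_hidden_layers = 6
dst = AutoModel.from_config(cfg)

# Copy embeddings, pooler, and every second encoder layer (indices assumed).
dst.embeddings.load_state_dict(src.embeddings.state_dict())
if getattr(src, "pooler", None) is not None and getattr(dst, "pooler", None) is not None:
    dst.pooler.load_state_dict(src.pooler.state_dict())
kept = [1, 3, 5, 7, 9, 11]
dst.encoder.layer = nn.ModuleList([src.encoder.layer[i] for i in kept])

dst.save_pretrained("minilm-l6-h384-uncased-sketch")
```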
huggingtweets/hideo_kojima_en-rxmaybike
huggingtweets
2021-08-30T17:40:33Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
---
language: en
thumbnail: https://www.huggingtweets.com/hideo_kojima_en-rxmaybike/1630345229826/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/914211724412166144/Bf2Yij9b_400x400.jpg&#39;)">
</div>
<div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1409559937445990403/9bkJBvX9_400x400.jpg&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">HIDEO_KOJIMA & jamar "mad dog of ny" majima 🇵🇸</div>
<div style="text-align: center; font-size: 14px;">@hideo_kojima_en-rxmaybike</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on tweets from HIDEO_KOJIMA & jamar "mad dog of ny" majima 🇵🇸.

| Data | HIDEO_KOJIMA | jamar "mad dog of ny" majima 🇵🇸 |
| --- | --- | --- |
| Tweets downloaded | 3228 | 3166 |
| Retweets | 2656 | 1404 |
| Short tweets | 29 | 432 |
| Tweets kept | 543 | 1330 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3nd0jitx/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @hideo_kojima_en-rxmaybike's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3digtvss) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3digtvss/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/hideo_kojima_en-rxmaybike')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
AdapterHub/bert-base-uncased-pf-ud_en_ewt
AdapterHub
2021-08-30T15:54:13Z
1
0
adapter-transformers
[ "adapter-transformers", "bert", "adapterhub:dp/ud_ewt", "en", "dataset:universal_dependencies", "region:us" ]
null
2022-03-02T23:29:04Z
---
tags:
- bert
- adapterhub:dp/ud_ewt
- adapter-transformers
datasets:
- universal_dependencies
language:
- en
---

# Adapter `AdapterHub/bert-base-uncased-pf-ud_en_ewt` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [dp/ud_ewt](https://adapterhub.ml/explore/dp/ud_ewt/) dataset and includes a prediction head for dependency parsing.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```

_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-ud_en_ewt", source="hf", set_active=True)
```

## Architecture & Training

This adapter was trained using the adapter-transformers example script for dependency parsing.
See https://github.com/Adapter-Hub/adapter-transformers/tree/master/examples/dependency-parsing.

## Evaluation results

Scores achieved by dependency parsing adapters on the test set of UD English EWT after training:

| Model | UAS | LAS |
| --- | --- | --- |
| `bert-base-uncased` | 91.74 | 89.15 |
| `roberta-base` | 91.43 | 88.43 |

## Citation

<!-- Add some description here -->
vasudevgupta/gsoc-wav2vec2-xlsr-53
vasudevgupta
2021-08-30T07:38:48Z
4
0
transformers
[ "transformers", "tf", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
TensorFlow equivalent of [`facebook/wav2vec2-large-xlsr-53`](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
vasudevgupta/gsoc-wav2vec2-robust
vasudevgupta
2021-08-30T07:34:01Z
5
1
transformers
[ "transformers", "tf", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
TensorFlow equivalent of [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust)
uhhlt/bert-based-uncased-hatespeech-movies
uhhlt
2021-08-29T21:42:02Z
6
3
transformers
[ "transformers", "tf", "bert", "text-classification", "en", "arxiv:2108.10724", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
---
language: en
tag: text-classification
datasets:
- twitter
- movies subtitles
---

# bert-based-uncased-hatespeech-movies: a hate speech model used to classify text in movie subtitles as **normal**, **offensive**, or **hatespeech**.

The model starts from a pre-trained transformer (bert-base-uncased), which is further trained on Twitter comments labeled as normal, offensive, or hate in order to learn the context of social media data. It is then fine-tuned on the movie subtitles dataset.

Please check our paper, and cite it if you use the model:

```
@article{von2021hateful,
  title={How Hateful are Movies? A Study and Prediction on Movie Subtitles},
  author={von Boguszewski, Niklas and Moin, Sana and Bhowmick, Anirban and Yimam, Seid Muhie and Biemann, Chris},
  journal={arXiv preprint arXiv:2108.10724},
  year={2021}
}
```

The dataset and models are available at https://github.com/uhh-lt/hatespeech
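The card gives no usage snippet; below is a hypothetical inference sketch. It assumes the repository ships TensorFlow weights (as its tags suggest) along with a tokenizer, and the label names and their order are not documented in the card.

```python
# Illustrative sketch, not official usage documentation for this checkpoint.
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification, pipeline

model_id = "uhhlt/bert-based-uncased-hatespeech-movies"
tokenizer = AutoTokenizer.from_pretrained(model_id)  # assumes a tokenizer is included in the repo
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("You are a wonderful person."))
```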
Ann2020/distilbert-base-uncased-finetuned-ner
Ann2020
2021-08-29T21:13:47Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:04Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model_index:
- name: distilbert-base-uncased-finetuned-ner
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: conll2003
      type: conll2003
      args: conll2003
    metric:
      name: Accuracy
      type: accuracy
      value: 0.984018301110458
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-ner

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0609
- Precision: 0.9275
- Recall: 0.9365
- F1: 0.9320
- Accuracy: 0.9840

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2527 | 1.0 | 878 | 0.0706 | 0.9120 | 0.9181 | 0.9150 | 0.9803 |
| 0.0517 | 2.0 | 1756 | 0.0603 | 0.9174 | 0.9349 | 0.9261 | 0.9830 |
| 0.031 | 3.0 | 2634 | 0.0609 | 0.9275 | 0.9365 | 0.9320 | 0.9840 |

### Framework versions

- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
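The auto-generated card contains no usage example. A minimal inference sketch, assuming the checkpoint works with the standard token-classification pipeline and that the CoNLL-2003 label mapping was saved with the model:

```python
# Illustrative NER usage sketch for this CoNLL-2003 fine-tune.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Ann2020/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Hugging Face is based in New York City."))
```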
Aloka/mbart50-ft-si-en
Aloka
2021-08-29T13:11:14Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
---
tags:
- generated_from_trainer
model_index:
- name: mbart50-ft-si-en
  results:
  - task:
      name: Sequence-to-sequence Language Modeling
      type: text2text-generation
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# mbart50-ft-si-en

This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.0476

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.98 | 30 | 5.6367 |
| No log | 1.98 | 60 | 4.1221 |
| No log | 2.98 | 90 | 3.1880 |
| No log | 3.98 | 120 | 3.1175 |
| No log | 4.98 | 150 | 3.3575 |
| No log | 5.98 | 180 | 3.7855 |
| No log | 6.98 | 210 | 4.3530 |
| No log | 7.98 | 240 | 4.7216 |
| No log | 8.98 | 270 | 4.9202 |
| No log | 9.98 | 300 | 5.0476 |

### Framework versions

- Transformers 4.9.2
- Pytorch 1.6.0
- Datasets 1.11.0
- Tokenizers 0.10.3
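The card does not document the intended task. The model name suggests a Sinhala-to-English fine-tune of mBART-50, so the sketch below assumes the standard mBART-50 translation conventions (`si_LK` source, `en_XX` target) and that the repository includes the mBART-50 tokenizer files; treat it as illustrative only.

```python
# Hypothetical translation sketch under the assumptions stated above.
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

model_id = "Aloka/mbart50-ft-si-en"
tokenizer = MBart50TokenizerFast.from_pretrained(model_id, src_lang="si_LK", tgt_lang="en_XX")
model = MBartForConditionalGeneration.from_pretrained(model_id)

inputs = tokenizer("ශ්‍රී ලංකාව ලස්සන රටකි.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"],  # force English as the output language
    max_length=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```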
j-hartmann/emotion-english-roberta-large
j-hartmann
2021-08-29T11:48:09Z
1,644
14
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "sentiment", "emotion", "twitter", "reddit", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
---
language: "en"
tags:
- roberta
- sentiment
- emotion
- twitter
- reddit
widget:
- text: "Oh wow. I didn't know that."
- text: "This movie always makes me cry.."
- text: "Oh Happy Day"
---

## Description ℹ

With this model, you can classify emotions in English text data. The model was trained on 6 diverse datasets and predicts Ekman's 6 basic emotions, plus a neutral class:

1) anger 🤬
2) disgust 🤢
3) fear 😨
4) joy 😀
5) neutral 😐
6) sadness 😭
7) surprise 😲

The model is a fine-tuned checkpoint of [RoBERTa-large](https://huggingface.co/roberta-large).

For further details on this emotion model, please refer to the model card of its [DistilRoBERTa](https://huggingface.co/j-hartmann/emotion-english-distilroberta-base) version.
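A minimal usage sketch, mirroring the documented usage of the DistilRoBERTa version and assumed to carry over to this checkpoint:

```python
# Illustrative sketch: classify the emotion of a sentence and return all class scores.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-roberta-large",
    return_all_scores=True,  # one score per emotion class
)
print(classifier("Oh wow. I didn't know that."))
```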
jean-paul/kinyaRoberta-small
jean-paul
2021-08-29T10:27:01Z
5
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "arxiv:1907.11692", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# Model description

A model pretrained on a Kinyarwanda language dataset using a masked language modeling (MLM) objective. The RoBERTa model was first introduced in [this paper](https://arxiv.org/abs/1907.11692). This KinyaRoBERTa model was pretrained with uncased tokens, which means that there is no difference between, for example, ikinyarwanda and Ikinyarwanda.

# Training parameters

#### Dataset

The dataset used combines news articles from Rwanda extracted from different news web pages, dumped Wikipedia files, and books in Kinyarwanda. The sizes of the data sources are 72 thousand news articles, three thousand dumped Wikipedia articles, and six books with more than a thousand pages.

#### Hyperparameters

The model was trained with the default configuration of RoBERTa and the Trainer from Huggingface. However, due to computational resource constraints, we kept the number of transformer layers at 6.

# How to use:

1) The model can be used directly with the pipeline for masked language modeling as follows:

```
from transformers import pipeline
the_mask_pipe = pipeline(
    "fill-mask",
    model='jean-paul/kinyaRoberta-small',
    tokenizer='jean-paul/kinyaRoberta-small',
)
the_mask_pipe("Ejo ndikwiga nagize <mask> baje kunsura.")

[{'sequence': 'Ejo ndikwiga nagize amahirwe baje kunsura.', 'score': 0.3530674874782562, 'token': 1711, 'token_str': ' amahirwe'},
 {'sequence': 'Ejo ndikwiga nagize ubwoba baje kunsura.', 'score': 0.2858319878578186, 'token': 2594, 'token_str': ' ubwoba'},
 {'sequence': 'Ejo ndikwiga nagize ngo baje kunsura.', 'score': 0.032475441694259644, 'token': 396, 'token_str': ' ngo'},
 {'sequence': 'Ejo ndikwiga nagize abana baje kunsura.', 'score': 0.029481062665581703, 'token': 739, 'token_str': ' abana'},
 {'sequence': 'Ejo ndikwiga nagize abantu baje kunsura.', 'score': 0.016263306140899658, 'token': 500, 'token_str': ' abantu'}]
```

2) Direct use from the transformers library to get features using AutoModel:

```
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("jean-paul/kinyaRoberta-small")
model = AutoModelForMaskedLM.from_pretrained("jean-paul/kinyaRoberta-small")

input_text = "Ejo ndikwiga nagize abashyitsi baje kunsura."
encoded_input = tokenizer(input_text, return_tensors='pt')
output = model(**encoded_input)
```

__Note__: We used the huggingface implementations for pretraining RoBERTa from scratch, both the RoBERTa model and the classes needed to do it.
Harshal6927/Tony_Stark_GPT
Harshal6927
2021-08-29T07:39:33Z
7
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
---
tags:
- conversational
---

# Tony Stark GPT

My first AI model, still learning. It was trained on a small dataset, so don't expect much.
huggingtweets/mullbot_forever
huggingtweets
2021-08-29T05:36:32Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
---
language: en
thumbnail: https://www.huggingtweets.com/mullbot_forever/1630215387933/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1334794074822504449/KX8oD2AU_400x400.jpg&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">extremely online bot</div>
<div style="text-align: center; font-size: 14px;">@mullbot_forever</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on tweets from extremely online bot.

| Data | extremely online bot |
| --- | --- |
| Tweets downloaded | 1432 |
| Retweets | 0 |
| Short tweets | 22 |
| Tweets kept | 1410 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/301sf9tj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mullbot_forever's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2u7gvuie) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2u7gvuie/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/mullbot_forever')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Tejasvb/DialoGPT-small-rick
Tejasvb
2021-08-29T05:05:19Z
6
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
---
tags:
- conversational
---
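The card itself carries no usage notes. Below is a minimal chat sketch following the standard DialoGPT generation pattern, on the assumption that this fine-tune keeps DialoGPT's EOS-separated turn format.

```python
# Illustrative multi-turn chat sketch using the generic DialoGPT pattern (assumption: same turn format).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Tejasvb/DialoGPT-small-rick"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

chat_history_ids = None
for user_input in ["Hi Rick, how are you?", "What are we doing today?"]:
    # Encode the user turn and append it to the running chat history.
    new_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")
    bot_input_ids = new_ids if chat_history_ids is None else torch.cat([chat_history_ids, new_ids], dim=-1)

    # Generate a reply, then print only the newly generated tokens.
    chat_history_ids = model.generate(bot_input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id)
    reply = tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)
    print("Bot:", reply)
```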
filco306/gpt2-switchboard-paraphraser
filco306
2021-08-28T23:33:47Z
6
0
transformers
[ "transformers", "pytorch", "text-generation", "arxiv:2010.05700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
# GPT2 Switchboard style transfer paraphraser

This is the trained Switchboard model from the paper [Reformulating Unsupervised Style Transfer as Paraphrase Generation](https://arxiv.org/abs/2010.05700) by Krishna K. et al.

Note that I (the uploader) am not the author of the paper. Permission to upload to Huggingface was given by the main author.

## Citation

If you found this model useful, please cite the original work:

```
@inproceedings{style20,
  author = {Kalpesh Krishna and John Wieting and Mohit Iyyer},
  booktitle = {Empirical Methods in Natural Language Processing},
  year = "2020",
  title = {Reformulating Unsupervised Style Transfer as Paraphrase Generation},
}
```
filco306/gpt2-base-style-paraphraser
filco306
2021-08-28T19:27:41Z
7
4
transformers
[ "transformers", "pytorch", "text-generation", "arxiv:2010.05700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
# GPT2 base style transfer paraphraser

This is the trained base model from the paper [Reformulating Unsupervised Style Transfer as Paraphrase Generation](https://arxiv.org/abs/2010.05700) by Krishna K. et al.

Note that I (the uploader) am not the author of the paper. Permission to upload to Huggingface was given by the main author.

## Citation

If you found this model useful, please cite the original work:

```
@inproceedings{style20,
  author = {Kalpesh Krishna and John Wieting and Mohit Iyyer},
  booktitle = {Empirical Methods in Natural Language Processing},
  year = "2020",
  title = {Reformulating Unsupervised Style Transfer as Paraphrase Generation},
}
```