modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
TalTechNLP/espnet2_estonian | TalTechNLP | 2021-09-29T07:36:28Z | 7 | 1 | espnet | [
"espnet",
"audio",
"automatic-speech-recognition",
"et",
"license:cc-by-4.0",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
tags:
- espnet
- audio
- automatic-speech-recognition
language: et
license: cc-by-4.0
---
# Estonian Espnet2 ASR model
## Model description
This is a general-purpose Estonian ASR model trained in the Lab of Language Technology at TalTech.
## Intended uses & limitations
This model is intended for general-purpose speech recognition, such as broadcast conversations, interviews, talks, etc.
## How to use
```python
from espnet2.bin.asr_inference import Speech2Text
model = Speech2Text.from_pretrained(
    "TalTechNLP/espnet2_estonian",
    lm_weight=0.6, ctc_weight=0.4, beam_size=60
)
# read a sound file with 16k sample rate
import soundfile
speech, rate = soundfile.read("speech.wav")
assert rate == 16000
text, *_ = model(speech)
print(text[0])
```
#### Limitations and bias
Since this model was trained on mostly broadcast speech and texts from the web, it might have problems correctly decoding the following:
* Speech containing technical and other domain-specific terms
* Children's speech
* Non-native speech
* Speech recorded under very noisy conditions or with a microphone far from the speaker
* Very spontaneous and overlapping speech
## Training data
Acoustic training data:
| Type | Amount (h) |
|-----------------------|:------:|
| Broadcast speech | 591 |
| Spontaneous speech | 53 |
| Elderly speech corpus | 53 |
| Talks, lectures | 49 |
| Parliament speeches | 31 |
| *Total* | *761* |
Language model training data:
* Estonian National Corpus 2019
* OpenSubtitles
* Speech transcripts
## Training procedure
Standard ESPnet2 Conformer recipe.
## Evaluation results
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_large_valid.loss.ave_5best_asr_model_valid.acc.ave/aktuaalne2021.testset|2864|56575|93.1|4.5|2.4|2.0|8.9|63.4|
|decode_asr_lm_lm_large_valid.loss.ave_5best_asr_model_valid.acc.ave/jutusaated.devset|273|4677|93.9|3.6|2.4|1.2|7.3|46.5|
|decode_asr_lm_lm_large_valid.loss.ave_5best_asr_model_valid.acc.ave/jutusaated.testset|818|11093|94.7|2.7|2.5|0.9|6.2|45.0|
|decode_asr_lm_lm_large_valid.loss.ave_5best_asr_model_valid.acc.ave/www-trans.devset|1207|13865|82.3|8.5|9.3|3.4|21.2|74.1|
|decode_asr_lm_lm_large_valid.loss.ave_5best_asr_model_valid.acc.ave/www-trans.testset|1648|22707|86.4|7.6|6.0|2.5|16.1|75.7|
### BibTeX entry and citation info
#### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
``` |
huggingtweets/balcobops-liyrex_irl-waitforgot | huggingtweets | 2021-09-29T04:04:44Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/balcobops-liyrex_irl-waitforgot/1632888280266/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1379189650556911619/EiZklugS_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1441278016164818955/T-PDXXvg_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1438447321604313089/5_lZmeyb_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Wait Forgot & Balco - Special Boperative & Liyrex</div>
<div style="text-align: center; font-size: 14px;">@balcobops-liyrex_irl-waitforgot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Wait Forgot & Balco - Special Boperative & Liyrex.
| Data | Wait Forgot | Balco - Special Boperative | Liyrex |
| --- | --- | --- | --- |
| Tweets downloaded | 3194 | 1171 | 3189 |
| Retweets | 1294 | 129 | 1587 |
| Short tweets | 285 | 122 | 279 |
| Tweets kept | 1615 | 920 | 1323 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/371suxoa/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @balcobops-liyrex_irl-waitforgot's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/bj54dpp8) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/bj54dpp8/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/balcobops-liyrex_irl-waitforgot')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/imjackrudd | huggingtweets | 2021-09-28T23:31:37Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/imjackrudd/1632871893609/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1289653820071522304/cdikNvkG_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Jack Rudd 🇹🇹 🏳️⚧️</div>
<div style="text-align: center; font-size: 14px;">@imjackrudd</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Jack Rudd 🇹🇹 🏳️⚧️.
| Data | Jack Rudd 🇹🇹 🏳️⚧️ |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 55 |
| Short tweets | 327 |
| Tweets kept | 2864 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3g5589wt/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @imjackrudd's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/eyywpszu) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/eyywpszu/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/imjackrudd')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/dndomme | huggingtweets | 2021-09-28T23:14:56Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/dndomme/1632870893354/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1428106877926260736/xiq2bdMI_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Pirate Queen Grey</div>
<div style="text-align: center; font-size: 14px;">@dndomme</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Pirate Queen Grey.
| Data | Pirate Queen Grey |
| --- | --- |
| Tweets downloaded | 3218 |
| Retweets | 1329 |
| Short tweets | 288 |
| Tweets kept | 1601 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3ucgtv6r/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dndomme's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1sej7nbm) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1sej7nbm/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dndomme')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/laura_the_loser | huggingtweets | 2021-09-28T22:31:52Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/laura_the_loser/1632868308444/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1405044989013364744/OowZLyUZ_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Laura UwU</div>
<div style="text-align: center; font-size: 14px;">@laura_the_loser</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Laura UwU.
| Data | Laura UwU |
| --- | --- |
| Tweets downloaded | 126 |
| Retweets | 22 |
| Short tweets | 34 |
| Tweets kept | 70 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/kpebddab/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @laura_the_loser's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1jsq6074) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1jsq6074/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/laura_the_loser')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
rhtnr/erhthh | rhtnr | 2021-09-28T21:18:25Z | 0 | 0 | null | [
"region:us"
] | null | 2022-03-02T23:29:05Z | https://escape-net.eu/groups/film-complet-venom-let-there-be-carnage-streaming-vf-gratuit-en-francais/
https://escape-net.eu/groups/venom-let-there-be-carnage-2021-streaming-vf-film-complet-en-francais/
https://escape-net.eu/groups/venom-let-there-be-carnage-streaming-vf-en-hd-fr/
https://escape-net.eu/groups/venom-let-there-be-carnage-streaming-vf-film-complet-2021-en-francais-hd/
https://escape-net.eu/groups/streaming-hd-venom-let-there-be-carnage-2021-en-streaming-vf-complet-gratuit-francais/
https://escape-net.eu/groups/vostfr-venom-let-there-be-carnage-2021-film-complet-streaming-vf-en-francais-09-27-2021/
https://escape-net.eu/groups/regarder-venom-let-there-be-carnage-streaming-vf-gratuit-en-francais-27-septembre-2021/
https://escape-net.eu/groups/regarder-venom-let-there-be-carnage-streaming-vf-2021-en-francais/
https://escape-net.eu/groups/venom-let-there-be-carnage-2021-film-complet-streaming-vf/
https://escape-net.eu/groups/venom-let-there-be-carnage-streaming-vf-2021-film-complet-415303912/
https://parentsolo31.com/advert/film-complet-mourir-peut-attendre-streaming-vf-gratuit-en-francais/
https://parentsolo31.com/advert/mourir-peut-attendre-2021-streaming-vf-film-complet-en-francais/
https://parentsolo31.com/advert/mourir-peut-attendre-streaming-vf-en-hd-fr/
https://parentsolo31.com/advert/mourir-peut-attendre-streaming-vf-film-complet-2021-en-francais-hd/
https://parentsolo31.com/advert/streaming-hd-mourir-peut-attendre-2021-en-streaming-vf-complet-gratuit-francais/
https://parentsolo31.com/advert/vostfr-mourir-peut-attendre-2021-film-complet-streaming-vf-en-francais-09-27-2021/
https://parentsolo31.com/advert/regarder-mourir-peut-attendre-streaming-vf-gratuit-en-francais-27-septembre-2021/
https://parentsolo31.com/advert/regarder-mourir-peut-attendre-streaming-vf-2021-en-francais/
https://parentsolo31.com/advert/mourir-peut-attendre-2021-film-complet-streaming-vf/
https://parentsolo31.com/advert/mourir-peut-attendre-streaming-vf-2021-film-complet/
https://clubdeportivocdl.com/advert/voir-dune-streaming-vf-2021-complet/
https://clubdeportivocdl.com/advert/film-complet-dune-streaming-vf-gratuit-en-francais/
https://clubdeportivocdl.com/advert/dune-streaming-vf-film-complet-2021-en-francais-hd/
https://clubdeportivocdl.com/advert/dune-2021-hd-film-complet-vf-francais/
https://clubdeportivocdl.com/advert/dune-streaming-vf-2021-film-complet/
https://clubdeportivocdl.com/advert/dune-2021-film-streaming-vf-streaming-vostfr/
https://clubdeportivocdl.com/advert/dune-2021-film-complet-streaming-vf/
https://clubdeportivocdl.com/advert/regarder-dune-2021-film-streaming-vf-francais/
https://clubdeportivocdl.com/advert/regarder-dune-2021-streaming-vf-en-francais/
https://clubdeportivocdl.com/advert/regarder-dune-2021-film-complet-streaming-vf/
https://clubdeportivocdl.com/advert/dune-film-complet-en-streaming-vf/
https://clubdeportivocdl.com/advert/film-complet-dune-2021-vf-streaming-francais/
https://clubdeportivocdl.com/advert/regarder-dune-vostfr-en-streaming-vf-gratuit-complet-hd-en-francais/
https://clubdeportivocdl.com/advert/filmcomplet-dune-2021-streaming-vf-en-complet-gratuit/
https://clubdeportivocdl.com/advert/regarder-dune-2021-film-complet-en-vf-hd-streaming/
https://clubdeportivocdl.com/advert/regarder-complet-dune-streaming-vf-film-gratuit/
https://clubdeportivocdl.com/advert/film-complet-dune-streaming-vf-complet-2021-francais-hd/
https://clubdeportivocdl.com/advert/vf-dune-streaming-vf-gratuit-en-francais-2021/
https://clubdeportivocdl.com/advert/film-hd-dune-2021-streaming-vf-gratuit-complet/
https://clubdeportivocdl.com/advert/film-complet-dune-2021-streaming-vf-en-fr/
https://clubdeportivocdl.com/advert/regarder-dune-2021-film-complet-streaming-vf-2/
https://clubdeportivocdl.com/advert/voir-dune-2021-streaming-vf-complet-en-gratuit/
https://clubdeportivocdl.com/advert/mourir-peut-attendre-streaming-vf-film-complet-2021-en-francais-hd/
https://clubdeportivocdl.com/advert/mourir-peut-attendre-streaming-vf-2021-film-complet/
https://www.onfeetnation.com/profiles/blogs/linkfilm-streaming-vf-complet-en-francais?xg_source=activity
http://snomoto.com/forums/timbersled/linkfilm-streaming-vf-complet-en-francais/
https://sites.google.com/view/dnjyrjmt/halaman-muka
https://escape-net.eu/groups/linkfilm-complet-vf/ |
lewtun/bert-base-uncased-finetuned-imdb | lewtun | 2021-09-28T20:45:38Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: bert-base-uncased-finetuned-imdb
results:
- task:
name: Masked Language Modeling
type: fill-mask
dataset:
name: imdb
type: imdb
args: plain_text
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0284
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
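These settings correspond roughly to the following `TrainingArguments` (an illustrative sketch, not the exact training script):
```python
from transformers import TrainingArguments

# Hyperparameters listed above; the Adam settings (betas=(0.9, 0.999), epsilon=1e-8) match the TrainingArguments defaults.
training_args = TrainingArguments(
    output_dir="bert-base-uncased-finetuned-imdb",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    fp16=True,  # "Native AMP" mixed precision
)
```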
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2244 | 1.0 | 958 | 2.0726 |
| 2.1537 | 2.0 | 1916 | 2.0381 |
| 2.1183 | 3.0 | 2874 | 2.0284 |
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.1+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
lewtun/MiniLM-L12-H384-uncased-finetuned-imdb | lewtun | 2021-09-28T18:59:38Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: MiniLM-L12-H384-uncased-finetuned-imdb
results:
- task:
name: Masked Language Modeling
type: fill-mask
dataset:
name: imdb
type: imdb
args: plain_text
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MiniLM-L12-H384-uncased-finetuned-imdb
This model is a fine-tuned version of [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9328
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.2464 | 1.0 | 391 | 4.2951 |
| 4.2302 | 2.0 | 782 | 4.0023 |
| 4.0726 | 3.0 | 1173 | 3.9328 |
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.1+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
shahrukhx01/roberta-base-squad2-boolq-baseline | shahrukhx01 | 2021-09-28T18:18:26Z | 6 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ## Multiple Prediction Heads
* ExtractiveQA Head
* Three-Class Classification Head, classes => (yes, no, extra_qa), to answer binary questions or route to the ExtractiveQA Head
## BoolQ Validation dataset Evaluation: <br/>
support => 3270 <br/>
accuracy => 0.73 <br/>
macro f1 => 0.71
## SQuAD Validation dataset Evaluation: <br/>
eval_HasAns_exact = 78.0196 <br/>
eval_HasAns_f1 = 84.0327 <br/>
eval_HasAns_total = 5928 <br/>
eval_NoAns_exact = 81.8167 <br/>
eval_NoAns_f1 = 81.8167 <br/>
eval_NoAns_total = 5945 <br/>
eval_best_exact = 79.9208 <br/>
eval_best_f1 = 82.9231 <br/>
eval_exact = 79.9208 <br/>
eval_f1 = 82.9231 <br/>
eval_samples = 12165 <br/>
eval_total = 11873
## Usage in transformers
Import the script from [here](https://huggingface.co/shahrukhx01/roberta-base-squad2-boolq-baseline/blob/main/multitask_model.py)
```python
import torch
from multitask_model import RobertaForMultitaskQA
from transformers import RobertaTokenizerFast
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = RobertaForMultitaskQA.from_pretrained(
    "shahrukhx01/roberta-base-squad2-boolq-baseline",
    task_labels_map={"squad_v2": 2, "boolq": 3},
).to(device)
tokenizer = RobertaTokenizerFast.from_pretrained("shahrukhx01/roberta-base-squad2-boolq-baseline")
``` |
sgugger/marian-finetuned-kde4-en-to-fr | sgugger | 2021-09-28T13:47:35Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 53.2503
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8666
- Bleu: 53.2503
- Gen Len: 14.7005
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.1+cu111
- Datasets 1.12.2.dev0
- Tokenizers 0.10.3
|
huggingtweets/maxfleit-sahil | huggingtweets | 2021-09-28T07:28:47Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1323696590947831810/4LwHUmkU_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1355219248998535169/aprSSrFr_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Maximilian Fleitmann & sahil</div>
<div style="text-align: center; font-size: 14px;">@maxfleit-sahil</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Maximilian Fleitmann & sahil.
| Data | Maximilian Fleitmann | sahil |
| --- | --- | --- |
| Tweets downloaded | 1161 | 3018 |
| Retweets | 18 | 2505 |
| Short tweets | 25 | 108 |
| Tweets kept | 1118 | 405 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2mud364f/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @maxfleit-sahil's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3v52vz1s) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3v52vz1s/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/maxfleit-sahil')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/elizgerber-galaxykate-ianhorswill | huggingtweets | 2021-09-27T22:54:21Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/elizgerber-galaxykate-ianhorswill/1632783257334/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1371914197555105794/OKpRjt66_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1790733507/me-cc_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/2828021100/bfce2ad653f8d49d2ebf984b620df18b_400x400.jpeg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Dr Kate Compton, Code Wizard & Ian Horswill & Liz Gerber</div>
<div style="text-align: center; font-size: 14px;">@elizgerber-galaxykate-ianhorswill</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Dr Kate Compton, Code Wizard & Ian Horswill & Liz Gerber.
| Data | Dr Kate Compton, Code Wizard | Ian Horswill | Liz Gerber |
| --- | --- | --- | --- |
| Tweets downloaded | 3242 | 179 | 1622 |
| Retweets | 607 | 35 | 545 |
| Short tweets | 214 | 6 | 34 |
| Tweets kept | 2421 | 138 | 1043 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1dyol8xs/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @elizgerber-galaxykate-ianhorswill's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/37pdtbyk) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/37pdtbyk/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/elizgerber-galaxykate-ianhorswill')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
vesteinn/word2vec_is_rmh | vesteinn | 2021-09-27T22:08:20Z | 0 | 0 | null | [
"is",
"license:agpl-3.0",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
license: agpl-3.0
language:
- is
---
# word2vec model trained on Icelandic
This model is trained on the lemmas of the Icelandic Gigaword Corpus version 20.05. It was trained using the gensim package, version 4.1.0, with default parameters (100 dimensions, window size 5).
This model cannot be loaded directly since it uses gensim; clone the repository and run the following to use it.
```python
import gensim
model = gensim.models.Word2Vec.load("./rmh.w2v.model")
```
## Example output
```bash
In [6]: model.wv.most_similar("england")
Out[6]:
[('wales', 0.8113704323768616),
('skotland', 0.7611601948738098),
('bretlandseyjar', 0.7280426621437073),
('gateshead', 0.6975484490394592),
('ástralía', 0.6963852047920227),
('eastbourne', 0.6939234137535095),
('englandi', 0.6908402442932129),
('bath', 0.6849308013916016),
('lynndie', 0.6826340556144714),
('glasgow', 0.6815919876098633)]
In [7]: model.wv.most_similar("ísland")
Out[7]:
[('norðurlönd', 0.6843729615211487),
('land', 0.6696498990058899),
('íslendingur', 0.6645756959915161),
('íslenskur', 0.6627770662307739),
('hérlendis', 0.6609933376312256),
('íslandi', 0.6514216661453247),
('evrópa', 0.6289927959442139),
('fróðskaparsetur', 0.6046777367591858),
('evrópuland', 0.5911464095115662),
('bandaríkin', 0.5906434655189514)]
```
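Individual word vectors can also be read directly from the model (a minimal sketch; the query word is taken from the example above):
```python
vec = model.wv["ísland"]  # 100-dimensional vector, per the training settings above
print(vec.shape)          # (100,)
```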
|
huggingtweets/lemonjellyhats | huggingtweets | 2021-09-27T20:44:01Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/lemonjellyhats/1632775437870/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1388380421780611074/njQLSVzW_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">tanisha</div>
<div style="text-align: center; font-size: 14px;">@lemonjellyhats</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from tanisha.
| Data | tanisha |
| --- | --- |
| Tweets downloaded | 791 |
| Retweets | 241 |
| Short tweets | 19 |
| Tweets kept | 531 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/xyrf4cur/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @lemonjellyhats's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1yfq9m4c) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1yfq9m4c/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/lemonjellyhats')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
colorfulscoop/gpt2-small-ja | colorfulscoop | 2021-09-27T11:50:17Z | 89 | 4 | transformers | [
"transformers",
"pytorch",
"tf",
"gpt2",
"text-generation",
"ja",
"dataset:wikipedia",
"license:cc",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: ja
datasets: wikipedia
widget:
- text: 統計的機械学習でのニューラルネットワーク
license: cc
---
# GPT-2 small Japanese model
This repository contains a GPT-2 small model trained on the Japanese Wikipedia dataset.
## Training data
[Japanese Wikipedia](https://ja.wikipedia.org/wiki/Wikipedia:データベースダウンロード) dataset as of Aug 20, 2021, released under [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/), is used for both the tokenizer and the GPT-2 model.
We split the dataset into three subsets - train, valid and test sets. Both the tokenizer and the model were trained on the train set.
The train set contains around 540M tokens.
## Model description
The model architecture is the same as the GPT-2 small model (n_ctx: 1024, n_embd: 768, n_head: 12, n_layer: 12) except for the vocabulary size,
which is set to 32,000 instead of the original 50,257.
`transformers.GPT2LMHeadModel` is used for training.
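For reference, this architecture corresponds roughly to the following configuration (an illustrative sketch; the released `config.json` is authoritative):
```python
from transformers import GPT2Config, GPT2LMHeadModel

# GPT-2 small hyperparameters with the reduced 32,000-token vocabulary
# (n_positions corresponds to the n_ctx value quoted above).
config = GPT2Config(vocab_size=32000, n_positions=1024, n_embd=768, n_head=12, n_layer=12)
model = GPT2LMHeadModel(config)
```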
## Tokenizer description
[SentencePiece](https://github.com/google/sentencepiece) is used as a tokenizer for this model.
We utilized 1,000,000 sentences from the train set.
The vocabulary size was 32,000.
The `add_dummy_prefix` option was set to `True` because Japanese words are not separated by whitespace.
After training, the tokenizer model was loaded as `transformers.BertGenerationTokenizer`
because it supports SentencePiece models and does not add any special tokens by default,
which is especially useful for a text generation task.
## Training
The model was trained on the train set for 30 epochs with batch size 32. Each sample contained 1024 tokens.
We used the Adam optimizer. The learning rate was linearly increased from `0` to `1e-4` during the first 10,000 steps.
The gradient clip norm was set to `1.0`.
The test set perplexity of the trained model was 29.13.
Please refer to [GitHub](https://github.com/colorfulscoop/gpt-ja) for more training details.
## Usage
First, install the dependencies.
```sh
$ pip install transformers==4.10.0 torch==1.8.1 sentencepiece==0.1.96
```
Then use pipeline to generate sentences.
```python
>>> import transformers
>>> pipeline = transformers.pipeline("text-generation", "colorfulscoop/gpt2-small-ja")
>>> pipeline("統計的機械学習でのニューラルネットワーク", do_sample=True, top_p=0.95, top_k=50, num_return_sequences=3)
```
**Note:** The default model configuration `config.json` sets parameters for text generation with `do_sample=True`, `top_k=50`, `top_p=0.95`.
Please override these parameters explicitly when you need different generation behavior, as in the example below.
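For example (an illustrative call; the parameter values here are arbitrary):
```python
>>> pipeline("統計的機械学習でのニューラルネットワーク", do_sample=False, max_length=50)
```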
## Versions
We recommend specifying `revision` when loading the model for reproducibility.
| Revision | Date of Wikipedia dump |
| --- | --- |
| 20210820.1.0 | Aug 20, 2021 |
| 20210301.1.0 | March 1, 2021 |
You can specify `revision` as follows.
```py
# Example of pipeline
>>> transformers.pipeline("text-generation", "colorfulscoop/gpt2-small-ja", revision="20210820.1.0")
# Example of AutoModel
>>> transformers.AutoModel.from_pretrained("colorfulscoop/gpt2-small-ja", revision="20210820.1.0")
```
## License
All the models included in this repository are licensed under [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/).
**Disclaimer:** The model potentially has possibility that it generates similar texts in the training data, texts not to be true, or biased texts. Use of the model is at your sole risk. Colorful Scoop makes no warranty or guarantee of any outputs from the model. Colorful Scoop is not liable for any trouble, loss, or damage arising from the model output.
**Author:** Colorful Scoop
|
Unbabel/gec-t5_small | Unbabel | 2021-09-27T11:27:48Z | 287 | 23 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"grammatical error correction",
"text2text",
"en",
"dataset:clang-8",
"dataset:conll-14",
"dataset:conll-13",
"arxiv:2106.03830",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
language:
- en
tags:
- grammatical error correction
- text2text
- t5
license: apache-2.0
datasets:
- clang-8
- conll-14
- conll-13
metrics:
- f0.5
---
This model is an implementation of the paper [A Simple Recipe for Multilingual Grammatical Error Correction](https://arxiv.org/pdf/2106.03830.pdf) from Google, where they report state-of-the-art results on the task of Grammatical Error Correction (GEC).
We implement the T5-small version, which has a reported F_0.5 score of 60.70 in the paper.
To effectively use the "Hosted inference API", write "gec: [YOUR SENTENCE HERE]".
In order to use the model, look at the following snippet:
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer
model = T5ForConditionalGeneration.from_pretrained("Unbabel/gec-t5_small")
tokenizer = T5Tokenizer.from_pretrained('t5-small')
sentence = "I like to swimming"
tokenized_sentence = tokenizer('gec: ' + sentence, max_length=128, truncation=True, padding='max_length', return_tensors='pt')
corrected_sentence = tokenizer.decode(
    model.generate(
        input_ids=tokenized_sentence.input_ids,
        attention_mask=tokenized_sentence.attention_mask,
        max_length=128,
        num_beams=5,
        early_stopping=True,
    )[0],
    skip_special_tokens=True,
    clean_up_tokenization_spaces=True
)
print(corrected_sentence) # -> I like swimming.
``` |
vppvgit/BiblItBERT-1 | vppvgit | 2021-09-27T09:40:47Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
tags:
- generated_from_trainer
datasets:
- null
model-index:
- name: BiblItBERT-1
results:
- task:
name: Masked Language Modeling
type: fill-mask
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BiblItBERT-1
This model is a fine-tuned version of [vppvgit/BiblItBERT](https://huggingface.co/vppvgit/BiblItBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7775
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 1.5764 | 1.0 | 16528 | 1.5214 |
| 1.4572 | 2.0 | 33056 | 1.4201 |
| 1.3787 | 3.0 | 49584 | 1.3728 |
| 1.3451 | 4.0 | 66112 | 1.3245 |
| 1.3066 | 5.0 | 82640 | 1.2614 |
| 1.2447 | 6.0 | 99168 | 1.2333 |
| 1.2172 | 7.0 | 115696 | 1.2149 |
| 1.2079 | 8.0 | 132224 | 1.1853 |
| 1.2167 | 9.0 | 148752 | 1.1586 |
| 1.2056 | 10.0 | 165280 | 1.1503 |
| 1.1307 | 11.0 | 181808 | 1.1224 |
| 1.1689 | 12.0 | 198336 | 1.1074 |
| 1.1007 | 13.0 | 214864 | 1.0924 |
| 1.0901 | 14.0 | 231392 | 1.0659 |
| 1.0667 | 15.0 | 247920 | 1.0650 |
| 1.0434 | 16.0 | 264448 | 1.0362 |
| 1.0333 | 17.0 | 280976 | 1.0250 |
| 1.0342 | 18.0 | 297504 | 1.0198 |
| 1.0059 | 19.0 | 314032 | 0.9950 |
| 0.9719 | 20.0 | 330560 | 0.9836 |
| 0.9863 | 21.0 | 347088 | 0.9873 |
| 0.9781 | 22.0 | 363616 | 0.9724 |
| 0.9369 | 23.0 | 380144 | 0.9599 |
| 0.9578 | 24.0 | 396672 | 0.9557 |
| 0.9253 | 25.0 | 413200 | 0.9400 |
| 0.9441 | 26.0 | 429728 | 0.9222 |
| 0.9138 | 27.0 | 446256 | 0.9140 |
| 0.882 | 28.0 | 462784 | 0.9045 |
| 0.864 | 29.0 | 479312 | 0.8880 |
| 0.8632 | 30.0 | 495840 | 0.9023 |
| 0.8342 | 32.0 | 528896 | 0.8740 |
| 0.8037 | 34.0 | 561952 | 0.8647 |
| 0.8119 | 37.0 | 611536 | 0.8358 |
| 0.8011 | 38.0 | 628064 | 0.8252 |
| 0.786 | 39.0 | 644592 | 0.8228 |
| 0.7697 | 41.0 | 677648 | 0.8138 |
| 0.7485 | 42.0 | 694176 | 0.8104 |
| 0.7689 | 43.0 | 710704 | 0.8018 |
| 0.7401 | 45.0 | 743760 | 0.7957 |
| 0.7031 | 47.0 | 776816 | 0.7726 |
| 0.7578 | 48.0 | 793344 | 0.7864 |
| 0.7298 | 49.0 | 809872 | 0.7775 |
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
microsoft/layoutlm-base-cased | microsoft | 2021-09-27T05:55:31Z | 53,140 | 17 | transformers | [
"transformers",
"pytorch",
"layoutlm",
"arxiv:1912.13318",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | # LayoutLM
**Multimodal (text + layout/format + image) pre-training for document AI**
[Microsoft Document AI](https://www.microsoft.com/en-us/research/project/document-ai/) | [GitHub](https://aka.ms/layoutlm)
## Model description
LayoutLM is a simple but effective pre-training method of text and layout for document image understanding and information extraction tasks, such as form understanding and receipt understanding. LayoutLM achieves SOTA results on multiple datasets. For more details, please refer to our paper:
[LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318)
Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou, [KDD 2020](https://www.kdd.org/kdd2020/accepted-papers)
## Different Tokenizer
Note that LayoutLM-Cased requires a different tokenizer, based on RobertaTokenizer. You can
initialize it as follows:
~~~
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('microsoft/layoutlm-base-cased')
~~~
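Once initialized, the tokenizer can be used like any other Hugging Face tokenizer (an illustrative example with placeholder text):
~~~
encoding = tokenizer("Hello world", return_tensors="pt")
print(encoding.input_ids)
~~~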
## Citation
If you find LayoutLM useful in your research, please cite the following paper:
``` latex
@misc{xu2019layoutlm,
title={LayoutLM: Pre-training of Text and Layout for Document Image Understanding},
author={Yiheng Xu and Minghao Li and Lei Cui and Shaohan Huang and Furu Wei and Ming Zhou},
year={2019},
eprint={1912.13318},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
huggingtweets/cptpete-tweetwhelan | huggingtweets | 2021-09-27T05:42:37Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/cptpete-tweetwhelan/1632721354188/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1272386649134002183/paocjwdV_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/2757772202/4cc42af7c05cb9738c1794978c54999a_400x400.jpeg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Pete Whelan & Pete Whelan</div>
<div style="text-align: center; font-size: 14px;">@cptpete-tweetwhelan</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Pete Whelan & Pete Whelan.
| Data | Pete Whelan | Pete Whelan |
| --- | --- | --- |
| Tweets downloaded | 62 | 128 |
| Retweets | 10 | 8 |
| Short tweets | 3 | 11 |
| Tweets kept | 49 | 109 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/314r1lav/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cptpete-tweetwhelan's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2llpl54p) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2llpl54p/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/cptpete-tweetwhelan')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
nateraw/audio-test | nateraw | 2021-09-27T03:45:48Z | 0 | 0 | generic | [
"generic",
"audio-to-audio",
"region:us"
] | audio-to-audio | 2022-03-02T23:29:05Z | ---
tags:
- audio-to-audio
library_name: generic
---
|
nateraw/timm-resnet18-imagenette-160px-5-epochs | nateraw | 2021-09-27T01:16:41Z | 7 | 0 | timm | [
"timm",
"pytorch",
"image-classification",
"region:us"
] | image-classification | 2022-03-02T23:29:05Z | ---
tags:
- image-classification
- timm
library_tag: timm
---
# Model card for timm-resnet18-imagenette-160px-5-epochs |
r3cdhummingbird/DialoGPT-medium-joshua | r3cdhummingbird | 2021-09-26T15:01:25Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
thumbnail: https://raw.githubusercontent.com/RuolinZheng08/twewy-discord-chatbot/main/gif-demo/icon.png
tags:
- conversational
license: mit
---
# DialoGPT Trained on the Speech of a Game Character
This is an instance of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) trained on a game character, Joshua from [The World Ends With You](https://en.wikipedia.org/wiki/The_World_Ends_with_You). The data comes from [a Kaggle game script dataset](https://www.kaggle.com/ruolinzheng/twewy-game-script).
I built a Discord AI chatbot based on this model. [Check out my GitHub repo.](https://github.com/T3879/Joshua-Bot_Model/tree/main/twewy-discord-chatbot-main)
Chat with the model:
```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("r3cdhummingbird/DialoGPT-medium-joshua")
model = AutoModelWithLMHead.from_pretrained("r3cdhummingbird/DialoGPT-medium-joshua")
# Let's chat for 4 lines
for step in range(4):
    # encode the new user input, add the eos_token and return a tensor in Pytorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # print(new_user_input_ids)
    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
    # generate a response while limiting the total chat history to 200 tokens
    chat_history_ids = model.generate(
        bot_input_ids, max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=3,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8
    )
    # pretty print last output tokens from bot
    print("JoshuaBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
``` |
huggingtweets/aly__dixon-haleyosomething-svpino | huggingtweets | 2021-09-26T12:49:07Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/aly__dixon-haleyosomething-svpino/1632660543535/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1416541994952937474/yi5cJxnq_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1368667185879584770/pKNxJut-_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1393327649318076417/cQWDVv-q_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">haley o'shaughnessy & Santiago & Aly Dixon</div>
<div style="text-align: center; font-size: 14px;">@aly__dixon-haleyosomething-svpino</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from haley o'shaughnessy & Santiago & Aly Dixon.
| Data | haley o'shaughnessy | Santiago | Aly Dixon |
| --- | --- | --- | --- |
| Tweets downloaded | 3241 | 3250 | 3003 |
| Retweets | 430 | 7 | 426 |
| Short tweets | 460 | 316 | 195 |
| Tweets kept | 2351 | 2927 | 2382 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1mt8xsda/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @aly__dixon-haleyosomething-svpino's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/31g4nsgq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/31g4nsgq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/aly__dixon-haleyosomething-svpino')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Davlan/mbart50-large-yor-eng-mt | Davlan | 2021-09-26T12:40:29Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"arxiv:2103.08647",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:04Z |
---
language:
- yo
- en
datasets:
- JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
---
# mbart50-large-yor-eng-mt
## Model description
**mbart50-large-yor-eng-mt** is a **machine translation** model from Yorùbá language to English language based on a fine-tuned facebook/mbart-large-50 model. It establishes a **strong baseline** for automatically translating texts from Yorùbá to English.
Specifically, this model is an *mbart-large-50* model that was fine-tuned on the JW300 Yorùbá corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt). Because the pre-trained model does not support Yorùbá out of the box, Swahili (sw_KE) was used as a stand-in language tag during training; you therefore need to use sw_KE as the source language code when evaluating or running the model.
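#### How to use
As a minimal usage sketch (not an official snippet from the model author), the model can be loaded with the 🤗 Transformers MBart-50 classes; the Yorùbá example sentence is purely illustrative and sw_KE stands in for Yorùbá as explained above:
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model = MBartForConditionalGeneration.from_pretrained("Davlan/mbart50-large-yor-eng-mt")
tokenizer = MBart50TokenizerFast.from_pretrained("Davlan/mbart50-large-yor-eng-mt")

# sw_KE is used as a stand-in source language code for Yorùbá
tokenizer.src_lang = "sw_KE"
inputs = tokenizer("Ṣé o ti jẹun?", return_tensors="pt")  # illustrative Yorùbá input

# force English (en_XX) as the target language
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```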
#### Limitations and bias
This model is limited by its training dataset. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on the JW300 corpus and the [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset.
## Training procedure
This model was trained on NVIDIA V100 GPU
## Eval results on Test set (BLEU score)
Fine-tuning mbart50-large achieves **15.88 BLEU** on [Menyo-20k test set](https://arxiv.org/abs/2103.08647) while mt5-base achieves 15.57
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/mbart50-large-eng-yor-mt | Davlan | 2021-09-26T11:57:50Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"arxiv:2103.08647",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:04Z |
---
language:
- yo
- en
datasets:
- JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
---
# mbart50-large-eng-yor-mt
## Model description
**mbart50-large-eng-yor-mt** is a **machine translation** model from English language to Yorùbá language based on a fine-tuned facebook/mbart-large-50 model. It establishes a **strong baseline** for automatically translating texts from English to Yorùbá.
Specifically, this model is an *mbart-large-50* model that was fine-tuned on the JW300 Yorùbá corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt). Because the pre-trained model does not support Yorùbá out of the box, Swahili (sw_KE) was used as a stand-in language tag during training; you therefore need to use sw_KE as the target language code when evaluating or running the model.
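#### How to use
A minimal usage sketch (not an official snippet from the model author), mirroring the Yorùbá-to-English model but with English as the source and sw_KE standing in for Yorùbá as the target; the example sentence is illustrative:
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model = MBartForConditionalGeneration.from_pretrained("Davlan/mbart50-large-eng-yor-mt")
tokenizer = MBart50TokenizerFast.from_pretrained("Davlan/mbart50-large-eng-yor-mt")

tokenizer.src_lang = "en_XX"
inputs = tokenizer("Where are you going tomorrow?", return_tensors="pt")  # illustrative English input

# sw_KE stands in for Yorùbá as the target language code
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["sw_KE"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```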
#### Limitations and bias
This model is limited by its training dataset. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on the JW300 corpus and the [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset.
## Training procedure
This model was trained on NVIDIA V100 GPU
## Eval results on Test set (BLEU score)
Fine-tuning mbart50-large achieves **13.39 BLEU** on the [Menyo-20k test set](https://arxiv.org/abs/2103.08647) while mt5-base achieves 9.82
### BibTeX entry and citation info
By David Adelani
```
```
|
kloon99/KML_Software_License_v1 | kloon99 | 2021-09-26T10:44:14Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | {'C0': 'audit_rights',
'C1': 'licensee_indemnity',
'C2': 'licensor_indemnity',
'C3': 'license_grant',
'C4': 'eula_others',
'C5': 'licensee_infringement_indemnity',
'C6': 'licensor_exemption_liability',
'C7': 'licensor_limit_liabilty',
'C8': 'software_warranty'} |
suwani/try_connll-finetuned-ner | suwani | 2021-09-26T02:54:59Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: try_connll-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9283102493074792
- name: Recall
type: recall
value: 0.9372413021590782
- name: F1
type: f1
value: 0.9327543976842575
- name: Accuracy
type: accuracy
value: 0.9840818466328817
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# try_connll-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0596
- Precision: 0.9283
- Recall: 0.9372
- F1: 0.9328
- Accuracy: 0.9841
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2383 | 1.0 | 878 | 0.0691 | 0.9139 | 0.9239 | 0.9189 | 0.9810 |
| 0.0497 | 2.0 | 1756 | 0.0607 | 0.9200 | 0.9343 | 0.9271 | 0.9833 |
| 0.0303 | 3.0 | 2634 | 0.0596 | 0.9283 | 0.9372 | 0.9328 | 0.9841 |
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
huggingtweets/veganseltzer | huggingtweets | 2021-09-25T22:38:44Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/veganseltzer/1632609483096/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1315429459663745024/S9mAz-Cs_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Senior beer disposal agent</div>
<div style="text-align: center; font-size: 14px;">@veganseltzer</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Senior beer disposal agent.
| Data | Senior beer disposal agent |
| --- | --- |
| Tweets downloaded | 1248 |
| Retweets | 477 |
| Short tweets | 108 |
| Tweets kept | 663 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/18bbz1me/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @veganseltzer's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/32xde3yh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/32xde3yh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/veganseltzer')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
emil2000/dialogpt-for-french-language | emil2000 | 2021-09-25T21:50:35Z | 62 | 1 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"fr",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language:
- fr
tags:
- fr
- gpt2
---
This model aims at being a French conversational agent. It is a fine-tuned version of DialoGPT for the French language. The training dataset gathers 36k conversations extracted from books, movies, interviews and dialogues for learning French.
More details about the model can be found [there](https://github.com/emil2000dza/DialoGPT-fine-tuned-for-french-language)
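A minimal chat sketch (assumed usage following the standard DialoGPT pattern, not taken from the linked repository; the French prompt is illustrative):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("emil2000/dialogpt-for-french-language")
model = AutoModelForCausalLM.from_pretrained("emil2000/dialogpt-for-french-language")

# encode a French user utterance and let the model generate a reply
input_ids = tokenizer.encode("Bonjour, comment vas-tu ?" + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```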
|
Hate-speech-CNERG/deoffxlmr-mono-malyalam | Hate-speech-CNERG | 2021-09-25T14:01:42Z | 13 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"ml",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
language: ml
license: apache-2.0
---
This model is used to detect **Offensive Content** in **code-mixed Malayalam**. The mono in the name refers to the monolingual setting, where the model is trained using only Malayalam (pure and code-mixed) data. The weights are initialized from pretrained XLM-Roberta-Base and pretrained using Masked Language Modelling on the target dataset before fine-tuning using Cross-Entropy Loss.
This model is the best of multiple models trained for the **EACL 2021 Shared Task on Offensive Language Identification in Dravidian Languages**. A genetic-algorithm-based ensemble of test predictions obtained the highest weighted F1 score on the leaderboard (weighted F1 score on the held-out test set: this model - 0.97, ensemble - 0.97)
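As a rough usage sketch (the exact label names come from the model's config and are not listed here), the checkpoint can be loaded as a standard sequence classifier:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Hate-speech-CNERG/deoffxlmr-mono-malyalam",
)
# illustrative code-mixed Malayalam comment
print(classifier("ithu oru nalla padam aanu"))
```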
### For more details about our paper
Debjoy Saha, Naman Paharia, Debajit Chakraborty, Punyajoy Saha, Animesh Mukherjee. "[Hate-Alert@DravidianLangTech-EACL2021: Ensembling strategies for Transformer-based Offensive language Detection](https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38/)".
***Please cite our paper in any published work that uses any of these resources.***
~~~
@inproceedings{saha-etal-2021-hate,
title = "Hate-Alert@{D}ravidian{L}ang{T}ech-{EACL}2021: Ensembling strategies for Transformer-based Offensive language Detection",
author = "Saha, Debjoy and Paharia, Naman and Chakraborty, Debajit and Saha, Punyajoy and Mukherjee, Animesh",
booktitle = "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages",
month = apr,
year = "2021",
address = "Kyiv",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38",
pages = "270--276",
abstract = "Social media often acts as breeding grounds for different forms of offensive content. For low resource languages like Tamil, the situation is more complex due to the poor performance of multilingual or language-specific models and lack of proper benchmark datasets. Based on this shared task {``}Offensive Language Identification in Dravidian Languages{''} at EACL 2021; we present an exhaustive exploration of different transformer models, We also provide a genetic algorithm technique for ensembling different models. Our ensembled models trained separately for each language secured the first position in Tamil, the second position in Kannada, and the first position in Malayalam sub-tasks. The models and codes are provided.",
}
~~~ |
Hate-speech-CNERG/dehatebert-mono-spanish | Hate-speech-CNERG | 2021-09-25T14:00:12Z | 136 | 8 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"es",
"arxiv:2004.06465",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
language: es
license: apache-2.0
---
This model is used for detecting **hate speech** in the **Spanish language**. The mono in the name refers to the monolingual setting, where the model is trained using only Spanish-language data. It is fine-tuned on the multilingual BERT model.
The model is trained with different learning rates and the best validation score achieved is 0.740287 for a learning rate of 3e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)
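A minimal inference sketch (assumed usage; the label order and names should be checked against the model's config):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Hate-speech-CNERG/dehatebert-mono-spanish")
model = AutoModelForSequenceClassification.from_pretrained("Hate-speech-CNERG/dehatebert-mono-spanish")

inputs = tokenizer("Texto de ejemplo en español", return_tensors="pt")  # illustrative input
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)  # probabilities over the model's hate / non-hate labels
```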
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
|
Hate-speech-CNERG/deoffxlmr-mono-tamil | Hate-speech-CNERG | 2021-09-25T13:59:19Z | 166 | 1 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"ta",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
language: ta
license: apache-2.0
---
This model is used to detect **Offensive Content** in **code-mixed Tamil**. The mono in the name refers to the monolingual setting, where the model is trained using only Tamil (pure and code-mixed) data. The weights are initialized from pretrained XLM-Roberta-Base and pretrained using Masked Language Modelling on the target dataset before fine-tuning using Cross-Entropy Loss.
This model is the best of multiple models trained for the **EACL 2021 Shared Task on Offensive Language Identification in Dravidian Languages**. A genetic-algorithm-based ensemble of test predictions obtained the highest weighted F1 score on the leaderboard (weighted F1 score on the held-out test set: this model - 0.76, ensemble - 0.78)
### For more details about our paper
Debjoy Saha, Naman Paharia, Debajit Chakraborty, Punyajoy Saha, Animesh Mukherjee. "[Hate-Alert@DravidianLangTech-EACL2021: Ensembling strategies for Transformer-based Offensive language Detection](https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38/)".
***Please cite our paper in any published work that uses any of these resources.***
~~~
@inproceedings{saha-etal-2021-hate,
title = "Hate-Alert@{D}ravidian{L}ang{T}ech-{EACL}2021: Ensembling strategies for Transformer-based Offensive language Detection",
author = "Saha, Debjoy and Paharia, Naman and Chakraborty, Debajit and Saha, Punyajoy and Mukherjee, Animesh",
booktitle = "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages",
month = apr,
year = "2021",
address = "Kyiv",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38",
pages = "270--276",
abstract = "Social media often acts as breeding grounds for different forms of offensive content. For low resource languages like Tamil, the situation is more complex due to the poor performance of multilingual or language-specific models and lack of proper benchmark datasets. Based on this shared task {``}Offensive Language Identification in Dravidian Languages{''} at EACL 2021; we present an exhaustive exploration of different transformer models, We also provide a genetic algorithm technique for ensembling different models. Our ensembled models trained separately for each language secured the first position in Tamil, the second position in Kannada, and the first position in Malayalam sub-tasks. The models and codes are provided.",
}
~~~ |
Hate-speech-CNERG/dehatebert-mono-italian | Hate-speech-CNERG | 2021-09-25T13:56:50Z | 35 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"it",
"arxiv:2004.06465",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
language: it
license: apache-2.0
---
This model is used for detecting **hate speech** in the **Italian language**. The mono in the name refers to the monolingual setting, where the model is trained using only Italian-language data. It is fine-tuned on the multilingual BERT model.
The model is trained with different learning rates and the best validation score achieved is 0.837288 for a learning rate of 3e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
|
Hate-speech-CNERG/dehatebert-mono-arabic | Hate-speech-CNERG | 2021-09-25T13:54:53Z | 2,563 | 2 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"ar",
"arxiv:2004.06465",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
language: ar
license: apache-2.0
---
This model is used for detecting **hate speech** in the **Arabic language**. The mono in the name refers to the monolingual setting, where the model is trained using only Arabic-language data. It is fine-tuned on the multilingual BERT model.
The model is trained with different learning rates and the best validation score achieved is 0.877609 for a learning rate of 2e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
|
Hate-speech-CNERG/dehatebert-mono-french | Hate-speech-CNERG | 2021-09-25T13:51:14Z | 158 | 4 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"fr",
"arxiv:2004.06465",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
language: fr
license: apache-2.0
---
This model is used for detecting **hate speech** in the **French language**. The mono in the name refers to the monolingual setting, where the model is trained using only French-language data. It is fine-tuned on the multilingual BERT model.
The model is trained with different learning rates and the best validation score achieved is 0.692094 for a learning rate of 3e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
|
huggingtweets/sixjay__ | huggingtweets | 2021-09-25T11:43:57Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/sixjay__/1632570148333/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1434204311505055754/Ozub-Lmd_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">joj</div>
<div style="text-align: center; font-size: 14px;">@sixjay__</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from joj.
| Data | joj |
| --- | --- |
| Tweets downloaded | 2494 |
| Retweets | 508 |
| Short tweets | 429 |
| Tweets kept | 1557 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/wcyvex9s/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sixjay__'s tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/6yf1o7q5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/6yf1o7q5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/sixjay__')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
pritoms/distilgpt2-finetuned-irll2 | pritoms | 2021-09-25T11:34:01Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
model-index:
- name: distilgpt2-finetuned-irll2
results:
- task:
name: Causal Language Modeling
type: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-irll2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 12 | 4.2919 |
| No log | 2.0 | 24 | 4.2158 |
| No log | 3.0 | 36 | 4.1925 |
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
huggingtweets/postgohst | huggingtweets | 2021-09-24T22:10:56Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/postgohst/1632521452929/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1151812292889047040/BHktVZLN_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Connoise: FALSE GOD SYSTEMS @ONLINE🖤</div>
<div style="text-align: center; font-size: 14px;">@postgohst</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Connoise: FALSE GOD SYSTEMS @ONLINE🖤.
| Data | Connoise: FALSE GOD SYSTEMS @ONLINE🖤 |
| --- | --- |
| Tweets downloaded | 3191 |
| Retweets | 319 |
| Short tweets | 387 |
| Tweets kept | 2485 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1shgunpl/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @postgohst's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3dybkr8z) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3dybkr8z/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/postgohst')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
superb/superb-test-org__test-submission-with-example-expert__d609b3c32044e50e3d5e9067bd97af1b42f04b0e | superb | 2021-09-24T19:49:31Z | 0 | 0 | null | [
"tensorboard",
"library:s3prl",
"benchmark:superb",
"type:model",
"dataset:superb",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
datasets:
- superb
tags:
- library:s3prl
- benchmark:superb
- type:model
---
# Fine-tuned s3prl model
Upstream Model: superb-test-org/test-submission-with-example-expert
## Model description
[More information needed]
## Intended uses & limitations
[More information needed]
## How to use
[More information needed]
## Limitations and bias
[More information needed]
## Training data
[More information needed]
## Training procedure
[More information needed]
## Evaluation results
[More information needed]
|
superb/superb-test-org__test-submission-with-weights__2323d47e588aa02648ac1770568eeaa203431535 | superb | 2021-09-24T17:00:43Z | 0 | 1 | null | [
"tensorboard",
"library:s3prl",
"benchmark:superb",
"type:model",
"dataset:superb",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
datasets:
- superb
tags:
- library:s3prl
- benchmark:superb
- type:model
---
# Fine-tuned s3prl model
Upstream Model: superb-test-org/test-submission-with-weights
## Model description
[More information needed]
## Intended uses & limitations
[More information needed]
## How to use
[More information needed]
## Limitations and bias
[More information needed]
## Training data
[More information needed]
## Training procedure
[More information needed]
## Evaluation results
[More information needed]
|
huggingtweets/elonmusk-lynaldencontact-naval | huggingtweets | 2021-09-24T13:58:13Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/elonmusk-lynaldencontact-naval/1632491889977/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1438003019887611905/MnOz3sOj_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1256841238298292232/ycqwaMI2_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1366094142405812234/LCWXc4QQ_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & Naval & Lyn Alden</div>
<div style="text-align: center; font-size: 14px;">@elonmusk-lynaldencontact-naval</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Elon Musk & Naval & Lyn Alden.
| Data | Elon Musk | Naval | Lyn Alden |
| --- | --- | --- | --- |
| Tweets downloaded | 300 | 3246 | 3243 |
| Retweets | 24 | 156 | 260 |
| Short tweets | 79 | 643 | 192 |
| Tweets kept | 197 | 2447 | 2791 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1ec58t4s/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @elonmusk-lynaldencontact-naval's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ce3knoeb) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ce3knoeb/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/elonmusk-lynaldencontact-naval')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Adi2K/Priv-Consent | Adi2K | 2021-09-24T12:53:04Z | 16 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"eng",
"dataset:Adi2K/autonlp-data-Priv-Consent",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
language: eng
widget:
- text: "You can control cookies and tracking tools. To learn how to manage how we - and our vendors - use cookies and other tracking tools, please click here."
datasets:
- Adi2K/autonlp-data-Priv-Consent
---
# Model
- Problem type: Binary Classification
- Model ID: 12592372
## Validation Metrics
- Loss: 0.23033875226974487
- Accuracy: 0.9138655462184874
- Precision: 0.9087136929460581
- Recall: 0.9201680672268907
- AUC: 0.9690346726926065
- F1: 0.9144050104384133
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Adi2K/autonlp-Priv-Consent-12592372
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Adi2K/autonlp-Priv-Consent-12592372", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Adi2K/autonlp-Priv-Consent-12592372", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
abdouaziiz/soraberta | abdouaziiz | 2021-09-24T11:31:32Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"language-model",
"wo",
"wolof",
"arxiv:1907.11692",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
language: wo
tags:
- roberta
- language-model
- wo
- wolof
---
# Soraberta: Unsupervised Language Model Pre-training for Wolof
**Soraberta** is a RoBERTa-base model pretrained on the Wolof language. RoBERTa was introduced in [this paper](https://arxiv.org/abs/1907.11692)
## Soraberta models
| Model name | Number of layers | Attention Heads | Embedding Dimension | Total Parameters |
| :------: | :---: | :---: | :---: | :---: |
| `soraberta-base` | 6 | 12 | 514 | 83 M |
## Using Soraberta with Hugging Face's Transformers
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='abdouaziiz/soraberta')
>>> unmasker("juroom naari jullit man nanoo boole jend aw nag walla <mask>.")
[{'sequence': 'juroom naari jullit man nanoo boole jend aw nag walla gileem.',
'score': 0.9783930778503418,
'token': 4621,
'token_str': ' gileem'},
{'sequence': 'juroom naari jullit man nanoo boole jend aw nag walla jend.',
'score': 0.009271537885069847,
'token': 2155,
'token_str': ' jend'},
{'sequence': 'juroom naari jullit man nanoo boole jend aw nag walla aw.',
'score': 0.0027585660573095083,
'token': 704,
'token_str': ' aw'},
{'sequence': 'juroom naari jullit man nanoo boole jend aw nag walla pel.',
'score': 0.001120452769100666,
'token': 1171,
'token_str': ' pel'},
{'sequence': 'juroom naari jullit man nanoo boole jend aw nag walla juum.',
'score': 0.0005133090307936072,
'token': 5820,
'token_str': ' juum'}]
```
## Training data
The data sources are [Bible OT](http://biblewolof.com/) , [WOLOF-ONLINE](http://www.wolof-online.com/)
## Contact
Please contact [email protected] for any question, feedback or request. |
bionlp/bluebert_pubmed_mimic_uncased_L-24_H-1024_A-16 | bionlp | 2021-09-24T07:46:34Z | 57 | 4 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"bluebert",
"en",
"dataset:PubMed",
"dataset:MIMIC-III",
"license:cc0-1.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language:
- en
tags:
- bert
- bluebert
license: cc0-1.0
datasets:
- PubMed
- MIMIC-III
---
# BlueBert-Base, Uncased, PubMed and MIMIC-III
## Model description
A BERT model pre-trained on PubMed abstracts and clinical notes ([MIMIC-III](https://mimic.physionet.org/)).
## Intended uses & limitations
#### How to use
Please see https://github.com/ncbi-nlp/bluebert
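As a minimal sketch (an assumption about typical usage, not the official fine-tuning setup from the repository above), the checkpoint can be loaded with 🤗 Transformers to extract contextual embeddings:
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bionlp/bluebert_pubmed_mimic_uncased_L-24_H-1024_A-16")
model = AutoModel.from_pretrained("bionlp/bluebert_pubmed_mimic_uncased_L-24_H-1024_A-16")

# text should be preprocessed (lowercased, tokenized) as described under Training procedure
inputs = tokenizer("the patient was started on metformin for type 2 diabetes .", return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # shape: (1, seq_len, 1024)
print(hidden_states.shape)
```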
## Training data
We provide [preprocessed PubMed texts](https://ftp.ncbi.nlm.nih.gov/pub/lu/Suppl/NCBI-BERT/pubmed_uncased_sentence_nltk.txt.tar.gz) that were used to pre-train the BlueBERT models.
The corpus contains ~4000M words extracted from the [PubMed ASCII code version](https://www.ncbi.nlm.nih.gov/research/bionlp/APIs/BioC-PubMed/).
Pre-trained model: https://huggingface.co/bert-large-uncased
## Training procedure
* lowercasing the text
* removing special chars `\x00`-`\x7F`
* tokenizing the text using the [NLTK Treebank tokenizer](https://www.nltk.org/_modules/nltk/tokenize/treebank.html)
Below is a code snippet for more details.
```python
import re
from nltk.tokenize import TreebankWordTokenizer

# 'value' is a raw text string from the corpus
value = value.lower()
value = re.sub(r'[\r\n]+', ' ', value)
value = re.sub(r'[^\x00-\x7F]+', ' ', value)
tokenized = TreebankWordTokenizer().tokenize(value)
sentence = ' '.join(tokenized)
sentence = re.sub(r"\s's\b", "'s", sentence)
```
### BibTeX entry and citation info
```bibtex
@InProceedings{peng2019transfer,
author = {Yifan Peng and Shankai Yan and Zhiyong Lu},
title = {Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets},
booktitle = {Proceedings of the 2019 Workshop on Biomedical Natural Language Processing (BioNLP 2019)},
year = {2019},
pages = {58--65},
}
```
### Acknowledgments
This work was supported by the Intramural Research Programs of the National Institutes of Health, National Library of
Medicine and Clinical Center. This work was supported by the National Library of Medicine of the National Institutes of Health under award number 4R00LM013001-01.
We are also grateful to the authors of BERT and ELMo to make the data and codes publicly available.
We would like to thank Dr Sun Kim for processing the PubMed texts.
### Disclaimer
This tool shows the results of research conducted in the Computational Biology Branch, NCBI. The information produced
on this website is not intended for direct diagnostic use or medical decision-making without review and oversight
by a clinical professional. Individuals should not change their health behavior solely on the basis of information
produced on this website. NIH does not independently verify the validity or utility of the information produced
by this tool. If you have questions about the information produced on this website, please see a health care
professional. More information about NCBI's disclaimer policy is available.
|
bionlp/bluebert_pubmed_uncased_L-12_H-768_A-12 | bionlp | 2021-09-24T07:45:33Z | 3,172 | 5 | transformers | [
"transformers",
"pytorch",
"bluebert",
"en",
"dataset:pubmed",
"license:cc0-1.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language:
- en
tags:
- bluebert
license: cc0-1.0
datasets:
- pubmed
---
# BlueBert-Base, Uncased, PubMed
## Model description
A BERT model pre-trained on PubMed abstracts
## Intended uses & limitations
#### How to use
Please see https://github.com/ncbi-nlp/bluebert
## Training data
We provide [preprocessed PubMed texts](https://ftp.ncbi.nlm.nih.gov/pub/lu/Suppl/NCBI-BERT/pubmed_uncased_sentence_nltk.txt.tar.gz) that were used to pre-train the BlueBERT models.
The corpus contains ~4000M words extracted from the [PubMed ASCII code version](https://www.ncbi.nlm.nih.gov/research/bionlp/APIs/BioC-PubMed/).
Pre-trained model: https://huggingface.co/bert-base-uncased
## Training procedure
* lowercasing the text
* removing special chars `\x00`-`\x7F`
* tokenizing the text using the [NLTK Treebank tokenizer](https://www.nltk.org/_modules/nltk/tokenize/treebank.html)
Below is a code snippet for more details.
```python
import re
from nltk.tokenize import TreebankWordTokenizer

# 'value' is a raw text string from the corpus
value = value.lower()
value = re.sub(r'[\r\n]+', ' ', value)
value = re.sub(r'[^\x00-\x7F]+', ' ', value)
tokenized = TreebankWordTokenizer().tokenize(value)
sentence = ' '.join(tokenized)
sentence = re.sub(r"\s's\b", "'s", sentence)
```
### BibTeX entry and citation info
```bibtex
@InProceedings{peng2019transfer,
author = {Yifan Peng and Shankai Yan and Zhiyong Lu},
title = {Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets},
booktitle = {Proceedings of the 2019 Workshop on Biomedical Natural Language Processing (BioNLP 2019)},
year = {2019},
pages = {58--65},
}
```
|
hakurei/gpt-j-random-tinier | hakurei | 2021-09-24T06:21:52Z | 16 | 2 | transformers | [
"transformers",
"pytorch",
"gptj",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | This model has been initialized with random values. It is supposed to be used for the purpose of debugging. |
huggingartists/logic | huggingartists | 2021-09-23T20:36:21Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/logic",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- huggingartists/logic
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/0f975524d106026e89de983689d007c4.900x900x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Logic</div>
<a href="https://genius.com/artists/logic">
<div style="text-align: center; font-size: 14px;">@logic</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Logic.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/logic).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/logic")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/2rp89nd3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Logic's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/25a9752b) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/25a9752b/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/logic')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/logic")
model = AutoModelWithLMHead.from_pretrained("huggingartists/logic")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
toloka/t5-large-for-text-aggregation | toloka | 2021-09-23T16:40:58Z | 16 | 7 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"text aggregation",
"summarization",
"en",
"dataset:toloka/CrowdSpeech",
"arxiv:1910.10683",
"arxiv:2107.01091",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | summarization | 2022-03-02T23:29:05Z | ---
language:
- en
tags:
- text aggregation
- summarization
license: apache-2.0
datasets:
- toloka/CrowdSpeech
metrics:
- wer
---
# T5 Large for Text Aggregation
## Model description
This is a T5 Large fine-tuned for crowdsourced text aggregation tasks. The model takes multiple performers' responses and yields a single aggregated response. This approach was introduced for the first time during [VLDB 2021 Crowd Science Challenge](https://crowdscience.ai/challenges/vldb21) and originally implemented at the second-place competitor's [GitHub](https://github.com/A1exRey/VLDB2021_workshop_t5). The [paper](http://ceur-ws.org/Vol-2932/short2.pdf) describing this model was presented at the [2nd Crowd Science Workshop](https://crowdscience.ai/conference_events/vldb21).
## How to use
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, AutoConfig
mname = "toloka/t5-large-for-text-aggregation"
tokenizer = AutoTokenizer.from_pretrained(mname)
model = AutoModelForSeq2SeqLM.from_pretrained(mname)
input = "samplee text | sampl text | sample textt"
input_ids = tokenizer.encode(input, return_tensors="pt")
outputs = model.generate(input_ids)
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(decoded) # sample text
```
## Training data
Pretrained weights were taken from the [original](https://huggingface.co/t5-large) T5 Large model by Google. For more details on the T5 architecture and training procedure see https://arxiv.org/abs/1910.10683
Model was fine-tuned on `train-clean`, `dev-clean` and `dev-other` parts of the [CrowdSpeech](https://huggingface.co/datasets/toloka/CrowdSpeech) dataset that was introduced in [our paper](https://openreview.net/forum?id=3_hgF1NAXU7&referrer=%5BAuthor%20Console%5D(%2Fgroup%3Fid%3DNeurIPS.cc%2F2021%2FTrack%2FDatasets_and_Benchmarks%2FRound1%2FAuthors%23your-submissions).
## Training procedure
The model was fine-tuned for eight epochs directly following the HuggingFace summarization training [example](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization).
## Eval results
Dataset | Split | WER
-----------|------------|----------
CrowdSpeech| test-clean | 4.99
CrowdSpeech| test-other | 10.61
### BibTeX entry and citation info
```bibtex
@inproceedings{Pletenev:21,
author = {Pletenev, Sergey},
title = {{Noisy Text Sequences Aggregation as a Summarization Subtask}},
year = {2021},
booktitle = {Proceedings of the 2nd Crowd Science Workshop: Trust, Ethics, and Excellence in Crowdsourced Data Management at Scale},
pages = {15--20},
address = {Copenhagen, Denmark},
issn = {1613-0073},
url = {http://ceur-ws.org/Vol-2932/short2.pdf},
language = {english},
}
```
```bibtex
@misc{pavlichenko2021vox,
title={Vox Populi, Vox DIY: Benchmark Dataset for Crowdsourced Audio Transcription},
author={Nikita Pavlichenko and Ivan Stelmakh and Dmitry Ustalov},
year={2021},
eprint={2107.01091},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
|
tanmoyio/wav2vec2-large-xlsr-bengali | tanmoyio | 2021-09-23T16:39:27Z | 1,078 | 3 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"dataset:OpenSLR",
"license:cc-by-sa-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language: Bengali
datasets:
- OpenSLR
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: cc-by-sa-4.0
model-index:
- name: XLSR Wav2Vec2 Bengali by Tanmoy Sarkar
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: OpenSLR
type: OpenSLR
args: ben
metrics:
- name: Test WER
type: wer
value: 88.58
---
# Wav2Vec2-Large-XLSR-Bengali
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Bengali using the [Bengali ASR training data set containing ~196K utterances](https://www.openslr.org/53/).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The dataset must be downloaded from [this website](https://www.openslr.org/53/) and preprocessed accordingly. For example, 1,250 test samples have been chosen here.
```python
import pandas as pd
test_dataset = pd.read_csv('utt_spk_text.tsv', sep='\t', header=None)[60000:61250]
test_dataset.columns = ["audio_path", "__", "label"]
test_dataset = test_dataset.drop("__", axis=1)

def add_file_path(text):
    path = "data/" + text[:2] + "/" + text + '.flac'
    return path
test_dataset['audio_path'] = test_dataset['audio_path'].map(lambda x: add_file_path(x))
```
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
processor = Wav2Vec2Processor.from_pretrained("tanmoyio/wav2vec2-large-xlsr-bengali")
model = Wav2Vec2ForCTC.from_pretrained("tanmoyio/wav2vec2-large-xlsr-bengali")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["audio_path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = Dataset.from_pandas(test_dataset)  # convert the DataFrame so that .map is available
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["label"][:2])
```
## Evaluation
The model can be evaluated as follows on the Bengali test data of OpenSLR.
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets import load_metric
import re
chars_to_ignore_regex = '[,?.!;:"“-]'  # punctuation stripped before scoring (adapted from the standard XLSR evaluation template)
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("tanmoyio/wav2vec2-large-xlsr-bengali")
model = Wav2Vec2ForCTC.from_pretrained("tanmoyio/wav2vec2-large-xlsr-bengali")
model.to("cuda")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["label"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["audio_path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run batched inference and collect the predicted transcriptions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 88.58 %
## Training
The script used for training can be found here: [Bengali ASR Fine Tuning Wav2Vec2](https://colab.research.google.com/drive/1Bkc5C_cJV9BeS0FD0MuHyayl8hqcbdRZ?usp=sharing)
|
skt/ko-gpt-trinity-1.2B-v0.5 | skt | 2021-09-23T16:29:25Z | 628 | 43 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"gpt3",
"ko",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: ko
tags:
- gpt3
license: cc-by-nc-sa-4.0
---
# Ko-GPT-Trinity 1.2B (v0.5)
## Model Description
Ko-GPT-Trinity 1.2B is a transformer model designed using SK telecom's replication of the GPT-3 architecture. Ko-GPT-Trinity refers to the class of models, while 1.2B represents the number of parameters of this particular pre-trained model.
### Model date
May 2021
### Model type
Language model
### Model version
1.2 billion parameter model
## Training data
Ko-GPT-Trinity 1.2B was trained on Ko-DAT, a large scale curated dataset created by SK telecom for the purpose of training this model.
## Training procedure
This model was trained on Ko-DAT for 35 billion tokens over 72,000 steps. It was trained as an autoregressive language model, using cross-entropy loss.
## Intended Use and Limitations
The model learns an inner representation of the Korean language that can then be used to extract features useful for downstream tasks. The model excels at generating texts from a prompt, which was the pre-training objective.
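For illustration, prompt-based generation through the standard `transformers` causal-LM interface might look like the sketch below; this assumes the checkpoint resolves via the Auto classes, and the generation settings are arbitrary demo values rather than recommended ones.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "skt/ko-gpt-trinity-1.2B-v0.5"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "대한민국의 수도는"  # "The capital of South Korea is"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(
        input_ids,
        max_length=64,     # arbitrary demo settings
        do_sample=True,
        top_p=0.95,
        temperature=0.8,
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```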
### Limitations and Biases
Ko-GPT-Trinity was trained on Ko-DAT, a dataset known to contain profanity, lewd, politically charged, and otherwise abrasive language. As such, Ko-GPT-Trinity may produce socially unacceptable text. As with all language models, it is hard to predict in advance how Ko-GPT-Trinity will respond to particular prompts and offensive content may occur without warning.
Ko-GPT-Trinity was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, this is an active area of ongoing research. Known limitations include the following:
Predominantly Korean: Ko-GPT-Trinity was trained largely on text in the Korean language, and is best suited for classifying, searching, summarizing, or generating such text. Ko-GPT-Trinity will by default perform worse on inputs that are different from the data distribution it is trained on, including non-Korean languages as well as specific dialects of Korean that are not as well-represented in training data.
Interpretability & predictability: the capacity to interpret or predict how Ko-GPT-Trinity will behave is very limited, a limitation common to most deep learning systems, especially in models of this scale.
High variance on novel inputs: Ko-GPT-Trinity is not necessarily well-calibrated in its predictions on novel inputs. This can be observed in the much higher variance in its performance as compared to that of humans on standard benchmarks.
## Eval results
### Reasoning
| Model and Size | BoolQ | CoPA | WiC |
| ----------------------- | --------- | ---------- | --------- |
| **Ko-GPT-Trinity 1.2B** | **71.77** | **68.66** | **78.73** |
| KoElectra-base | 65.17 | 67.56 | 77.27 |
| KoBERT-base | 55.97 | 62.24 | 77.60 |
## Where to send questions or comments about the model
Please contact [Eric] ([email protected])
|
persiannlp/wikibert-base-parsinlu-entailment | persiannlp | 2021-09-23T16:20:55Z | 19 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"entailment",
"wikibert",
"persian",
"farsi",
"fa",
"multilingual",
"dataset:parsinlu",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- entailment
- wikibert
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- accuracy
---
# Textual Entailment (مدل برای پاسخ به استلزام منطقی)
This is a model for textual entailment problems.
Here is an example of how you can run this model:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import numpy as np
labels = ["entails", "contradicts", "neutral"]
model_name_or_path = "persiannlp/wikibert-base-parsinlu-entailment"
model = AutoModelForSequenceClassification.from_pretrained(model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path,)
def model_predict(text_a, text_b):
features = tokenizer( [(text_a, text_b)], padding="max_length", truncation=True, return_tensors='pt')
output = model(**features)
logits = output[0]
probs = torch.nn.functional.softmax(logits, dim=1).tolist()
idx = np.argmax(np.array(probs))
print(labels[idx], probs)
model_predict(
"این مسابقات بین آوریل و دسامبر در هیپودروم ولیفندی در نزدیکی باکرکی ، ۱۵ کیلومتری (۹ مایل) غرب استانبول برگزار می شود.",
"در ولیفندی هیپودروم، مسابقاتی از آوریل تا دسامبر وجود دارد."
)
model_predict(
"آیا کودکانی وجود دارند که نیاز به سرگرمی دارند؟",
"هیچ کودکی هرگز نمی خواهد سرگرم شود.",
)
model_predict(
"ما به سفرهایی رفته ایم که در نهرهایی شنا کرده ایم",
"علاوه بر استحمام در نهرها ، ما به اسپا ها و سونا ها نیز رفته ایم."
)
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/parsbert-base-parsinlu-entailment | persiannlp | 2021-09-23T16:20:50Z | 20 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"entailment",
"parsbert",
"persian",
"farsi",
"fa",
"multilingual",
"dataset:parsinlu",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- entailment
- parsbert
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- accuracy
---
# Textual Entailment (مدل برای پاسخ به استلزام منطقی)
This is a model for textual entailment problems.
Here is an example of how you can run this model:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import numpy as np
labels = ["entails", "contradicts", "neutral"]
model_name_or_path = "persiannlp/parsbert-base-parsinlu-entailment"
model = AutoModelForSequenceClassification.from_pretrained(model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path,)
def model_predict(text_a, text_b):
features = tokenizer( [(text_a, text_b)], padding="max_length", truncation=True, return_tensors='pt')
output = model(**features)
logits = output[0]
probs = torch.nn.functional.softmax(logits, dim=1).tolist()
idx = np.argmax(np.array(probs))
print(labels[idx], probs)
model_predict(
"این مسابقات بین آوریل و دسامبر در هیپودروم ولیفندی در نزدیکی باکرکی ، ۱۵ کیلومتری (۹ مایل) غرب استانبول برگزار می شود.",
"در ولیفندی هیپودروم، مسابقاتی از آوریل تا دسامبر وجود دارد."
)
model_predict(
"آیا کودکانی وجود دارند که نیاز به سرگرمی دارند؟",
"هیچ کودکی هرگز نمی خواهد سرگرم شود.",
)
model_predict(
"ما به سفرهایی رفته ایم که در نهرهایی شنا کرده ایم",
"علاوه بر استحمام در نهرها ، ما به اسپا ها و سونا ها نیز رفته ایم."
)
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-small-parsinlu-translation_en_fa | persiannlp | 2021-09-23T16:20:48Z | 705 | 3 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"machine-translation",
"persian",
"farsi",
"fa",
"multilingual",
"dataset:parsinlu",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- machine-translation
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- sacrebleu
---
# Machine Translation (ترجمهی ماشینی)
This is an mT5-based model for machine translation (English -> Persian).
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size = "small"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-translation_en_fa"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("Praise be to Allah, the Cherisher and Sustainer of the worlds;")
run_model("shrouds herself in white and walks penitentially disguised as brotherly love through factories and parliaments; offers help, but desires power;")
run_model("He thanked all fellow bloggers and organizations that showed support.")
run_model("Races are held between April and December at the Veliefendi Hippodrome near Bakerky, 15 km (9 miles) west of Istanbul.")
run_model("I want to pursue PhD in Computer Science about social network,what is the open problem in social networks?")
```
which should output:
```
['برای الله، یعنی چرنده و سوزان دنیا، تحسین کنید']
['خودش را در سفید پوسته می کند و به صورت عشق برادرانه']
['او از تمام بلاگرها و سازمان هایی که حمایتشان را نشان می داد']
['در طول ماه آوریل و دسامبر در والی فیودورونا نزدیک بیکر']
['من می خواهم در مورد شبکه اجتماعی تحقیقات علوم کامپیوتری را دن']
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-small-parsinlu-snli-entailment | persiannlp | 2021-09-23T16:20:43Z | 52 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"entailment",
"mt5",
"persian",
"farsi",
"fa",
"multilingual",
"dataset:parsinlu",
"dataset:snli",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- entailment
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
- snli
metrics:
- accuracy
---
# Textual Entailment (مدل برای پاسخ به استلزام منطقی)
This is a model for textual entailment problems.
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size="small"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-snli-entailment"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(premise, hypothesis, **generator_args):
input_ids = tokenizer.encode(f"{premise}<sep>{hypothesis}", return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model(
"این مسابقات بین آوریل و دسامبر در هیپودروم ولیفندی در نزدیکی باکرکی ، ۱۵ کیلومتری (۹ مایل) غرب استانبول برگزار می شود.",
"در ولیفندی هیپودروم، مسابقاتی از آوریل تا دسامبر وجود دارد."
)
run_model(
"آیا کودکانی وجود دارند که نیاز به سرگرمی دارند؟",
"هیچ کودکی هرگز نمی خواهد سرگرم شود.",
)
run_model(
"ما به سفرهایی رفته ایم که در نهرهایی شنا کرده ایم",
"علاوه بر استحمام در نهرها ، ما به اسپا ها و سونا ها نیز رفته ایم."
)
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-small-parsinlu-qqp-query-paraphrasing | persiannlp | 2021-09-23T16:20:38Z | 29 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"query-paraphrasing",
"mt5",
"persian",
"farsi",
"fa",
"multilingual",
"dataset:parsinlu",
"dataset:qqp",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- query-paraphrasing
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
- qqp
metrics:
- accuracy
---
# Detection of Paraphrased Queries (تشخصیص سوالات هممعنی)
This is a model for detection of paraphrased queries.
Here is an example of how you can run this model:
```python
from transformers import MT5Config, MT5ForConditionalGeneration, MT5Tokenizer
model_name = "persiannlp/mt5-small-parsinlu-qqp-query-paraphrasing"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(q1, q2, **generator_args):
input_ids = tokenizer.encode(f"{q1}<sep>{q2}", return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("چه چیزی باعث پوکی استخوان می شود؟", "چه چیزی باعث مقاومت استخوان در برابر ضربه می شود؟")
run_model("من دارم به این فکر میکنم چرا ساعت هفت نمیشه؟", "چرا من ساده فکر میکردم به عشقت پابندی؟")
run_model("دعای کمیل در چه روزهایی خوانده می شود؟", "دعای جوشن کبیر در چه شبی خوانده می شود؟")
run_model("دعای کمیل در چه روزهایی خوانده می شود؟", "دعای جوشن کبیر در چه شبی خوانده می شود؟")
run_model("شناسنامه در چه سالی وارد ایران شد؟", "سیب زمینی در چه سالی وارد ایران شد؟")
run_model("سیب زمینی چه زمانی وارد ایران شد؟", "سیب زمینی در چه سالی وارد ایران شد؟")
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-small-parsinlu-arc-comqa-obqa-multiple-choice | persiannlp | 2021-09-23T16:20:31Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"multiple-choice",
"mt5",
"persian",
"farsi",
"fa",
"multilingual",
"dataset:parsinlu",
"dataset:commonsenseqa",
"dataset:arc",
"dataset:openbookqa",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2022-03-02T23:29:05Z | ---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- multiple-choice
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
- commonsenseqa
- arc
- openbookqa
metrics:
- accuracy
---
# Multiple-Choice Question Answering (مدل برای پاسخ به سوالات چهار جوابی)
This is an mT5-based model for multiple-choice question answering.
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size = "small"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-arc-comqa-obqa-multiple-choice"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("وسیع ترین کشور جهان کدام است؟ <sep> آمریکا <sep> کانادا <sep> روسیه <sep> چین")
run_model("طامع یعنی ؟ <sep> آزمند <sep> خوش شانس <sep> محتاج <sep> مطمئن")
run_model(
"زمینی به ۳۱ قطعه متساوی مفروض شده است و هر روز مساحت آماده شده برای احداث، دو برابر مساحت روز قبل است.اگر پس از (۵ روز) تمام زمین آماده شده باشد، در چه روزی یک قطعه زمین آماده شده <sep> روز اول <sep> روز دوم <sep> روز سوم <sep> هیچکدام")
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-large-parsinlu-translation_en_fa | persiannlp | 2021-09-23T16:20:29Z | 580 | 2 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"machine-translation",
"persian",
"farsi",
"fa",
"multilingual",
"dataset:parsinlu",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- machine-translation
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- sacrebleu
---
# Machine Translation (ترجمهی ماشینی)
This is an mT5-based model for machine translation (English -> Persian).
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size = "large"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-translation_en_fa"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("Praise be to Allah, the Cherisher and Sustainer of the worlds;")
run_model("shrouds herself in white and walks penitentially disguised as brotherly love through factories and parliaments; offers help, but desires power;")
run_model("He thanked all fellow bloggers and organizations that showed support.")
run_model("Races are held between April and December at the Veliefendi Hippodrome near Bakerky, 15 km (9 miles) west of Istanbul.")
run_model("I want to pursue PhD in Computer Science about social network,what is the open problem in social networks?")
```
which should output:
```
['خدا را شکر که آفریننده و نگهدار جهان است.']
['خود را با کفن سفید می پوشد و به شکل برادرانه ای در']
['او از همه ی وبلاگ نویسان و سازمان هایی که از او حمایت کردند']
['مسابقات بین آوریل و دسامبر در فرودگاه والی عبدین نزدیک بی']
['من می خواهم پایان نامه دکتری را در رشته علوم کامپیوتر در']
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-large-parsinlu-opus-translation_fa_en | persiannlp | 2021-09-23T16:20:17Z | 184 | 1 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"machine-translation",
"persian",
"farsi",
"fa",
"multilingual",
"dataset:parsinlu",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- machine-translation
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- sacrebleu
---
# Machine Translation (ترجمهی ماشینی)
This is an mT5-based model for machine translation (Persian -> English).
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size = "large"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-opus-translation_fa_en"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("ستایش خدای را که پروردگار جهانیان است.")
run_model("در هاید پارک کرنر بر گلدانی ایستاده موعظه میکند؛")
run_model("وی از تمامی بلاگرها، سازمانها و افرادی که از وی پشتیبانی کردهاند، تشکر کرد.")
run_model("مشابه سال ۲۰۰۱، تولید آمونیاک بی آب در ایالات متحده در سال ۲۰۰۰ تقریباً ۱۷،۴۰۰،۰۰۰ تن (معادل بدون آب) با مصرف ظاهری ۲۲،۰۰۰،۰۰۰ تن و حدود ۴۶۰۰۰۰۰ با واردات خالص مواجه شد. ")
run_model("می خواهم دکترای علوم کامپیوتر راجع به شبکه های اجتماعی را دنبال کنم، چالش حل نشده در شبکه های اجتماعی چیست؟")
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-large-parsinlu-multiple-choice | persiannlp | 2021-09-23T16:20:14Z | 63 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"multiple-choice",
"mt5",
"persian",
"farsi",
"fa",
"multilingual",
"dataset:parsinlu",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2022-03-02T23:29:05Z | ---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- multiple-choice
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- accuracy
---
# Multiple-Choice Question Answering (مدل برای پاسخ به سوالات چهار جوابی)
This is an mT5-based model for multiple-choice question answering.
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size = "large"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-multiple-choice"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("وسیع ترین کشور جهان کدام است؟ <sep> آمریکا <sep> کانادا <sep> روسیه <sep> چین")
run_model("طامع یعنی ؟ <sep> آزمند <sep> خوش شانس <sep> محتاج <sep> مطمئن")
run_model(
"زمینی به ۳۱ قطعه متساوی مفروض شده است و هر روز مساحت آماده شده برای احداث، دو برابر مساحت روز قبل است.اگر پس از (۵ روز) تمام زمین آماده شده باشد، در چه روزی یک قطعه زمین آماده شده <sep> روز اول <sep> روز دوم <sep> روز سوم <sep> هیچکدام")
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-base-parsinlu-sentiment-analysis | persiannlp | 2021-09-23T16:20:02Z | 94 | 4 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"sentiment",
"sentiment-analysis",
"persian",
"farsi",
"fa",
"multilingual",
"dataset:parsinlu",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- sentiment
- sentiment-analysis
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- accuracy
---
# Sentiment Analysis (آنالیز احساسات)
This is an mT5 model for sentiment analysis.
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_name = "persiannlp/mt5-base-parsinlu-sentiment-analysis"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(context, query, **generator_args):
input_ids = tokenizer.encode(context + "<sep>" + query, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model(
"یک فیلم ضعیف بی محتوا بدون فیلمنامه . شوخی های سخیف .",
"نظر شما در مورد داستان، فیلمنامه، دیالوگ ها و موضوع فیلم لونه زنبور چیست؟"
)
run_model(
"فیلم تا وسط فیلم یعنی دقیقا تا جایی که معلوم میشه بچه های املشی دنبال رضان خیلی خوب و جذاب پیش میره ولی دقیقا از همونجاش سکته میزنه و خلاص...",
"نظر شما به صورت کلی در مورد فیلم ژن خوک چیست؟"
)
run_model(
"اصلا به هیچ عنوان علاقه نداشتم اجرای می سی سی پی نشسته میمیرد روی پرده سینما ببینم دیالوگ های تکراری هلیکوپتر ماشین آلندلون لئون پاپیون آخه چرااااااااااااااا همون حسی که توی تالار وحدت بعد از نیم ساعت به سرم اومد امشب توی سالن سینما تجربه کردم ،حس گریز از سالن....... (ノಠ益ಠ)ノ ",
" نظر شما در مورد صداگذاری و جلوه های صوتی فیلم مسخرهباز چیست؟"
)
run_model(
" گول نخورید این رنگارنگ مینو نیست برای شرکت گرجیه و متاسفانه این محصولش اصلا مزه رنگارنگی که انتظار دارید رو نمیده ",
" نظر شما در مورد عطر، بو، و طعم این بیسکویت و ویفر چیست؟"
)
run_model(
"در مقایسه با سایر برندهای موجود در بازار با توجه به حراجی که داشت ارزانتر ب",
" شما در مورد قیمت و ارزش خرید این حبوبات و سویا چیست؟"
)
run_model(
"من پسرم عاشق ایناس ولی دیگه به خاطر حفظ محیط زیست فقط زمانهایی که مجبور باشم شیر دونه ای میخرم و سعی میکنم دیگه کمتر شیر با بسته بندی تتراپک استفاده کنم ",
"نظر شما به صورت کلی در مورد این شیر چیست؟"
)
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-base-parsinlu-qqp-query-paraphrasing | persiannlp | 2021-09-23T16:20:00Z | 55 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"query-paraphrasing",
"mt5",
"persian",
"farsi",
"fa",
"multilingual",
"dataset:parsinlu",
"dataset:qqp",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- query-paraphrasing
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
- qqp
metrics:
- accuracy
---
# Detection of Paraphrased Queries (تشخصیص سوالات هممعنی)
This is a model for detection of paraphrased queries.
Here is an example of how you can run this model:
```python
from transformers import MT5Config, MT5ForConditionalGeneration, MT5Tokenizer
model_name = "persiannlp/mt5-base-parsinlu-qqp-query-paraphrasing"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(q1, q2, **generator_args):
input_ids = tokenizer.encode(f"{q1}<sep>{q2}", return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("چه چیزی باعث پوکی استخوان می شود؟", "چه چیزی باعث مقاومت استخوان در برابر ضربه می شود؟")
run_model("من دارم به این فکر میکنم چرا ساعت هفت نمیشه؟", "چرا من ساده فکر میکردم به عشقت پابندی؟")
run_model("دعای کمیل در چه روزهایی خوانده می شود؟", "دعای جوشن کبیر در چه شبی خوانده می شود؟")
run_model("دعای کمیل در چه روزهایی خوانده می شود؟", "دعای جوشن کبیر در چه شبی خوانده می شود؟")
run_model("شناسنامه در چه سالی وارد ایران شد؟", "سیب زمینی در چه سالی وارد ایران شد؟")
run_model("سیب زمینی چه زمانی وارد ایران شد؟", "سیب زمینی در چه سالی وارد ایران شد؟")
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-base-parsinlu-opus-translation_fa_en | persiannlp | 2021-09-23T16:19:57Z | 6,058 | 7 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"machine-translation",
"persian",
"farsi",
"fa",
"multilingual",
"dataset:parsinlu",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- machine-translation
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- sacrebleu
---
# Machine Translation (ترجمهی ماشینی)
This is an mT5-based model for machine translation (Persian -> English).
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size = "base"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-opus-translation_fa_en"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("ستایش خدای را که پروردگار جهانیان است.")
run_model("در هاید پارک کرنر بر گلدانی ایستاده موعظه میکند؛")
run_model("وی از تمامی بلاگرها، سازمانها و افرادی که از وی پشتیبانی کردهاند، تشکر کرد.")
run_model("مشابه سال ۲۰۰۱، تولید آمونیاک بی آب در ایالات متحده در سال ۲۰۰۰ تقریباً ۱۷،۴۰۰،۰۰۰ تن (معادل بدون آب) با مصرف ظاهری ۲۲،۰۰۰،۰۰۰ تن و حدود ۴۶۰۰۰۰۰ با واردات خالص مواجه شد. ")
run_model("می خواهم دکترای علوم کامپیوتر راجع به شبکه های اجتماعی را دنبال کنم، چالش حل نشده در شبکه های اجتماعی چیست؟")
```
which should give the following:
```
['the admiration of God, which is the Lord of the world.']
['At the Ford Park, the Crawford Park stands on a vase;']
['He thanked all the bloggers, the organizations, and the people who supported him']
['similar to the year 2001, the economy of ammonia in the United States in the']
['I want to follow the computer experts on social networks, what is the unsolved problem in']
```
which should give the following:
```
['Adoration of God, the Lord of the world.']
['At the High End of the Park, Conrad stands on a vase preaching;']
['She thanked all the bloggers, organizations, and men who had supported her.']
['In 2000, the lack of water ammonia in the United States was almost']
['I want to follow the computer science doctorate on social networks. What is the unsolved challenge']
```
Which should produce the following:
```
['the praise of God, the Lord of the world.']
['At the Hyde Park Corner, Carpenter is preaching on a vase;']
['He thanked all the bloggers, organizations, and people who had supported him.']
['Similarly in 2001, the production of waterless ammonia in the United States was']
['I want to pursue my degree in Computer Science on social networks, what is the']
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-base-parsinlu-multiple-choice | persiannlp | 2021-09-23T16:19:55Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"multiple-choice",
"mt5",
"persian",
"farsi",
"fa",
"multilingual",
"dataset:parsinlu",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2022-03-02T23:29:05Z | ---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- multiple-choice
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- accuracy
---
# Multiple-Choice Question Answering (مدل برای پاسخ به سوالات چهار جوابی)
This is an mT5-based model for multiple-choice question answering.
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size = "base"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-multiple-choice"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("وسیع ترین کشور جهان کدام است؟ <sep> آمریکا <sep> کانادا <sep> روسیه <sep> چین")
run_model("طامع یعنی ؟ <sep> آزمند <sep> خوش شانس <sep> محتاج <sep> مطمئن")
run_model(
"زمینی به ۳۱ قطعه متساوی مفروض شده است و هر روز مساحت آماده شده برای احداث، دو برابر مساحت روز قبل است.اگر پس از (۵ روز) تمام زمین آماده شده باشد، در چه روزی یک قطعه زمین آماده شده <sep> روز اول <sep> روز دوم <sep> روز سوم <sep> هیچکدام")
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mt5-base-parsinlu-arc-comqa-obqa-multiple-choice | persiannlp | 2021-09-23T16:19:52Z | 11 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"multiple-choice",
"mt5",
"persian",
"farsi",
"fa",
"multilingual",
"dataset:parsinlu",
"dataset:commonsenseqa",
"dataset:arc",
"dataset:openbookqa",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2022-03-02T23:29:05Z | ---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- multiple-choice
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
- commonsenseqa
- arc
- openbookqa
metrics:
- accuracy
---
# Multiple-Choice Question Answering (مدل برای پاسخ به سوالات چهار جوابی)
This is an mT5-based model for multiple-choice question answering.
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size = "base"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-arc-comqa-obqa-multiple-choice"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("وسیع ترین کشور جهان کدام است؟ <sep> آمریکا <sep> کانادا <sep> روسیه <sep> چین")
run_model("طامع یعنی ؟ <sep> آزمند <sep> خوش شانس <sep> محتاج <sep> مطمئن")
run_model(
"زمینی به ۳۱ قطعه متساوی مفروض شده است و هر روز مساحت آماده شده برای احداث، دو برابر مساحت روز قبل است.اگر پس از (۵ روز) تمام زمین آماده شده باشد، در چه روزی یک قطعه زمین آماده شده <sep> روز اول <sep> روز دوم <sep> روز سوم <sep> هیچکدام")
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
persiannlp/mbert-base-parsinlu-multiple-choice | persiannlp | 2021-09-23T16:19:49Z | 70 | 3 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"multiple-choice",
"mbert",
"persian",
"farsi",
"text-classification",
"fa",
"multilingual",
"dataset:parsinlu",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- multiple-choice
- mbert
- persian
- farsi
pipeline_tag: text-classification
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- accuracy
---
# Multiple-Choice Question Answering (مدل برای پاسخ به سوالات چهار جوابی)
This is an mBERT-based model for multiple-choice question answering.
Here is an example of how you can run this model:
```python
from typing import List
import torch
from transformers import AutoConfig, AutoModelForMultipleChoice, AutoTokenizer
model_name = "persiannlp/mbert-base-parsinlu-multiple-choice"
tokenizer = AutoTokenizer.from_pretrained(model_name)
config = AutoConfig.from_pretrained(model_name)
model = AutoModelForMultipleChoice.from_pretrained(model_name, config=config)
def run_model(question: str, candicates: List[str]):
assert len(candicates) == 4, "you need four candidates"
choices_inputs = []
for c in candicates:
text_a = "" # empty context
text_b = question + " " + c
inputs = tokenizer(
text_a,
text_b,
add_special_tokens=True,
max_length=128,
padding="max_length",
truncation=True,
return_overflowing_tokens=True,
)
choices_inputs.append(inputs)
input_ids = torch.LongTensor([x["input_ids"] for x in choices_inputs])
output = model(input_ids=input_ids)
print(output)
return output
run_model(question="وسیع ترین کشور جهان کدام است؟", candicates=["آمریکا", "کانادا", "روسیه", "چین"])
run_model(question="طامع یعنی ؟", candicates=["آزمند", "خوش شانس", "محتاج", "مطمئن"])
run_model(
question="زمینی به ۳۱ قطعه متساوی مفروض شده است و هر روز مساحت آماده شده برای احداث، دو برابر مساحت روز قبل است.اگر پس از (۵ روز) تمام زمین آماده شده باشد، در چه روزی یک قطعه زمین آماده شده ",
candicates=["روز اول", "روز دوم", "روز سوم", "هیچکدام"])
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
pere/norwegian-t5 | pere | 2021-09-23T16:19:43Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"summary",
"no",
"dataset:oscar",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
language: no
license: cc-by-4.0
tags:
- summary
datasets:
- oscar
widget:
- text: 'translate Bokmål to Nynorsk: Dette er en test!'
---
# Norwegian T5 - small - Oscar
## Description
This is a sample reference model trained only on the Oscar Corpus for a day on a TPU v3-8. Do not use this model as anything other than a simple reference point. |
pere/norwegian-t5-base-NCC | pere | 2021-09-23T16:19:38Z | 4 | 0 | transformers | [
"transformers",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"seq2seq",
"no",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
language: no
license: cc-by-4.0
tags:
- seq2seq
datasets:
- Norwegian Nynorsk/Bokmål
---
# 🇳🇴 Norwegian T5 Base model Trained on the NCC🇳🇴
This is a Norwegian T5-base model trained on the Norwegian Colossal Corpus (NCC) on a TPU v3-8. It needs to be fine-tuned on a specific task before being used for anything.
The model is currently training and is expected to be finished by the end of August 2021.
The following settings were used in training:
```bash
./run_t5_mlm_flax.py \
--output_dir="./" \
--model_type="t5" \
--config_name="./" \
--tokenizer_name="./" \
--train_file /mnt/disks/flaxdisk/corpus/norwegian_colossal_corpus_train.json \
--validation_file /mnt/disks/flaxdisk/corpus/norwegian_colossal_corpus_validation.json \
--max_seq_length="128" \
--weight_decay="0.01" \
--per_device_train_batch_size="128" \
--per_device_eval_batch_size="128" \
--learning_rate="8e-3" \
--warmup_steps="2000" \
--overwrite_output_dir \
--cache_dir /mnt/disks/flaxdisk/cache/ \
--num_train_epochs="3" \
--adam_beta1="0.9" \
--adam_beta2="0.98" \
--logging_steps="100" \
--save_steps="2500" \
--eval_steps="2500" \
--preprocessing_num_workers 96 \
--adafactor \
--push_to_hub
``` |
pere/norwegian-mt5 | pere | 2021-09-23T16:19:28Z | 5 | 0 | transformers | [
"transformers",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"seq2seq",
"no",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
language: no
license: cc-by-4.0
tags:
- seq2seq
datasets:
- Norwegian Nynorsk/Bokmål
---
# 🇳🇴 Norwegian mT5 Base model 🇳🇴
This mT5-base model is trained from the mT5 checkpoint on a 19GB Balanced Bokmål-Nynorsk Corpus.
Parameters used in training:
```bash
python3 ./run_t5_mlm_flax_streaming.py \
--model_name_or_path="./norwegian-t5-base" \
--output_dir="./norwegian-t5-base" \
--config_name="./norwegian-t5-base" \
--tokenizer_name="./norwegian-t5-base" \
--dataset_name="pere/nb_nn_balanced_shuffled" \
--max_seq_length="512" \
--per_device_train_batch_size="32" \
--per_device_eval_batch_size="32" \
--learning_rate="0.005" \
--weight_decay="0.001" \
--warmup_steps="2000" \
--overwrite_output_dir \
--logging_steps="100" \
--save_steps="500" \
--eval_steps="500" \
--push_to_hub \
--preprocessing_num_workers 96 \
--adafactor
``` |
pere/nb-nn-dev2 | pere | 2021-09-23T16:19:18Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"translation",
"no",
"dataset:oscar",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | translation | 2022-03-02T23:29:05Z | ---
language: no
license: cc-by-4.0
tags:
- translation
datasets:
- oscar
widget:
- text: Skriv inn en tekst som du ønsker å oversette til en annen målform.
---
# Norwegian T5 - Translation Bokmål Nynorsk - Development
## Description
This is the development version of the Bokmål-Nynorsk translator. If you want something stable, please use [this version](https://huggingface.co/pere/nb-nn-translation/) instead.
Here is an example of how to use the model from Python:
```python
# Import libraries
from transformers import T5ForConditionalGeneration, AutoTokenizer
model = T5ForConditionalGeneration.from_pretrained('pere/nb-nn-dev',from_flax=True)
tokenizer = AutoTokenizer.from_pretrained('pere/nb-nn-dev')
#Encode the text
text = "Hun vil ikke gi bort sine personlige data."
inputs = tokenizer.encode(text, return_tensors="pt")
outputs = model.generate(inputs, max_length=255, num_beams=4, early_stopping=True)
#Decode and print the result
print(tokenizer.decode(outputs[0]))
```
Or if you like to use the pipeline instead
```python
# Set up the pipeline
from transformers import pipeline
translator = pipeline("translation", model='pere/nb-nn-dev')
# Do the translation
text = "Hun vil ikke gi bort sine personlige data."
print(translator(text, max_length=255))
```
|
osanseviero/corenlp_german | osanseviero | 2021-09-23T16:16:51Z | 0 | 0 | null | [
"corenlp",
"ge",
"license:gpl",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
tags:
- corenlp
library_tag: corenlp
language:
- ge
license: gpl
---
# CoreNLP model for German
CoreNLP is your one stop shop for natural language processing in Java! CoreNLP enables users to derive linguistic annotations for text, including token and sentence boundaries, parts of speech, named entities, numeric and time values, dependency and constituency parses, coreference, sentiment, quote attributions, and relations.
Find more about it in [our website](https://stanfordnlp.github.io/CoreNLP) and our [GitHub repository](https://github.com/stanfordnlp/CoreNLP).
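CoreNLP itself runs on the JVM, but it can also be driven from Python through the `stanza` client. The sketch below is a rough illustration, assuming a local CoreNLP installation with the German models available via `CORENLP_HOME`; the annotator list and the `german` properties shorthand are illustrative choices, not taken from this repository.
```python
# Requires a local CoreNLP distribution plus the German models jar,
# with the CORENLP_HOME environment variable pointing at the install.
from stanza.server import CoreNLPClient

text = "Die Universität Stuttgart liegt in Baden-Württemberg."

with CoreNLPClient(
    properties="german",                              # German pipeline defaults
    annotators=["tokenize", "ssplit", "pos", "ner"],
    memory="4G",
    be_quiet=True,
) as client:
    ann = client.annotate(text)
    for sentence in ann.sentence:
        for token in sentence.token:
            print(token.word, token.pos, token.ner)
```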
|
osanseviero/corenlp_english-kbp | osanseviero | 2021-09-23T16:16:46Z | 0 | 0 | null | [
"corenlp",
"en",
"license:gpl",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
tags:
- corenlp
library_tag: corenlp
language:
- en
license: gpl
---
# CoreNLP model for English
CoreNLP is your one stop shop for natural language processing in Java! CoreNLP enables users to derive linguistic annotations for text, including token and sentence boundaries, parts of speech, named entities, numeric and time values, dependency and constituency parses, coreference, sentiment, quote attributions, and relations.
Find more about it in [our website](https://stanfordnlp.github.io/CoreNLP) and our [GitHub repository](https://github.com/stanfordnlp/CoreNLP).
|
osanseviero/corenlp_english-extra | osanseviero | 2021-09-23T16:16:44Z | 0 | 0 | null | [
"corenlp",
"en",
"license:gpl",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
tags:
- corenlp
library_tag: corenlp
language:
- en
license: gpl
---
# CoreNLP model for English
CoreNLP is your one stop shop for natural language processing in Java! CoreNLP enables users to derive linguistic annotations for text, including token and sentence boundaries, parts of speech, named entities, numeric and time values, dependency and constituency parses, coreference, sentiment, quote attributions, and relations.
Find more about it in [our website](https://stanfordnlp.github.io/CoreNLP) and our [GitHub repository](https://github.com/stanfordnlp/CoreNLP).
|
osanseviero/ConvTasNet_Libri1Mix_enhsingle_16k | osanseviero | 2021-09-23T16:16:32Z | 0 | 0 | null | [
"pytorch",
"audio",
"ConvTasNet",
"audio-to-audio",
"dataset:Libri1Mix",
"dataset:enh_single",
"license:cc-by-sa-4.0",
"region:us"
] | audio-to-audio | 2022-03-02T23:29:05Z | ---
tags:
- audio
- ConvTasNet
- audio-to-audio
datasets:
- Libri1Mix
- enh_single
license: cc-by-sa-4.0
library_tag: generic
---
## Clone from Asteroid model `JorisCos/ConvTasNet_Libri1Mix_enhsignle_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `enh_single` task of the Libri1Mix dataset.
Training config:
```yml
data:
n_src: 1
sample_rate: 16000
segment: 3
task: enh_single
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
kernel_size: 32
n_filters: 512
stride: 16
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 1
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 6
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results:
On Libri1Mix min test set :
```yml
si_sdr: 14.743051006476085
si_sdr_imp: 11.293269700616385
sdr: 15.300522933671061
sdr_imp: 11.797860134458015
sir: Infinity
sir_imp: NaN
sar: 15.300522933671061
sar_imp: 11.797860134458015
stoi: 0.9310514162434267
stoi_imp: 0.13513159270288563
```
License notice:
This work "ConvTasNet_Libri1Mix_enhsignle_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"ConvTasNet_Libri1Mix_enhsignle_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino |
mpariente/ConvTasNet_Libri1Mix_enhsingle_8k | mpariente | 2021-09-23T16:12:15Z | 19 | 1 | asteroid | [
"asteroid",
"pytorch",
"audio",
"ConvTasNet",
"dataset:LibriMix",
"dataset:enh_single",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
tags:
- asteroid
- audio
- ConvTasNet
datasets:
- LibriMix
- enh_single
license: cc-by-sa-4.0
---
## Asteroid model
Imported from this Zenodo [model page](https://zenodo.org/record/3970768).
## Description:
This model was trained by Brij Mohan using the Librimix/ConvTasNet recipe in Asteroid.
It was trained on the `enh_single` task of the Libri1Mix dataset.
## Training config:
```yaml
data:
n_src: 1
sample_rate: 8000
segment: 3
task: enh_single
train_dir: data/wav8k/min/train-360
valid_dir: data/wav8k/min/dev
filterbank:
kernel_size: 16
n_filters: 512
stride: 8
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 1
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 24
early_stop: True
epochs: 200
half_lr: True
```
## Results:
```yaml
si_sdr: 14.783675142685572
si_sdr_imp: 11.464625198953202
sdr: 15.497505907983102
sdr_imp: 12.07230150154914
sar: 15.497505907983102
sar_imp: 12.07230150154914
stoi: 0.9270030254700518
stoi_imp: 0.1320547197597893
```
## License notice:
This work "ConvTasNet_Libri1Mix_enhsingle_8k"
is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by
[Vassil Panayotov](https://github.com/vdp),
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).
"ConvTasNet_Libri1Mix_enhsingle_8k"
is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/)
by Manuel Pariente. |
mhu-coder/ConvTasNet_Libri1Mix_enhsingle | mhu-coder | 2021-09-23T16:10:04Z | 9 | 1 | asteroid | [
"asteroid",
"pytorch",
"audio",
"ConvTasNet",
"audio-to-audio",
"dataset:libri1mix",
"dataset:enh_single",
"license:cc-by-sa-4.0",
"region:us"
] | audio-to-audio | 2022-03-02T23:29:05Z | ---
tags:
- asteroid
- audio
- ConvTasNet
- audio-to-audio
datasets:
- libri1mix
- enh_single
license: cc-by-sa-4.0
---
## Asteroid model `mhu-coder/ConvTasNet_Libri1Mix_enhsingle`
Imported from [Zenodo](https://zenodo.org/record/4301955#.X9cj98Jw0bY)
### Description:
This model was trained by Mathieu Hu using the librimix/ConvTasNet recipe in
[Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `enh_single` task of the Libri1Mix dataset.
### Training config:
```yaml
data:
n_src: 1
sample_rate: 16000
segment: 3
task: enh_single
train_dir: data/wav16k/min/train-100
valid_dir: data/wav16k/min/dev
filterbank:
kernel_size: 16
n_filters: 512
stride: 8
main_args:
exp_dir: exp/train_convtasnet_f34664b9
help: None
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 1
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
positional arguments:
training:
batch_size: 2
early_stop: True
epochs: 200
half_lr: True
num_workers: 4
```
### Results:
```yaml
si_sdr: 13.938355526049932
si_sdr_imp: 10.488574220190232
sdr: 14.567380104207393
sdr_imp: 11.064717304994337
sir: inf
sir_imp: nan
sar: 14.567380104207393
sar_imp: 11.064717304994337
stoi: 0.9201010933251715
stoi_imp: 0.1241812697846321
```
### License notice:
This work "ConvTasNet_Libri1Mx_enhsingle" is a derivative of [CSR-I (WSJ0) Complete](https://catalog.ldc.upenn.edu/LDC93S6A)
by [LDC](https://www.ldc.upenn.edu/), used under [LDC User Agreement for
Non-Members](https://catalog.ldc.upenn.edu/license/ldc-non-members-agreement.pdf) (Research only).
"ConvTasNet_Libri1Mix_enhsingle" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/)
by Mathieu Hu.
|
julien-c/fasttext-language-id | julien-c | 2021-09-23T16:04:33Z | 6,979 | 3 | fasttext | [
"fasttext",
"multilingual",
"dataset:wikipedia",
"dataset:tatoeba",
"dataset:setimes",
"arxiv:1607.01759",
"arxiv:1612.03651",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: multilingual
tags:
- fasttext
datasets:
- wikipedia
- tatoeba
- setimes
license: cc-by-sa-4.0
library_name: fasttext
inference: false
---
## FastText model for language identification
#### ♻️ Imported from https://fasttext.cc/docs/en/language-identification.html
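Inference goes through the `fasttext` Python bindings. The snippet below is a sketch: the file name inside this repository is an assumption (the official release ships the model as `lid.176.bin`), so adjust it to whatever file the repo actually contains.
```python
import fasttext
from huggingface_hub import hf_hub_download

# Assumed file name; the official fastText release calls it "lid.176.bin".
model_path = hf_hub_download("julien-c/fasttext-language-id", "lid.176.bin")
model = fasttext.load_model(model_path)

labels, probs = model.predict("Bonjour, comment allez-vous ?", k=3)
print(list(zip(labels, probs)))  # e.g. [('__label__fr', 0.99...), ...]
```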
> [1] A. Joulin, E. Grave, P. Bojanowski, T. Mikolov, Bag of Tricks for Efficient Text Classification
```bibtex
@article{joulin2016bag,
title={Bag of Tricks for Efficient Text Classification},
author={Joulin, Armand and Grave, Edouard and Bojanowski, Piotr and Mikolov, Tomas},
journal={arXiv preprint arXiv:1607.01759},
year={2016}
}
```
> [2] A. Joulin, E. Grave, P. Bojanowski, M. Douze, H. Jégou, T. Mikolov, FastText.zip: Compressing text classification models
```bibtex
@article{joulin2016fasttext,
title={FastText.zip: Compressing text classification models},
author={Joulin, Armand and Grave, Edouard and Bojanowski, Piotr and Douze, Matthijs and J{\'e}gou, H{\'e}rve and Mikolov, Tomas},
journal={arXiv preprint arXiv:1612.03651},
year={2016}
}
``` |
julien-c/DPRNNTasNet-ks16_WHAM_sepclean | julien-c | 2021-09-23T16:04:27Z | 72 | 2 | asteroid | [
"asteroid",
"pytorch",
"audio-to-audio",
"audio",
"audio-source-separation",
"dataset:wham",
"dataset:sep_clean",
"arxiv:2005.04132",
"license:cc-by-sa-4.0",
"region:us"
] | audio-to-audio | 2022-03-02T23:29:05Z | ---
tags:
- audio-to-audio
- asteroid
- audio
- audio-source-separation
datasets:
- wham
- sep_clean
license: cc-by-sa-4.0
---
## Asteroid model `mpariente/DPRNNTasNet(ks=16)_WHAM!_sepclean`
♻️ Imported from https://zenodo.org/record/3903795#.X8pMBRNKjUI
This model was trained by Manuel Pariente using the wham/DPRNN recipe in [Asteroid](https://github.com/asteroid-team/asteroid). It was trained on the sep_clean task of the WHAM! dataset.
### Demo: How to use in Asteroid
```python
# coming soon
```
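Until the official demo lands, a minimal sketch using Asteroid's pretrained-model loader might look like this; the `separate` helper and its output naming follow standard Asteroid behaviour, but treat the exact call as illustrative rather than an official recipe.
```python
from asteroid.models import BaseModel

# Load the checkpoint from the Hub (repo id of this model page).
model = BaseModel.from_pretrained("julien-c/DPRNNTasNet-ks16_WHAM_sepclean")

# File-level separation: writes mixture_est1.wav and mixture_est2.wav
# next to the input file. The mixture should be 8 kHz, as in WHAM!.
model.separate("mixture.wav", force_overwrite=True)
```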
### Training config
- data:
- mode: min
- nondefault_nsrc: None
- sample_rate: 8000
- segment: 2.0
- task: sep_clean
- train_dir: data/wav8k/min/tr
- valid_dir: data/wav8k/min/cv
- filterbank:
- kernel_size: 16
- n_filters: 64
- stride: 8
- main_args:
- exp_dir: exp/train_dprnn_ks16/
- help: None
- masknet:
- bidirectional: True
- bn_chan: 128
- chunk_size: 100
- dropout: 0
- hid_size: 128
- hop_size: 50
- in_chan: 64
- mask_act: sigmoid
- n_repeats: 6
- n_src: 2
- out_chan: 64
- optim:
- lr: 0.001
- optimizer: adam
- weight_decay: 1e-05
- positional arguments:
- training:
- batch_size: 6
- early_stop: True
- epochs: 200
- gradient_clipping: 5
- half_lr: True
- num_workers: 6
#### Results
- `si_sdr`: 18.227683982688003
- `si_sdr_imp`: 18.22883576588251
- `sdr`: 18.617789605060587
- `sdr_imp`: 18.466745426438173
- `sir`: 29.22773720052717
- `sir_imp`: 29.07669302190474
- `sar`: 19.116352171914485
- `sar_imp`: -130.06009796503054
- `stoi`: 0.9722025377865715
- `stoi_imp`: 0.23415680987800583
### Citing Asteroid
```BibTex
@inproceedings{Pariente2020Asteroid,
title={Asteroid: the {PyTorch}-based audio source separation toolkit for researchers},
author={Manuel Pariente and Samuele Cornell and Joris Cosentino and Sunit Sivasankaran and
Efthymios Tzinis and Jens Heitkaemper and Michel Olvera and Fabian-Robert Stöter and
Mathieu Hu and Juan M. Martín-Doñas and David Ditter and Ariel Frank and Antoine Deleforge
and Emmanuel Vincent},
year={2020},
booktitle={Proc. Interspeech},
}
```
Or on arXiv:
```bibtex
@misc{pariente2020asteroid,
title={Asteroid: the PyTorch-based audio source separation toolkit for researchers},
author={Manuel Pariente and Samuele Cornell and Joris Cosentino and Sunit Sivasankaran and Efthymios Tzinis and Jens Heitkaemper and Michel Olvera and Fabian-Robert Stöter and Mathieu Hu and Juan M. Martín-Doñas and David Ditter and Ariel Frank and Antoine Deleforge and Emmanuel Vincent},
year={2020},
eprint={2005.04132},
archivePrefix={arXiv},
primaryClass={eess.AS}
}
``` |
JorisCos/ConvTasNet_Libri3Mix_sepnoisy_8k | JorisCos | 2021-09-23T15:49:10Z | 15 | 2 | asteroid | [
"asteroid",
"pytorch",
"audio",
"ConvTasNet",
"audio-to-audio",
"dataset:Libri3Mix",
"dataset:sep_noisy",
"license:cc-by-sa-4.0",
"region:us"
] | audio-to-audio | 2022-03-02T23:29:04Z | ---
tags:
- asteroid
- audio
- ConvTasNet
- audio-to-audio
datasets:
- Libri3Mix
- sep_noisy
license: cc-by-sa-4.0
---
## Asteroid model `JorisCos/ConvTasNet_Libri3Mix_sepnoisy_8k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_noisy` task of the Libri3Mix dataset.
Training config:
```yml
data:
n_src: 3
sample_rate: 8000
segment: 3
task: sep_noisy
train_dir: data/wav8k/min/train-360
valid_dir: data/wav8k/min/dev
filterbank:
kernel_size: 16
n_filters: 512
stride: 8
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 24
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results:
On Libri3Mix min test set :
```yml
si_sdr: 5.978836560066222
si_sdr_imp: 10.388889689413096
sdr: 6.8651365291740225
sdr_imp: 10.928018056925016
sir: 14.997089638783114
sir_imp: 18.08248357801549
sar: 8.127504792061933
sar_imp: -0.7869320540959925
stoi: 0.7669414686111115
stoi_imp: 0.20416563213078837
```
License notice:
This work "ConvTasNet_Libri3Mix_sepnoisy_8k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"ConvTasNet_Libri3Mix_sepnoisy_8k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino |
JorisCos/ConvTasNet_Libri2Mix_sepnoisy_8k | JorisCos | 2021-09-23T15:49:01Z | 10 | 1 | asteroid | [
"asteroid",
"pytorch",
"audio",
"ConvTasNet",
"audio-to-audio",
"dataset:Libri2Mix",
"dataset:sep_noisy",
"license:cc-by-sa-4.0",
"region:us"
] | audio-to-audio | 2022-03-02T23:29:04Z | ---
tags:
- asteroid
- audio
- ConvTasNet
- audio-to-audio
datasets:
- Libri2Mix
- sep_noisy
license: cc-by-sa-4.0
---
## Asteroid model `JorisCos/ConvTasNet_Libri2Mix_sepnoisy_8k`
Imported from [Zenodo](https://zenodo.org/record/3874420#.X9I6NcLjJH4)
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_noisy` task of the Libri2Mix dataset.
Training config:
```yml
data:
n_src: 2
sample_rate: 8000
segment: 3
task: sep_noisy
train_dir: data/wav8k/min/train-360
valid_dir: data/wav8k/min/dev
filterbank:
kernel_size: 16
n_filters: 512
stride: 8
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 24
early_stop: True
epochs: 200
half_lr: True
num_workers: 4
```
Results:
On Libri2Mix min test set :
```yml
si_sdr: 9.944424856077259
si_sdr_imp: 11.939395359731192
sdr: 10.701526190782072
sdr_imp: 12.481757547845662
sir: 22.633644975545575
sir_imp: 22.45666740833025
sar: 11.131644100944868
sar_imp: 4.248489589311784
stoi: 0.852048619949357
stoi_imp: 0.2071994899565506
```
License notice:
This work "ConvTasNet_Libri2Mix_sepnoisy_8k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"ConvTasNet_Libri2Mix_sepnoisy_8k" is licensed under A[Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino |
JorisCos/ConvTasNet_Libri2Mix_sepclean_8k | JorisCos | 2021-09-23T15:48:56Z | 54 | 1 | asteroid | [
"asteroid",
"pytorch",
"audio",
"ConvTasNet",
"audio-to-audio",
"dataset:Libri2Mix",
"dataset:sep_clean",
"license:cc-by-sa-4.0",
"region:us"
] | audio-to-audio | 2022-03-02T23:29:04Z | ---
tags:
- asteroid
- audio
- ConvTasNet
- audio-to-audio
datasets:
- Libri2Mix
- sep_clean
license: cc-by-sa-4.0
---
## Asteroid model `JorisCos/ConvTasNet_Libri2Mix_sepclean_8k`
Imported from [Zenodo](https://zenodo.org/record/3873572#.X9M69cLjJH4)
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_clean` task of the Libri2Mix dataset.
Training config:
```yaml
data:
n_src: 2
sample_rate: 8000
segment: 3
task: sep_clean
train_dir: data/wav8k/min/train-360
valid_dir: data/wav8k/min/dev
filterbank:
kernel_size: 16
n_filters: 512
stride: 8
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 24
early_stop: True
epochs: 200
half_lr: True
num_workers: 2
```
Results :
On Libri2Mix min test set :
```yaml
si_sdr: 14.764543634468069
si_sdr_imp: 14.764029375607246
sdr: 15.29337970745095
sdr_imp: 15.114146605113111
sir: 24.092904661115366
sir_imp: 23.913669683141528
sar: 16.06055906916849
sar_imp: -51.980784441287454
stoi: 0.9311142440593033
stoi_imp: 0.21817376142710482
```
License notice:
This work "ConvTasNet_Libri2Mix_sepclean_8k"
is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/). "ConvTasNet_Libri2Mix_sepclean_8k"
is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Cosentino Joris. |
JorisCos/ConvTasNet_Libri2Mix_sepclean_16k | JorisCos | 2021-09-23T15:48:54Z | 2,594 | 2 | asteroid | [
"asteroid",
"pytorch",
"audio",
"ConvTasNet",
"audio-to-audio",
"dataset:Libri2Mix",
"dataset:sep_clean",
"license:cc-by-sa-4.0",
"region:us"
] | audio-to-audio | 2022-03-02T23:29:04Z | ---
tags:
- asteroid
- audio
- ConvTasNet
- audio-to-audio
datasets:
- Libri2Mix
- sep_clean
license: cc-by-sa-4.0
---
## Asteroid model `JorisCos/ConvTasNet_Libri2Mix_sepclean_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_clean` task of the Libri2Mix dataset.
Training config:
```yaml
data:
n_src: 2
sample_rate: 16000
segment: 3
task: sep_clean
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
kernel_size: 32
n_filters: 512
stride: 16
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 6
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results :
On Libri2Mix min test set :
```yaml
si_sdr: 15.243671356901526
si_sdr_imp: 15.243034178473609
sdr: 15.668108919568112
sdr_imp: 15.578229918028036
sir: 25.295100756629957
sir_imp: 25.205219921301754
sar: 16.307682590197313
sar_imp: -51.64989963759405
stoi: 0.9394951175291422
stoi_imp: 0.22640192740016568
```
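A short usage sketch added for illustration; it assumes a recent `asteroid` release in which `BaseModel.separate` accepts a wav file path and writes the estimated sources next to it:
```python
from asteroid.models import BaseModel

# "mixture.wav" is assumed to be a mono 16 kHz two-speaker mixture.
model = BaseModel.from_pretrained("JorisCos/ConvTasNet_Libri2Mix_sepclean_16k")
# Recent Asteroid versions write the estimates as mixture_est1.wav and mixture_est2.wav.
model.separate("mixture.wav")
```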
License notice:
This work "ConvTasNet_Libri2Mix_sepclean_16k"
is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/). "ConvTasNet_Libri2Mix_sepclean_16k"
is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Cosentino Joris. |
JorisCos/ConvTasNet_Libri1Mix_enhsingle_16k | JorisCos | 2021-09-23T15:48:51Z | 89 | 3 | asteroid | [
"asteroid",
"pytorch",
"audio",
"ConvTasNet",
"audio-to-audio",
"dataset:Libri1Mix",
"dataset:enh_single",
"license:cc-by-sa-4.0",
"region:us"
] | audio-to-audio | 2022-03-02T23:29:04Z | ---
tags:
- asteroid
- audio
- ConvTasNet
- audio-to-audio
datasets:
- Libri1Mix
- enh_single
license: cc-by-sa-4.0
---
## Asteroid model `JorisCos/ConvTasNet_Libri1Mix_enhsingle_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `enh_single` task of the Libri1Mix dataset.
Training config:
```yml
data:
n_src: 1
sample_rate: 16000
segment: 3
task: enh_single
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
kernel_size: 32
n_filters: 512
stride: 16
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 1
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 6
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results:
On Libri1Mix min test set :
```yml
si_sdr: 14.743051006476085
si_sdr_imp: 11.293269700616385
sdr: 15.300522933671061
sdr_imp: 11.797860134458015
sir: Infinity
sir_imp: NaN
sar: 15.300522933671061
sar_imp: 11.797860134458015
stoi: 0.9310514162434267
stoi_imp: 0.13513159270288563
```
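A brief enhancement sketch added for illustration, assuming `asteroid` and `soundfile` are installed and that the noisy recording is mono 16 kHz:
```python
import soundfile as sf
import torch
from asteroid.models import BaseModel

model = BaseModel.from_pretrained("JorisCos/ConvTasNet_Libri1Mix_enhsingle_16k")

noisy, rate = sf.read("noisy_speech.wav", dtype="float32")  # assumed mono, 16 kHz
assert rate == 16000
with torch.no_grad():
    enhanced = model(torch.from_numpy(noisy).unsqueeze(0))  # (1, n_src=1, time)
sf.write("enhanced_speech.wav", enhanced.squeeze().cpu().numpy(), rate)
```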
License notice:
This work "ConvTasNet_Libri1Mix_enhsignle_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"ConvTasNet_Libri1Mix_enhsignle_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino |
groadabike/ConvTasNet_DAMP-VSEP_enhboth | groadabike | 2021-09-23T13:57:35Z | 4 | 0 | asteroid | [
"asteroid",
"pytorch",
"audio",
"ConvTasNet",
"audio-to-audio",
"dataset:DAMP-VSEP",
"license:cc-by-sa-4.0",
"region:us"
] | audio-to-audio | 2022-03-02T23:29:05Z | ---
tags:
- asteroid
- audio
- ConvTasNet
- audio-to-audio
datasets:
- DAMP-VSEP
license: cc-by-sa-4.0
---
## Asteroid model `groadabike/ConvTasNet_DAMP-VSEP_enhboth`
Imported from [Zenodo](https://zenodo.org/record/3994193)
### Description:
This model was trained by Gerardo Roa Dabike using Asteroid. It was trained on the enh_both task of the DAMP-VSEP dataset.
### Training config:
```yaml
data:
channels: 1
n_src: 2
root_path: data
sample_rate: 16000
samples_per_track: 10
segment: 3.0
task: enh_both
filterbank:
kernel_size: 20
n_filters: 256
stride: 10
main_args:
exp_dir: exp/train_convtasnet
help: None
masknet:
bn_chan: 256
conv_kernel_size: 3
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 4
n_src: 2
norm_type: gLN
skip_chan: 256
optim:
lr: 0.0003
optimizer: adam
weight_decay: 0.0
positional arguments:
training:
batch_size: 12
early_stop: True
epochs: 50
half_lr: True
num_workers: 12
```
### Results:
```yaml
si_sdr: 14.018196157142519
si_sdr_imp: 14.017103133809577
sdr: 14.498517291333885
sdr_imp: 14.463389151567865
sir: 24.149634529133372
sir_imp: 24.11450638936735
sar: 15.338597389045935
sar_imp: -137.30634122401517
stoi: 0.7639416744417206
stoi_imp: 0.1843383526963759
```
### License notice:
This work "ConvTasNet_DAMP-VSEP_enhboth" is a derivative of DAMP-VSEP: Smule Digital Archive of Mobile Performances - Vocal Separation (Version 1.0.1) by Smule, Inc, used under Smule's Research Data License Agreement (Research only). "ConvTasNet_DAMP-VSEP_enhboth" is licensed under Attribution-ShareAlike 3.0 Unported by Gerardo Roa Dabike.
|
flax-community/nordic-roberta-wiki | flax-community | 2021-09-23T13:53:50Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"roberta",
"feature-extraction",
"swedish",
"fill-mask",
"sv",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
language: sv
license: cc-by-4.0
tags:
- swedish
- roberta
pipeline_tag: fill-mask
widget:
- text: Meningen med livet är <mask>.
---
# Nordic Roberta Wikipedia
## Description
Nordic RoBERTa model trained on the Swedish, Danish and Norwegian Wikipedia.
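A minimal fill-mask example for this checkpoint (added for illustration):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="flax-community/nordic-roberta-wiki")
# Swedish prompt; the model was trained on Swedish, Danish and Norwegian Wikipedia.
print(unmasker("Meningen med livet är <mask>.")[:3])
```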
## Evaluation
Evaluation on Named Entity recognition in Danish.
I fine-tuned each model for 3 epochs on DaNE, repeated this 5 times for each model, and calculated 95% confidence intervals for the means. Here are the results:
* xlm-roberta-base: 88.01 +- 0.43
* flax-community/nordic-roberta-wiki: 85.75 +- 0.69 (this model)
* Maltehb/danish-bert-botxo: 85.38 +- 0.55
* flax-community/roberta-base-danish: 80.14 +- 1.47
* flax-community/roberta-base-scandinavian: 78.03 +- 3.02
* Maltehb/-l-ctra-danish-electra-small-cased: 57.87 +- 3.19
* NbAiLab/nb-bert-base: 30.24 +- 1.21
* Randomly initialised RoBERTa model: 19.79 +- 2.00
Evaluation on Sentiment analysis in Danish.
Here are the results on the test set, where each model has been trained 5 times, and the “+-” refers to a 95% confidence interval of the mean score:
* Maltehb/danish-bert-botxo: 65.19 +- 0.53
* NbAiLab/nb-bert-base: 63.80 +- 0.77
* xlm-roberta-base: 63.55 +- 1.59
* flax-community/nordic-roberta-wiki: 56.46 +- 1.77
* flax-community/roberta-base-danish: 54.73 +- 8.96
* flax-community/roberta-base-scandinavian: 44.28 +- 9.21
* Maltehb/-l-ctra-danish-electra-small-cased: 47.78 +- 12.65
* Randomly initialised RoBERTa model: 36.96 +- 1.02
* Maltehb/roberta-base-scandinavian: 33.65 +- 8.32
## Model series
This model is part of a series of models training on TPU with Flax Jax during Huggingface Flax/Jax challenge.
## Gpt models
## Swedish Gpt
https://huggingface.co/birgermoell/swedish-gpt/
## Swedish gpt wiki
https://huggingface.co/flax-community/swe-gpt-wiki
# Nordic gpt wiki
https://huggingface.co/flax-community/nordic-gpt-wiki
## Dansk gpt wiki
https://huggingface.co/flax-community/dansk-gpt-wiki
## Norsk gpt wiki
https://huggingface.co/flax-community/norsk-gpt-wiki
## Roberta models
## Nordic Roberta Wiki
https://huggingface.co/flax-community/nordic-roberta-wiki
## Swe Roberta Wiki Oscar
https://huggingface.co/flax-community/swe-roberta-wiki-oscar
## Roberta Swedish Scandi
https://huggingface.co/birgermoell/roberta-swedish-scandi
## Roberta Swedish
https://huggingface.co/birgermoell/roberta-swedish
## Swedish T5 model
https://huggingface.co/birgermoell/t5-base-swedish
|
csae8092/de_MRP_NER | csae8092 | 2021-09-23T13:46:35Z | 8 | 0 | spacy | [
"spacy",
"token-classification",
"de",
"license:cc-by-4.0",
"model-index",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
tags:
- spacy
- token-classification
language:
- de
license: cc-by-4.0
model-index:
- name: de_MRP_NER
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.905255631
- name: NER Recall
type: recall
value: 0.8568527919
- name: NER F Score
type: f_score
value: 0.8803894298
---
NER Model for 'Ministerratsprotokolle'
| Feature | Description |
| --- | --- |
| **Name** | `de_MRP_NER` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.1.0,<3.2.0` |
| **Default Pipeline** | `tok2vec`, `ner` |
| **Components** | `tok2vec`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | `cc-by` |
| **Author** | [Peter Andorfer]() |
### Label Scheme
<details>
<summary>View label scheme (4 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `GPE`, `LOC`, `ORG`, `PER` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 88.04 |
| `ENTS_P` | 90.53 |
| `ENTS_R` | 85.69 |
| `TOK2VEC_LOSS` | 40077.56 |
| `NER_LOSS` | 77727.57 |
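A short usage sketch (not part of the original card); the wheel URL below is an assumption based on how `spacy-huggingface-hub` usually packages pipelines:
```python
import spacy

# Assumes the packaged pipeline has been installed first, e.g.:
# pip install "https://huggingface.co/csae8092/de_MRP_NER/resolve/main/de_MRP_NER-any-py3-none-any.whl"
nlp = spacy.load("de_MRP_NER")
doc = nlp("Der Ministerrat tagte in Wien unter dem Vorsitz von Graf Taaffe.")
for ent in doc.ents:
    print(ent.text, ent.label_)  # labels: GPE, LOC, ORG, PER
```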
|
colorfulscoop/bert-base-ja | colorfulscoop | 2021-09-23T13:46:05Z | 18 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"pretraining",
"fill-mask",
"ja",
"dataset:wikipedia",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
language: ja
datasets: wikipedia
pipeline_tag: fill-mask
widget:
- text: 得意な科目は[MASK]です。
license: cc-by-sa-4.0
---
# BERT base Japanese model
This repository contains a BERT base model trained on Japanese Wikipedia dataset.
## Training data
[Japanese Wikipedia](https://ja.wikipedia.org/wiki/Wikipedia:データベースダウンロード) dataset as of June 20, 2021 which is released under [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/) is used for training.
The dataset is split into three subsets - train, valid and test. Both the tokenizer and the model are trained with the train split.
## Model description
The model architecture is the same as BERT base model (hidden_size: 768, num_hidden_layers: 12, num_attention_heads: 12, max_position_embeddings: 512) except for a vocabulary size.
The vocabulary size is set to 32,000 instead of an original size of 30,522.
For the model, `transformers.BertForPreTraining` is used.
## Tokenizer description
[SentencePiece](https://github.com/google/sentencepiece) tokenizer is used as a tokenizer for this model.
While training, the tokenizer model was trained with 1,000,000 samples which were extracted from the train split.
The vocabulary size is set to 32,000. The `add_dummy_prefix` option is set to `True` because words are not separated by whitespace in Japanese.
After training, the model is imported to `transformers.DebertaV2Tokenizer` because it supports SentencePiece models and its behavior is consistent when `use_fast` option is set to `True` or `False`.
**Note:**
The meaning of "consistent" here is as follows.
For example, AlbertTokenizer provides AlbertTokenizer and AlbertTokenizerFast, and the fast version is used by default. However, the two tokenizers behave differently, and the behavior this model expects is that of the non-fast version.
Although passing the `use_fast=False` option to AutoTokenizer or pipeline forces the non-fast tokenizer, this option cannot be specified in config.json or the model card.
Therefore, unexpected behavior happens when using the Inference API. To avoid this kind of problem, `transformers.DebertaV2Tokenizer` is used in this model.
## Training
Training details are as follows.
* gradient update is every 256 samples (batch size: 8, accumulate_grad_batches: 32)
* gradient clip norm is 1.0
* Learning rate starts from 0 and linearly increased to 0.0001 in the first 10,000 steps
* The training set contains around 20M samples; because 80k * 256 ≈ 20M, one epoch corresponds to around 80k steps.
Training was conducted on Ubuntu 18.04.5 LTS with one RTX 2080 Ti.
The training continued until the validation loss got worse. In total, the number of training steps was around 214k.
The test set loss was 2.80.
Training code is available in [a GitHub repository](https://github.com/colorfulscoop/bert-ja).
## Usage
First, install the dependencies.
```sh
$ pip install torch==1.8.0 transformers==4.8.2 sentencepiece==0.1.95
```
Then use `transformers.pipeline` to try mask fill task.
```sh
>>> import transformers
>>> pipeline = transformers.pipeline("fill-mask", "colorfulscoop/bert-base-ja", revision="v1.0")
>>> pipeline("専門として[MASK]を専攻しています")
[{'sequence': '専門として工学を専攻しています', 'score': 0.03630176931619644, 'token': 3988, 'token_str': '工学'}, {'sequence': '専門として政治学を専攻しています', 'score': 0.03547220677137375, 'token': 22307, 'token_str': '政治学'}, {'sequence': '専門として教育を専攻しています', 'score': 0.03162326663732529, 'token': 414, 'token_str': '教育'}, {'sequence': '専門として経済学を専攻しています', 'score': 0.026036914438009262, 'token': 6814, 'token_str': '経済学'}, {'sequence': '専門として法学を専攻しています', 'score': 0.02561848610639572, 'token': 10810, 'token_str': '法学'}]
```
Note: specifying a `revision` option is recommended to keep reproducibility when downloading a model via `transformers.pipeline` or `transformers.AutoModel.from_pretrained` .
## License
Copyright (c) 2021 Colorful Scoop
All the models included in this repository are licensed under [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/).
**Disclaimer:** The model potentially has possibility that it generates similar texts in the training data, texts not to be true, or biased texts. Use of the model is at your sole risk. Colorful Scoop makes no warranty or guarantee of any outputs from the model. Colorful Scoop is not liable for any trouble, loss, or damage arising from the model output.
---
This model utilizes the following data as training data
* **Name:** ウィキペディア (Wikipedia): フリー百科事典
* **Credit:** https://ja.wikipedia.org/
* **License:** [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/)
* **Link:** https://ja.wikipedia.org/
|
tohoku-nlp/bert-large-japanese | tohoku-nlp | 2021-09-23T13:45:41Z | 1,246 | 9 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ja",
"dataset:wikipedia",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
language: ja
license: cc-by-sa-4.0
datasets:
- wikipedia
widget:
- text: 東北大学で[MASK]の研究をしています。
---
# BERT large Japanese (unidic-lite with whole word masking, jawiki-20200831)
This is a [BERT](https://github.com/google-research/bert) model pretrained on texts in the Japanese language.
This version of the model processes input texts with word-level tokenization based on the Unidic 2.1.2 dictionary (available in [unidic-lite](https://pypi.org/project/unidic-lite/) package), followed by the WordPiece subword tokenization.
Additionally, the model is trained with the whole word masking enabled for the masked language modeling (MLM) objective.
The codes for the pretraining are available at [cl-tohoku/bert-japanese](https://github.com/cl-tohoku/bert-japanese/tree/v2.0).
## Model architecture
The model architecture is the same as the original BERT large model; 24 layers, 1024 dimensions of hidden states, and 16 attention heads.
## Training Data
The models are trained on the Japanese version of Wikipedia.
The training corpus is generated from the Wikipedia Cirrussearch dump file as of August 31, 2020.
The generated corpus files are 4.0GB in total, containing approximately 30M sentences.
We used the [MeCab](https://taku910.github.io/mecab/) morphological parser with [mecab-ipadic-NEologd](https://github.com/neologd/mecab-ipadic-neologd) dictionary to split texts into sentences.
## Tokenization
The texts are first tokenized by MeCab with the Unidic 2.1.2 dictionary and then split into subwords by the WordPiece algorithm.
The vocabulary size is 32768.
We used [`fugashi`](https://github.com/polm/fugashi) and [`unidic-lite`](https://github.com/polm/unidic-lite) packages for the tokenization.
## Training
The models are trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps.
For training of the MLM (masked language modeling) objective, we introduced whole word masking in which all of the subword tokens corresponding to a single word (tokenized by MeCab) are masked at once.
For training of each model, we used a v3-8 instance of Cloud TPUs provided by [TensorFlow Research Cloud program](https://www.tensorflow.org/tfrc/).
The training took about 5 days to finish.
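A minimal fill-mask sketch (added for illustration); it assumes the `fugashi` and `unidic-lite` packages mentioned above are installed, and uses this repository's id:
```python
from transformers import pipeline

# Repository id assumed from this card; the tokenizer requires fugashi + unidic-lite.
fill_mask = pipeline("fill-mask", model="tohoku-nlp/bert-large-japanese")
print(fill_mask("東北大学で[MASK]の研究をしています。")[:3])
```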
## Licenses
The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/).
## Acknowledgments
This model is trained with Cloud TPUs provided by [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc/) program.
|
tohoku-nlp/bert-base-japanese-char-v2 | tohoku-nlp | 2021-09-23T13:45:24Z | 136,559 | 6 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ja",
"dataset:wikipedia",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
language: ja
license: cc-by-sa-4.0
datasets:
- wikipedia
widget:
- text: 東北大学で[MASK]の研究をしています。
---
# BERT base Japanese (character-level tokenization with whole word masking, jawiki-20200831)
This is a [BERT](https://github.com/google-research/bert) model pretrained on texts in the Japanese language.
This version of the model processes input texts with word-level tokenization based on the Unidic 2.1.2 dictionary (available in [unidic-lite](https://pypi.org/project/unidic-lite/) package), followed by character-level tokenization.
Additionally, the model is trained with the whole word masking enabled for the masked language modeling (MLM) objective.
The codes for the pretraining are available at [cl-tohoku/bert-japanese](https://github.com/cl-tohoku/bert-japanese/tree/v2.0).
## Model architecture
The model architecture is the same as the original BERT base model; 12 layers, 768 dimensions of hidden states, and 12 attention heads.
## Training Data
The models are trained on the Japanese version of Wikipedia.
The training corpus is generated from the Wikipedia Cirrussearch dump file as of August 31, 2020.
The generated corpus files are 4.0GB in total, containing approximately 30M sentences.
We used the [MeCab](https://taku910.github.io/mecab/) morphological parser with [mecab-ipadic-NEologd](https://github.com/neologd/mecab-ipadic-neologd) dictionary to split texts into sentences.
## Tokenization
The texts are first tokenized by MeCab with the Unidic 2.1.2 dictionary and then split into characters.
The vocabulary size is 6144.
We used [`fugashi`](https://github.com/polm/fugashi) and [`unidic-lite`](https://github.com/polm/unidic-lite) packages for the tokenization.
## Training
The models are trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps.
For training of the MLM (masked language modeling) objective, we introduced whole word masking in which all of the subword tokens corresponding to a single word (tokenized by MeCab) are masked at once.
For training of each model, we used a v3-8 instance of Cloud TPUs provided by [TensorFlow Research Cloud program](https://www.tensorflow.org/tfrc/).
The training took about 5 days to finish.
## Licenses
The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/).
## Acknowledgments
This model is trained with Cloud TPUs provided by [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc/) program.
|
arijitx/wav2vec2-large-xlsr-bengali | arijitx | 2021-09-23T13:07:14Z | 118 | 6 | transformers | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"bn",
"audio",
"speech",
"dataset:OpenSLR",
"license:cc-by-sa-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language: Bengali
datasets:
- OpenSLR
metrics:
- wer
tags:
- bn
- audio
- automatic-speech-recognition
- speech
license: cc-by-sa-4.0
model-index:
- name: XLSR Wav2Vec2 Bengali by Arijit
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: OpenSLR
type: OpenSLR
args: ben
metrics:
- name: Test WER
type: wer
value: 32.45
---
# Wav2Vec2-Large-XLSR-Bengali
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Bengali using a subset of 40,000 utterances from the [Bengali ASR training data set containing ~196K utterances](https://www.openslr.org/53/). WER was tested on ~4,200 utterances held out from training.
When using this model, make sure that your speech input is sampled at 16kHz.
The training script can be found at: train.py
Data Prep Notebook : https://colab.research.google.com/drive/1JMlZPU-DrezXjZ2t7sOVqn7CJjZhdK2q?usp=sharing
Inference Notebook : https://colab.research.google.com/drive/1uKC2cK9JfUPDTUHbrNdOYqKtNozhxqgZ?usp=sharing
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
processor = Wav2Vec2Processor.from_pretrained("arijitx/wav2vec2-large-xlsr-bengali")
model = Wav2Vec2ForCTC.from_pretrained("arijitx/wav2vec2-large-xlsr-bengali")
# model = model.to("cuda")
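# NOTE: set TEST_AUDIO_SR to the sample rate of your input file (e.g. TEST_AUDIO_SR = 44_100)
# before building the resampler below.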
resampler = torchaudio.transforms.Resample(TEST_AUDIO_SR, 16_000)
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch)
speech = resampler(speech_array).squeeze().numpy()
return speech
speech_array = speech_file_to_array_fn("test_file.wav")
inputs = processor(speech_array, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
preds = processor.batch_decode(predicted_ids)[0]
print(preds.replace("[PAD]",""))
```
**Test Result**: WER on ~4,200 utterances: 32.45 %
|
huggingartists/muse | huggingartists | 2021-09-23T11:41:30Z | 12 | 3 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/muse",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- huggingartists/muse
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/26f575585ec649d88d09a1e402bb936b.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Muse</div>
<a href="https://genius.com/artists/muse">
<div style="text-align: center; font-size: 14px;">@muse</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Muse.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/muse).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/muse")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/3w58rwod/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Muse's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/3j03atcr) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/3j03atcr/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/muse')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/muse")
model = AutoModelWithLMHead.from_pretrained("huggingartists/muse")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
hiiamsid/BETO_es_binary_classification | hiiamsid | 2021-09-23T11:16:37Z | 7 | 2 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"es",
"ticket classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
language:
- es
tags:
- es
- ticket classification
license: "apache-2.0"
datasets:
- self made to classify whether text is related to technology or not.
metrics:
- fscore
- accuracy
- precision
- recall
---
# BETO(cased)
This model was built using PyTorch.
## Model description
Input for the model: any Spanish text
Output for the model: sentiment label (0 - negative, 1 - positive, i.e. technology related)
#### How to use
Here is how to use this model to get the features of a given text in *PyTorch*:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("hiiamsid/BETO_es_binary_classification")
model = AutoModelForSequenceClassification.from_pretrained("hiiamsid/BETO_es_binary_classification")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
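The snippet above stops at the raw model output; a hedged continuation (assuming the two-label head described above) that maps the logits to the 0/1 sentiment label:
```python
import torch

probs = torch.softmax(output.logits, dim=-1)
label = int(torch.argmax(probs, dim=-1))  # 0 - negative, 1 - positive (technology related)
print(label, probs.tolist())
```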
## Training procedure
I trained on the dataset by fine-tuning [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased).
|
vishalz/paraphrase_model | vishalz | 2021-09-23T10:00:25Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | Pegasus paraphrase model,
built using <a href="https://huggingface.co/tuner007/pegasus_paraphrase" target="_blank">tuner007/pegasus_paraphrase</a>.
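A hedged usage sketch, assuming this checkpoint keeps the generation setup of the tuner007/pegasus_paraphrase model it is based on:
```python
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_name = "vishalz/paraphrase_model"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

text = "The ultimate test of your knowledge is your capacity to convey it to another."
batch = tokenizer([text], truncation=True, padding="longest", max_length=60, return_tensors="pt")
outputs = model.generate(**batch, max_length=60, num_beams=10, num_return_sequences=5)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```
|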
databuzzword/JointBERT-atis | databuzzword | 2021-09-22T20:57:03Z | 8 | 2 | transformers | [
"transformers",
"pytorch",
"bert",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | https://github.com/monologg/JointBERT |
flax-community/RoBERTa-large-finnish | flax-community | 2021-09-22T17:31:14Z | 11 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"finnish",
"fi",
"dataset:mc4",
"arxiv:1907.11692",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
language:
- fi
license: apache-2.0
tags:
- finnish
- roberta
datasets:
- mc4
widget:
- text: "Moikka olen <mask> kielimalli."
---
# NOTE: We have trained newer and better Finnish RoBERTa large model which can be found from different repository: [https://huggingface.co/Finnish-NLP/roberta-large-finnish](https://huggingface.co/Finnish-NLP/roberta-large-finnish). Our future Finnish models will be available at the [Finnish-NLP](https://huggingface.co/Finnish-NLP) Hugging Face organization
# RoBERTa large model for Finnish
Pretrained model on Finnish language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1907.11692) and first released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/roberta). This model is case-sensitive: it
makes a difference between finnish and Finnish.
## Model description
RoBERTa is a transformers model pretrained on a large corpus of Finnish data in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the Finnish language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the RoBERTa model as inputs.
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='flax-community/RoBERTa-large-finnish')
>>> unmasker("Moikka olen <mask> kielimalli.")
[{'sequence': 'Moikka olen uusi kielimalli.',
'score': 0.05129234120249748,
'token': 1825,
'token_str': ' uusi'},
{'sequence': 'Moikka olen toinen kielimalli.',
'score': 0.03112379088997841,
'token': 2194,
'token_str': ' toinen'},
{'sequence': 'Moikka olen myös kielimalli.',
'score': 0.025534993037581444,
'token': 491,
'token_str': ' myös'},
{'sequence': 'Moikka olen ensimmäinen kielimalli.',
'score': 0.020146571099758148,
'token': 2832,
'token_str': ' ensimmäinen'},
{'sequence': 'Moikka olen vapaa kielimalli.',
'score': 0.018089469522237778,
'token': 2257,
'token_str': ' vapaa'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import RobertaTokenizer, RobertaModel
tokenizer = RobertaTokenizer.from_pretrained('flax-community/RoBERTa-large-finnish')
model = RobertaModel.from_pretrained('flax-community/RoBERTa-large-finnish')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import RobertaTokenizer, TFRobertaModel
tokenizer = RobertaTokenizer.from_pretrained('flax-community/RoBERTa-large-finnish')
model = TFRobertaModel.from_pretrained('flax-community/RoBERTa-large-finnish', from_pt=True)
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from
neutral. Therefore, the model can have biased predictions.
## Training data
This Finnish RoBERTa model was pretrained on the combination of two datasets:
- [mc4](https://huggingface.co/datasets/mc4), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset
- [Yle Finnish News Archive](http://urn.fi/urn:nbn:fi:lb-2017070501)
Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 51GB of text.
## Training procedure
### Preprocessing
The texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50265. The inputs of
the model take pieces of 512 contiguous token that may span over documents. The beginning of a new document is marked
with `<s>` and the end of one by `</s>`
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `<mask>`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
Contrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed).
### Pretraining
The model was trained on TPUv3-8 VM, sponsored by the Hugging Face JAX/Flax community week event, for 2 epochs with a sequence length of 128 and continuing for one more epoch with a sequence length of 512. The optimizer used is Adafactor with a learning rate of 2e-4, \\(\beta_{1} = 0.9\\), \\(\beta_{2} = 0.98\\) and \\(\epsilon = 1e-6\\), learning rate warmup for 1500 steps and linear decay of the learning rate after.
## Evaluation results
Evaluation was done by fine-tuning the model on downstream text classification task with two different labeled datasets: [Yle News](https://github.com/spyysalo/yle-corpus) and [Eduskunta](https://github.com/aajanki/eduskunta-vkk). Yle News classification fine-tuning was done with two different sequence lengths: 128 and 512 but Eduskunta only with 128 sequence length.
When fine-tuned on those datasets, this model (the first row of the table) achieves the following accuracy results compared to the [FinBERT (Finnish BERT)](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1) and to our newer [Finnish RoBERTa-large](https://huggingface.co/Finnish-NLP/roberta-large-finnish) trained with larger dataset:
| | Average | Yle News 128 length | Yle News 512 length | Eduskunta 128 length |
|----------------------------------------|----------|---------------------|---------------------|----------------------|
|flax-community/RoBERTa-large-finnish |87.72 |94.42 |95.06 |73.67 |
|Finnish-NLP/roberta-large-finnish |88.02 |94.53 |95.23 |74.30 |
|TurkuNLP/bert-base-finnish-cased-v1 |**88.82** |**94.90** |**95.49** |**76.07** |
To conclude, this model slightly loses to our newer [Finnish RoBERTa-large](https://huggingface.co/Finnish-NLP/roberta-large-finnish) model trained with larger dataset and also slightly loses to the [FinBERT (Finnish BERT)](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1) model.
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
- Tommi Vehviläinen [Hugging Face profile](https://huggingface.co/Tommi)
Feel free to contact us for more details 🤗 |
databuzzword/JointBERT-snips | databuzzword | 2021-09-22T14:02:14Z | 6 | 2 | transformers | [
"transformers",
"pytorch",
"bert",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | https://github.com/monologg/JointBERT |
imthanhlv/t5vi | imthanhlv | 2021-09-22T09:57:47Z | 13 | 1 | transformers | [
"transformers",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | # T5 Vietnamese pretrain on news corpus
|
castorini/ance-dpr-context-multi | castorini | 2021-09-22T09:41:18Z | 110 | 2 | transformers | [
"transformers",
"pytorch",
"dpr",
"arxiv:2007.00808",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | This model is converted from the original ANCE [repo](https://github.com/microsoft/ANCE) and fitted into Pyserini:
> Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, Arnold Overwijk. [Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval](https://arxiv.org/pdf/2007.00808.pdf)
For more details on how to use it, check our experiments in [Pyserini](https://github.com/castorini/pyserini/blob/master/docs/experiments-ance.md)
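A hedged sketch of encoding a passage directly with the Transformers DPR classes (the Pyserini experiments linked above remain the reference usage; whether the repository ships its own tokenizer files is an assumption here):
```python
from transformers import DPRContextEncoder, DPRContextEncoderTokenizer

model_name = "castorini/ance-dpr-context-multi"
tokenizer = DPRContextEncoderTokenizer.from_pretrained(model_name)  # assumed to be bundled with the repo
model = DPRContextEncoder.from_pretrained(model_name)

inputs = tokenizer("Dense retrieval encodes passages into fixed-size vectors.", return_tensors="pt")
embedding = model(**inputs).pooler_output  # (1, hidden_size) passage embedding
print(embedding.shape)
```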
|
sadakmed/distiluse-base-multilingual-cased-v1 | sadakmed | 2021-09-22T09:37:18Z | 8 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"DistilBert",
"Universal Sentence Encoder",
"sentence-embeddings",
"sentence-similarity",
"multilingual",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-03-02T23:29:05Z | ---
language: multilingual
tags:
- DistilBert
- Universal Sentence Encoder
- sentence-embeddings
- sentence-transformers
- sentence-similarity
license: apache-2.0
---
Knowledge distilled version of multilingual Universal Sentence Encoder. Supports 15 languages: Arabic, Chinese, Dutch, English, French, German, Italian, Korean, Polish, Portuguese, Russian, Spanish, Turkish.
This model is saved from 'distiluse-base-multilingual-cased-v1' in `sentence-transformers`, so that it can be used directly from `transformers`.
Note that Sentence-Transformers adds two additional layers (Pooling, Linear) that cannot be saved in any predefined Hugging Face model class.
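A minimal sketch of producing sentence embeddings directly with `transformers`; because the Pooling and Linear layers mentioned above are not part of this checkpoint, plain mean pooling is used here as an approximation (an assumption, not the exact Sentence-Transformers pipeline):
```python
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "sadakmed/distiluse-base-multilingual-cased-v1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

sentences = ["This is an example sentence.", "Ceci est une phrase d'exemple."]
encoded = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    token_embeddings = model(**encoded).last_hidden_state
mask = encoded["attention_mask"].unsqueeze(-1).float()
sentence_embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)  # mean pooling
```
|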
ozcangundes/mt5-small-turkish-summarization | ozcangundes | 2021-09-22T09:31:27Z | 299 | 19 | transformers | [
"transformers",
"pytorch",
"jax",
"mt5",
"text2text-generation",
"summarization",
"tr",
"dataset:MLSUM",
"arxiv:2004.14900",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2022-03-02T23:29:05Z | ---
language: tr
datasets:
- MLSUM
pipeline_tag: summarization
license: mit
---
# mT5-small based Turkish Summarization System
[Google's Multilingual T5-small](https://github.com/google-research/multilingual-t5) is fine-tuned on [MLSUM Turkish news dataset](https://github.com/recitalAI/MLSUM) for **Summarization** downstream task by using Pytorch Lightning.⚡
The mT5 small model has 300 million parameters and a size of about 1.2GB, so fine-tuning it takes a significant amount of time. The model was trained for 10 epochs with a batch size of 8 and a learning rate of 10e-4, which took almost 4 hours. The maximum news length is kept at 784 and the maximum summary length at 64.
**Important Note**: mT5 was only pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual)
excluding any supervised training. Therefore, the mT5 model has to be fine-tuned before it is usable on a downstream task.
## Dataset
MLSUM dataset has more than 250K Turkish news with their related summaries. Since the mT5 model size and vocabulary is so large, 20K data is used for training and 4K data is used for validation. For more information about the dataset, please read this [great paper](https://arxiv.org/abs/2004.14900).
## Usage 🚀
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("ozcangundes/mt5-small-turkish-summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("ozcangundes/mt5-small-turkish-summarization")
def generate_summary(main_news):
source_encoding=tokenizer(
main_news,
max_length=784,
padding="max_length",
truncation=True,
return_attention_mask=True,
add_special_tokens=True,
return_tensors="pt")
generated_ids=model.generate(
input_ids=source_encoding["input_ids"],
attention_mask=source_encoding["attention_mask"],
num_beams=2,
max_length=120,
repetition_penalty=2.5,
length_penalty=2.0,
early_stopping=True,
use_cache=True
)
preds=[tokenizer.decode(gen_id, skip_special_tokens=True, clean_up_tokenization_spaces=True)
for gen_id in generated_ids]
return "".join(preds)
```
### Example 1
```python
main_news= "Final etabının üçüncü karşılaşması 29 Nisan Pazartesi günü saat 18.00 ’ de Burhan Felek
Voleybol Salonu ’ nda oynanacak . Sezonu FIVB Kulüpler Dünya Şampiyonluğu ile açan ve CEV
Avrupa Şampiyonlar Ligi'ni üçüncü olarak tamamlayan VakıfBank Kadın Voleybol Takımı ,
Vestel Venus Sultanlar Ligi final serisi ikinci maçında Eczacıbaşı VitrA'yı VakıfBank
Spor Sarayı'nda 16-25 , 25-10 , 25-18 ve 25-17'lik setlerle 3-1 mağlup ederek seride durumu
1-1 ' e getirdi . İlk setini 25-16 kaybettiği karşılaşmanın ikinci setinde etkili servisler
kullanan sarı-siyahlılar , teknik molasına 12-5 önde girdiği seti 25-10 almayı başardı .
Etkili servis performansını üçüncü sette de sürdüren VakıfBank , teknik molasına 12-5 önde
girdiği seti 25-18 alarak , karşılaşmada 2-1 öne geçti . Dördüncü sette rakibinin geri dönüşüne
izin vermeyen VakıfBank , seti 25-17 , maçı da 3-1 kazanarak seride durumu eşitledi."
generate_summary(main_news)
#original summary -> "Vestel Venus Sultanlar Ligi final etabı ikinci karşılaşmasında VakıfBank
kendi sahasında Eczacıbaşı VitrA'yı 3-1 mağlup etti ve seride durumu 1-1 ' e getirdi ."
#output -> "CEV Avrupa Şampiyonlar Ligi'ni üçüncü olarak tamamlayan VakıfBank Kadın Voleybol Takımı,
Vestel Venus Sultanlar Ligi final serisi ikinci maçında Eczacıbaşı VitrA'yı 3-1 mağlup
ederek seride durumu 1-1'e getirdi."
```
### Example 2
```python
main_news="2023'te yerli tank motoru : Bir taraftan da tankın motorunu yerlileştirmeye çalıştıklarını
ifade eden Öztürk , şu değerlendirmelerde bulundu : `` Bin 500 beygirlik , şanzımanıyla beraber
motoru yerlileştirmeye çalışıyoruz . Bu da bir aksilik çıkmazsa ilk tankımızın üzerine
2023'te koyacağız . Bundan sonra hiçbir ülkeye bağımlılığımız kalmadan bu araçları üretmeye
devam edeceğiz . Sorumluluğumuzun ağır olduğunu biliyoruz . Ülkemize hizmet etmeye çalışıyoruz .
Bunu daha da ileriye götürmek için elimizden gelen çabayı sarf ediyoruz . Ama bu tek başınıza
yapılan bir operasyon değil . Türkiye'deki yerli firmalarla beraber ortaklaşa bu işi yürütmeye çalışıyoruz."
generate_summary(main_news)
#output -> "TÜRKİYE'de bir taraftan da tankın motorunu yerlileştirmeye çalıştıklarını belirten Öztürk,
`` Bin 500 beygirlik, şanzımanıyla beraber motoru yerlileştirmeye çalışıyoruz. Bu da bir
aksilik çıkmazsa ilk tankımızın üzerine 2023'te koyacağız.'' dedi."
```
Created by Özcan Gündeş ✌️
---
Twitter: <a href="https://twitter.com/ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/twitter.svg" alt="ozcangundes" height="30" width="30" /></a>
Linkedin: <a href="https://www.linkedin.com/in/%C3%B6zcan-g%C3%BCnde%C5%9F-7693055b/" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/linkedin.svg" alt="13198517" height="30" width="30" /></a>
Medium: <a href="https://medium.com/@ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/medium.svg" alt="@ozcangundes" height="30" width="30" /></a>
Github: <a href="https://github.com/ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/github.svg" alt="@ozcangundes" height="30" width="30" /></a>
|
ozcangundes/T5-base-for-BioQA | ozcangundes | 2021-09-22T09:31:21Z | 25 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"question-answering",
"dataset:bioASQ",
"arxiv:1910.10683",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
language: english
datasets:
- bioASQ
pipeline_tag: question-answering
license: mit
---
# T5-base model fine-tuned on BioASQ for Biological Question Answering 👩⚕️👨⚕️
[Google's T5-base](https://huggingface.co/t5-base) fine-tuned on [BioASQ](https://github.com/dmis-lab/biobert) (secondary task) for **Q&A** downstream task.
## Details of T5
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
Pretraining Dataset: [C4](https://huggingface.co/datasets/c4)
Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)
Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*
## Dependencies
transformers == 4.3.3
sentencepiece >= 0.1.94
## Usage 🚀
```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer
tokenizer = T5Tokenizer.from_pretrained("ozcangundes/T5-base-for-BioQA")
model = T5ForConditionalGeneration.from_pretrained("ozcangundes/T5-base-for-BioQA")
def get_answer(question,context):
source_encoding=tokenizer(
question,
context,
max_length=512,
padding="max_length",
truncation="only_second",
return_attention_mask=True,
add_special_tokens=True,
return_tensors="pt")
generated_ids=model.generate(
input_ids=source_encoding["input_ids"],
attention_mask=source_encoding["attention_mask"])
preds=[tokenizer.decode(gen_id, skip_special_tokens=True, clean_up_tokenization_spaces=True) for gen_id in generated_ids]
return "".join(preds)
```
### Example 1
```python
question={
"context":"Effect of food on the pharmacokinetics of empagliflozin, a sodium glucose cotransporter 2 (SGLT2) inhibitor, and assessment of dose proportionality in healthy volunteers. OBJECTIVES: Empagliflozin is an orally available, potent and highly selective inhibitor of the sodium glucose cotransporter 2 (SGLT2). This study was undertaken to investigate the effect of food on the pharmacokinetics of 25 mg empagliflozin and to assess dose proportionality between 10 mg and 25 mg empagliflozin under fasted conditions. MATERIALS AND METHODS: In this open-label, 3-way, cross-over study, 18 healthy volunteers received 3 single doses of empagliflozin in a randomized sequence (25 mg empagliflozin under fasted conditions, 25 mg empagliflozin after a high-fat, high-calorie breakfast and 10 mg empagliflozin under fasted conditions), each separated by a washout period of at least 7 days. Serial plasma samples were collected at selected time points over a period of 72 hours. RESULTS: Administration with food had no clinically relevant effect on the area under the plasma concentration-time curve (AUC0-∞) of empagliflozin (geometric mean ratio (GMR): 84.04, 90% confidence interval (CI): 80.86 - 87.34). The decrease observed in the maximum plasma concentrations (Cmax) of empagliflozin (GMR: 63.22, 90% CI: 56.74 - 70.44) when administered with food was not considered clinically meaningful. The increases in AUC0-∞ and Cmax for 10 mg vs. 25 mg empagliflozin administered under fasting conditions were roughly dose-proportional, as demonstrated by the slope β of the regression lines being slightly less than 1 (slope β for AUC0-∞: 0.94, 95% CI: 0.90 - 0.97; slope β for Cmax: 0.91, 95% CI: 0.80 - 1.01). Empagliflozin was well tolerated under fed and fasting conditions. CONCLUSIONS: The results support administration of empagliflozin tablets independently of food. Increases in empagliflozin exposure under fasting conditions were roughly dose-proportional between 10 mg and 25 mg empagliflozin.",
"question":"Which protein does empagliflozin inhibit?"
}
get_answer(question["question"],question["context"])
```
> SGLT2
### Example 2
```python
question2={
"context":"Dermatitis herpetiformis: jejunal findings and skin response to gluten free diet. Fifty seven children with dermatitis herpetiformis, 18 from Finland and 39 from Hungary, were studied. Diagnostic criteria included the finding of granular IgA deposits in the skin of all patients. The mean age at onset of the rash was 7 X 2 years and favoured sites were the elbows, knees, and buttocks. Symptoms suggesting small intestinal disease were rare but in 35 (61%) of the children subtotal villous atrophy and in 16 (28%) partial villous atrophy were found on jejunal biopsy. Eighteen children underwent a second biopsy after a mean of 21 months on a gluten free diet; villous height was found to be increased and the intraepithelial lymphocyte count decreased in all these patients. Gluten challenge caused a reversal in the two children who underwent a third biopsy. The effect of the gluten free diet on the rash was examined in Finnish children by observing the daily requirements of dapsone, a drug used to control the rash at the beginning of the diet. Eight (67%) of the 12 children were able to stop taking dapsone after a mean of 11 months on the diet and all three patients treated with diet alone became asymptomatic after three to 6 months on the diet. These results confirm that most children with dermatitis herpetiformis have jejunal villous atrophy, though they rarely have gastrointestinal symptoms. The central role of gluten in childhood dermatitis herpetiformis is evidenced by the fact that a gluten free diet helps the damaged jejunal mucosa to recover and controls the rash even in those children who do not have an abnormal jejunal biopsy.",
"question":"What is the typical rash associated with gluten?"
}
get_answer(question2["question"],question2["context"])
```
> dermatitis herpetiformis
Created by Özcan Gündeş ✌️
---
Twitter: <a href="https://twitter.com/ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/twitter.svg" alt="ozcangundes" height="30" width="30" /></a>
Linkedin: <a href="https://www.linkedin.com/in/%C3%B6zcan-g%C3%BCnde%C5%9F-7693055b/" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/linkedin.svg" alt="13198517" height="30" width="30" /></a>
Medium: <a href="https://medium.com/@ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/medium.svg" alt="@ozcangundes" height="30" width="30" /></a>
Github: <a href="https://github.com/ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/github.svg" alt="@ozcangundes" height="30" width="30" /></a>
|