modelId (string, 4-112) | sha (string, 40) | lastModified (string, 24) | tags (list) | pipeline_tag (string, 29 classes) | private (bool) | author (string, 2-38, nullable) | config (null) | id (string, 4-112) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
huggingtweets/bowserbot2 | b91221b87a5a6de190fba7029721e0465e4dd793 | 2021-05-21T20:57:44.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/bowserbot2 | 8 | null | transformers | 13,100 | ---
language: en
thumbnail: https://www.huggingtweets.com/bowserbot2/1617402800811/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1345789137035649025/l4ReFavz_400x400.png')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">bowserbot 🤖 AI Bot </div>
<div style="font-size: 15px">@bowserbot2 bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@bowserbot2's tweets](https://twitter.com/bowserbot2).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 2651 |
| Retweets | 2 |
| Short tweets | 20 |
| Tweets kept | 2629 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/151rlno6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
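Tweets kept is simply tweets downloaded minus retweets and short tweets. A minimal sketch of that kind of filter, assuming each downloaded tweet is a dict with hypothetical `text` and `is_retweet` fields and an illustrative three-word cutoff (not the project's actual preprocessing code):
```python
# Illustrative filter: split downloaded tweets into the categories reported above.
def filter_tweets(tweets, min_words=3):
    kept, retweets, short = [], 0, 0
    for tweet in tweets:
        if tweet["is_retweet"]:
            retweets += 1                       # retweets are counted but not kept
        elif len(tweet["text"].split()) < min_words:
            short += 1                          # too short to be useful for training
        else:
            kept.append(tweet["text"])          # goes into the fine-tuning corpus
    return kept, retweets, short
```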
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bowserbot2's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/15w12pqd) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/15w12pqd/artifacts) is logged and versioned.
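For a concrete picture of what this fine-tuning step looks like, here is a minimal sketch using the Hugging Face `Trainer` API. The file name, block size, and hyperparameters are illustrative assumptions (the values actually used are recorded in the W&B run linked above), and this is not the project's training script:
```python
# Minimal fine-tuning sketch: continue training GPT-2 on a plain-text file of tweets.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, TextDataset,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# "tweets.txt" is a hypothetical file with one preprocessed tweet per line.
dataset = TextDataset(tokenizer=tokenizer, file_path="tweets.txt", block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)  # causal LM

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="output",
                           num_train_epochs=4,
                           per_device_train_batch_size=8),
    data_collator=collator,
    train_dataset=dataset,
)
trainer.train()
```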
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/bowserbot2')
generator("My dream is", num_return_sequences=5)
```
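The pipeline accepts standard generation options; for example, you can fix a seed for reproducible sampling and cap the output length:
```python
from transformers import pipeline, set_seed

set_seed(42)  # make sampling reproducible
generator = pipeline('text-generation', model='huggingtweets/bowserbot2')
generator("My dream is", max_length=50, num_return_sequences=5)
```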
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/ddlcquotes | 22c0f300d5cd8dcab73087e243648829653afacb | 2021-05-22T00:56:45.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/ddlcquotes | 8 | null | transformers | 13,101 | ---
language: en
thumbnail: https://www.huggingtweets.com/ddlcquotes/1612815814568/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1166296863068360704/9Rbf-i7O_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">ddlc quote bot 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@ddlcquotes bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@ddlcquotes's tweets](https://twitter.com/ddlcquotes).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3203 |
| Retweets | 0 |
| Short tweets | 27 |
| Tweets kept | 3176 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3vugceit/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ddlcquotes's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1rh6mzov) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1rh6mzov/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ddlcquotes')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/fallexcy | 1f6a1a16e01e3c905736d536ab35135789b1521e | 2021-05-22T03:51:15.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/fallexcy | 8 | null | transformers | 13,102 | ---
language: en
thumbnail: https://www.huggingtweets.com/fallexcy/1614134311978/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1339271682679312391/_937loJu_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">α(lєх)αndrα 🤖 AI Bot </div>
<div style="font-size: 15px">@fallexcy bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@fallexcy's tweets](https://twitter.com/fallexcy).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 408 |
| Retweets | 48 |
| Short tweets | 21 |
| Tweets kept | 339 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/wda9s2r7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @fallexcy's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/10eje3u5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/10eje3u5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/fallexcy')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/feyerabender | aa3a2363d30fceb36487d7a97ed3aa0977953b4b | 2021-05-22T04:08:54.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/feyerabender | 8 | null | transformers | 13,103 | ---
language: en
thumbnail: https://www.huggingtweets.com/feyerabender/1616669524008/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1370161206158360579/_G9rCdzT_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Rory Dean ☭ 🤖 AI Bot </div>
<div style="font-size: 15px">@feyerabender bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@feyerabender's tweets](https://twitter.com/feyerabender).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3195 |
| Retweets | 722 |
| Short tweets | 363 |
| Tweets kept | 2110 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1cjspfal/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @feyerabender's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/17iujs5g) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/17iujs5g/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/feyerabender')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/fullbitchschol1 | 9ecdb597df6528947934b262106c7c823e903f26 | 2021-05-22T04:49:48.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/fullbitchschol1 | 8 | null | transformers | 13,104 | ---
language: en
thumbnail: https://www.huggingtweets.com/fullbitchschol1/1616889911749/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1272946288389050368/OtPFPpC7_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Fullbitchscholar 🤖 AI Bot </div>
<div style="font-size: 15px">@fullbitchschol1 bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@fullbitchschol1's tweets](https://twitter.com/fullbitchschol1).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3248 |
| Retweets | 20 |
| Short tweets | 224 |
| Tweets kept | 3004 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1em7u8my/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @fullbitchschol1's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2u9ua2kl) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2u9ua2kl/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/fullbitchschol1')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/girlshaped | b1622281a5f874fb0257fe302f1ea484be9e78b9 | 2021-05-22T05:33:03.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/girlshaped | 8 | null | transformers | 13,105 | ---
language: en
thumbnail: https://www.huggingtweets.com/girlshaped/1617757456002/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1251080403256926208/6-nJSYgZ_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Anomalous Girl 🤖 AI Bot </div>
<div style="font-size: 15px">@girlshaped bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@girlshaped's tweets](https://twitter.com/girlshaped).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 304 |
| Retweets | 115 |
| Short tweets | 19 |
| Tweets kept | 170 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/35c6178z/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @girlshaped's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2re3ffqt) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2re3ffqt/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/girlshaped')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/glamdemon2004 | f48ff0ebaa72ddf50e69d0ef7087fba670b1edbb | 2021-05-22T05:38:12.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/glamdemon2004 | 8 | null | transformers | 13,106 | ---
language: en
thumbnail: https://www.huggingtweets.com/glamdemon2004/1616682008766/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1355757309063008257/LSlS9j1B_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">elizabeth holmes’s fetus 🤖 AI Bot </div>
<div style="font-size: 15px">@glamdemon2004 bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@glamdemon2004's tweets](https://twitter.com/glamdemon2004).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3097 |
| Retweets | 550 |
| Short tweets | 345 |
| Tweets kept | 2202 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2v9xfsja/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @glamdemon2004's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1nyv7aua) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1nyv7aua/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/glamdemon2004')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/igorcarron | ad265c47885faa222ca418b5c1ad4030d9bd4f4c | 2021-05-22T07:50:23.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/igorcarron | 8 | null | transformers | 13,107 | ---
language: en
thumbnail: https://www.huggingtweets.com/igorcarron/1601975366019/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/52435623/igor_400x400.JPG')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Igor Carron 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@igorcarron bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@igorcarron's tweets](https://twitter.com/igorcarron).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3182 |
| Retweets | 2986 |
| Short tweets | 50 |
| Tweets kept | 146 |
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/2xrk7m5z/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @igorcarron's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/kfaaogij) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/kfaaogij/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/igorcarron')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/joebiden | 97e288685e97eae6fbcf0a3f833e27e423b6c250 | 2022-05-27T11:25:38.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/joebiden | 8 | null | transformers | 13,108 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1308769664240160770/AfgzWVE7_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Joe Biden</div>
<div style="text-align: center; font-size: 14px;">@joebiden</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Joe Biden.
| Data | Joe Biden |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 595 |
| Short tweets | 33 |
| Tweets kept | 2621 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1g8y6hlv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @joebiden's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/28xgrtgk) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/28xgrtgk/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/joebiden')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/jreosquare | 5a5acf34a6f744777f63624766f185db993884d5 | 2021-05-22T10:10:44.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/jreosquare | 8 | null | transformers | 13,109 | ---
language: en
thumbnail: https://www.huggingtweets.com/jreosquare/1614112116009/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1361817928115441667/OjKhZsFO_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">rigel #freebaguette 🤖 AI Bot </div>
<div style="font-size: 15px">@jreosquare bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@jreosquare's tweets](https://twitter.com/jreosquare).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3185 |
| Retweets | 345 |
| Short tweets | 608 |
| Tweets kept | 2232 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3sokv6uq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jreosquare's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1o3c73fh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1o3c73fh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jreosquare')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/lana_ray_dale | 0270d51c7841c45106f228e82b0f66a731b27caa | 2021-07-23T17:36:53.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/lana_ray_dale | 8 | null | transformers | 13,110 | ---
language: en
thumbnail: https://www.huggingtweets.com/lana_ray_dale/1627061772839/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/439125466340143105/TZaoVrUl_400x400.jpeg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">R A Y</div>
<div style="text-align: center; font-size: 14px;">@lana_ray_dale</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from R A Y.
| Data | R A Y |
| --- | --- |
| Tweets downloaded | 718 |
| Retweets | 56 |
| Short tweets | 90 |
| Tweets kept | 572 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/37ffw07m/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @lana_ray_dale's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/uxn80y7g) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/uxn80y7g/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/lana_ray_dale')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/locosherman2 | 3bfaefea8471b50a333e95c8454f34e4faa07573 | 2021-05-22T12:28:09.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/locosherman2 | 8 | null | transformers | 13,111 | ---
language: en
thumbnail: https://www.huggingtweets.com/locosherman2/1616654478302/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1328422822692093953/6g1ZsaQQ_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Sevag 🌐✝️ 🤖 AI Bot </div>
<div style="font-size: 15px">@locosherman2 bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@locosherman2's tweets](https://twitter.com/locosherman2).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3130 |
| Retweets | 868 |
| Short tweets | 372 |
| Tweets kept | 1890 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1f0v78we/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @locosherman2's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2ckb2yln) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2ckb2yln/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/locosherman2')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/louispotok | 0055c78354214b8cbf7db9c3c058d7dfd1156269 | 2021-05-22T12:36:01.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/louispotok | 8 | null | transformers | 13,112 | ---
language: en
thumbnail: https://www.huggingtweets.com/louispotok/1616617329585/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1183250698986819584/UT1qyy3h_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Louis Potok 🤖 AI Bot </div>
<div style="font-size: 15px">@louispotok bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@louispotok's tweets](https://twitter.com/louispotok).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3225 |
| Retweets | 474 |
| Short tweets | 117 |
| Tweets kept | 2634 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/17xl4hbj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @louispotok's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1jwyvv13) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1jwyvv13/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/louispotok')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/milligram3d | 71c2a5fdd1a56b4a8e8c37c9316b2c558eeaeb4c | 2021-05-22T14:46:20.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/milligram3d | 8 | null | transformers | 13,113 | ---
language: en
thumbnail: https://www.huggingtweets.com/milligram3d/1616791387103/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1329940613718949888/ta7GE35b_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">im gay 🤖 AI Bot </div>
<div style="font-size: 15px">@milligram3d bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@milligram3d's tweets](https://twitter.com/milligram3d).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3102 |
| Retweets | 514 |
| Short tweets | 267 |
| Tweets kept | 2321 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2b28e9ko/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @milligram3d's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2dnn0apc) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2dnn0apc/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/milligram3d')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/quizzicallay | 33fe1a6470ec5d3f52035ea31cc99b6b8c6a8e0d | 2021-05-22T20:05:23.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/quizzicallay | 8 | null | transformers | 13,114 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1298648619587907584/2Re9ioxe_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Danny Lay Ybounden 🤖 AI Bot </div>
<div style="font-size: 15px">@quizzicallay bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@quizzicallay's tweets](https://twitter.com/quizzicallay).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 2377 |
| Retweets | 118 |
| Short tweets | 174 |
| Tweets kept | 2085 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/365kvgu8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @quizzicallay's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3heuo0a0) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3heuo0a0/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/quizzicallay')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/scarlet_platnm | 4805f36fc28b0a2e7eadbff257373a423a1de7f1 | 2021-05-22T22:02:04.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/scarlet_platnm | 8 | null | transformers | 13,115 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1374138228576501763/Tt6KUbNh_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Scarlet 🏳️⚧️ 🤖 AI Bot </div>
<div style="font-size: 15px">@scarlet_platnm bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@scarlet_platnm's tweets](https://twitter.com/scarlet_platnm).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3239 |
| Retweets | 683 |
| Short tweets | 458 |
| Tweets kept | 2098 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3s65gk6s/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @scarlet_platnm's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3a49phf4) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3a49phf4/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/scarlet_platnm')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/scarysmilingdog | e5b9a5b2040bd549985b397840d21ed82ae287a5 | 2021-05-22T22:03:37.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/scarysmilingdog | 8 | null | transformers | 13,116 | ---
language: en
thumbnail: https://www.huggingtweets.com/scarysmilingdog/1618977555882/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1380538178667446273/gNl0y2pb_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Kiko 🤖 AI Bot </div>
<div style="font-size: 15px">@scarysmilingdog bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@scarysmilingdog's tweets](https://twitter.com/scarysmilingdog).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 1567 |
| Retweets | 255 |
| Short tweets | 193 |
| Tweets kept | 1119 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/sscoe37w/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @scarysmilingdog's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/62i2trmb) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/62i2trmb/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/scarysmilingdog')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/sigsys | 0c33d62db0b916db557abe71eb1d65fb96dcfeaa | 2021-05-22T22:56:23.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/sigsys | 8 | null | transformers | 13,117 | ---
language: en
thumbnail: https://www.huggingtweets.com/sigsys/1617904484486/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1215779813560025089/ka9neEZ4_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">PanickedJanet 🤖 AI Bot </div>
<div style="font-size: 15px">@sigsys bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@sigsys's tweets](https://twitter.com/sigsys).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3207 |
| Retweets | 1423 |
| Short tweets | 378 |
| Tweets kept | 1406 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/15vp8xpf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sigsys's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/18htet0h) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/18htet0h/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/sigsys')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/strife212 | fed6b65b535492b7fadf757223a471f3f96f00a5 | 2021-05-23T00:13:20.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/strife212 | 8 | null | transformers | 13,118 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1376707481406214148/rDg9IcWB_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Strife 🤖 AI Bot </div>
<div style="font-size: 15px">@strife212 bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@strife212's tweets](https://twitter.com/strife212).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3245 |
| Retweets | 78 |
| Short tweets | 1147 |
| Tweets kept | 2020 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3kipxik1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @strife212's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/nh0ek96v) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/nh0ek96v/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/strife212')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/tasshinfogleman | 195c51134c970b5b89dfa4b27071de0189657d6b | 2021-05-23T00:42:22.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/tasshinfogleman | 8 | null | transformers | 13,119 | ---
language: en
thumbnail: https://www.huggingtweets.com/tasshinfogleman/1616620683486/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1296249659153739777/soAVZeYh_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">達真 🤖 AI Bot </div>
<div style="font-size: 15px">@tasshinfogleman bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@tasshinfogleman's tweets](https://twitter.com/tasshinfogleman).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 429 |
| Short tweets | 502 |
| Tweets kept | 2318 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/207tr4m3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tasshinfogleman's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2y6icw53) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2y6icw53/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/tasshinfogleman')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/tonline_news | db77c52b1c414ca23cf64b480446a86e33a50a4d | 2021-05-23T02:40:02.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/tonline_news | 8 | null | transformers | 13,120 | ---
language: en
thumbnail: https://www.huggingtweets.com/tonline_news/1603446279269/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1300377538238218245/IlY5V715_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">t-online 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@tonline_news bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@tonline_news's tweets](https://twitter.com/tonline_news).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3217 |
| Retweets | 1148 |
| Short tweets | 36 |
| Tweets kept | 2033 |
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/1tad5tz6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tonline_news's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/cpk5773x) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/cpk5773x/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/tonline_news')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/truck_____er | 64eaaa9400088757ed5b807b0664c67ddc019031 | 2021-05-23T02:49:32.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/truck_____er | 8 | null | transformers | 13,121 | ---
language: en
thumbnail: https://www.huggingtweets.com/truck_____er/1614115630117/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1355239572054159360/2nGkEDrK_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">jonah 🤖 AI Bot </div>
<div style="font-size: 15px">@truck_____er bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@truck_____er's tweets](https://twitter.com/truck_____er).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 390 |
| Retweets | 81 |
| Short tweets | 86 |
| Tweets kept | 223 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1lg8oexk/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @truck_____er's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3eb0ihn2) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3eb0ihn2/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/truck_____er')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/uncannydays | 3248ddc16ceeada3c9a1e538cda56ed3a5bd2fde | 2021-05-23T03:21:20.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/uncannydays | 8 | null | transformers | 13,122 | ---
language: en
thumbnail: https://www.huggingtweets.com/uncannydays/1617745285527/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1377754982502514689/RTQPHdwX_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Dana Ash✨ 🤖 AI Bot </div>
<div style="font-size: 15px">@uncannydays bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@uncannydays's tweets](https://twitter.com/uncannydays).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 60 |
| Short tweets | 490 |
| Tweets kept | 2696 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2ppbgefa/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @uncannydays's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/a16vdxsh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/a16vdxsh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/uncannydays')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
husnu/electra-small-turkish-uncased-discriminator | 4dce54e463558852b307a8d19c3e9e4a5564b63f | 2022-01-16T19:01:47.000Z | [
"pytorch",
"tensorboard",
"electra",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| question-answering | false | husnu | null | husnu/electra-small-turkish-uncased-discriminator | 8 | null | transformers | 13,123 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: ft_electra-small-turkish-uncased-discriminator_lr-2e-1_epochs-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
This model is a fine-tuned version of [loodos/electra-small-turkish-uncased-discriminator](https://huggingface.co/loodos/electra-small-turkish-uncased-discriminator) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 5.9506
## Model description
More information needed
## Intended uses & limitations
More information needed
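Pending more details from the author, a minimal usage sketch is given below; it assumes the checkpoint can be queried through the standard `question-answering` pipeline, and the Turkish question/context pair is purely illustrative.
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="husnu/electra-small-turkish-uncased-discriminator",
)

# Illustrative Turkish example (not taken from the training data).
result = qa(
    question="Model hangi veri kümesi üzerinde eğitildi?",
    context="Bu model, SQuAD veri kümesi üzerinde ince ayar yapılmış küçük bir ELECTRA modelidir.",
)
print(result["answer"], round(result["score"], 3))
```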
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.2
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
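For reference, these values correspond roughly to the following `TrainingArguments` configuration. This is a sketch under the assumption that the standard `Trainer` API was used; model loading and dataset preprocessing are omitted, and the output directory name is assumed.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="ft_electra-small-turkish-uncased-discriminator",  # assumed path
    learning_rate=0.2,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```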
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.951 | 1.0 | 5818 | 5.9506 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
ibahadiraltun/berturk-social | 505c60b8ff36581465f32fa6c32cab7b9449791c | 2021-05-20T16:54:49.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | ibahadiraltun | null | ibahadiraltun/berturk-social | 8 | null | transformers | 13,124 | Entry not found |
imran2part/DialogGPT-small-Doctor | 75d99191aaabddec2061aa5438414c2872d9b1bb | 2021-09-11T18:56:45.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | imran2part | null | imran2part/DialogGPT-small-Doctor | 8 | null | transformers | 13,125 | ---
tags:
- conversational
---
# Doctor DialoGPT Model |
infinitejoy/wav2vec2-large-xls-r-300m-greek | b1dae4bfff9c2afb6c1ab49c51a47afff0df6ce9 | 2022-03-24T11:53:50.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"el",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | infinitejoy | null | infinitejoy/wav2vec2-large-xls-r-300m-greek | 8 | null | transformers | 13,126 | ---
language:
- el
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
- el
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Greek
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: el
metrics:
- name: Test WER
type: wer
value: 102.23963133640552
- name: Test CER
type: cer
value: 146.28
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: el
metrics:
- name: Test WER
type: wer
value: 99.92
- name: Test CER
type: cer
value: 132.38
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-greek
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - EL dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6592
- Wer: 0.4564
## Model description
More information needed
## Intended uses & limitations
More information needed
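As a rough illustration while the card is being completed, a checkpoint like this one can typically be transcribed through the `automatic-speech-recognition` pipeline; the audio file name below is a placeholder for any 16 kHz mono Greek recording.
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="infinitejoy/wav2vec2-large-xls-r-300m-greek",
)

# "sample_greek.wav" is a placeholder path, not a file shipped with the model.
print(asr("sample_greek.wav")["text"])
```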
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.0928 | 4.42 | 500 | 3.0804 | 1.0073 |
| 1.4505 | 8.85 | 1000 | 0.9038 | 0.7330 |
| 1.2207 | 13.27 | 1500 | 0.7375 | 0.6045 |
| 1.0695 | 17.7 | 2000 | 0.7119 | 0.5441 |
| 1.0104 | 22.12 | 2500 | 0.6069 | 0.5296 |
| 0.9299 | 26.55 | 3000 | 0.6168 | 0.5206 |
| 0.8588 | 30.97 | 3500 | 0.6382 | 0.5171 |
| 0.7942 | 35.4 | 4000 | 0.6048 | 0.4988 |
| 0.7808 | 39.82 | 4500 | 0.6730 | 0.5084 |
| 0.743 | 44.25 | 5000 | 0.6749 | 0.5012 |
| 0.6652 | 48.67 | 5500 | 0.6491 | 0.4735 |
| 0.6386 | 53.1 | 6000 | 0.6928 | 0.4954 |
| 0.5945 | 57.52 | 6500 | 0.6359 | 0.4798 |
| 0.5561 | 61.95 | 7000 | 0.6409 | 0.4799 |
| 0.5464 | 66.37 | 7500 | 0.6452 | 0.4691 |
| 0.5119 | 70.8 | 8000 | 0.6376 | 0.4657 |
| 0.474 | 75.22 | 8500 | 0.6541 | 0.4700 |
| 0.45 | 79.65 | 9000 | 0.6374 | 0.4571 |
| 0.4315 | 84.07 | 9500 | 0.6568 | 0.4625 |
| 0.3967 | 88.5 | 10000 | 0.6636 | 0.4605 |
| 0.3937 | 92.92 | 10500 | 0.6537 | 0.4597 |
| 0.3788 | 97.35 | 11000 | 0.6614 | 0.4589 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
it5/it5-base-question-generation | 8edf4a541bd59ceb67d95df8a6582140a23a83ed | 2022-03-09T08:06:11.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"it",
"dataset:squad_it",
"arxiv:2203.03759",
"transformers",
"italian",
"sequence-to-sequence",
"question-generation",
"squad_it",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible"
]
| text2text-generation | false | it5 | null | it5/it5-base-question-generation | 8 | null | transformers | 13,127 | ---
language:
- it
license: apache-2.0
datasets:
- squad_it
tags:
- italian
- sequence-to-sequence
- question-generation
- squad_it
- text2text-generation
widget:
- text: "Le conoscenze mediche erano stagnanti durante il Medioevo. Il resoconto più autorevole di allora è venuto dalla facoltà di medicina di Parigi in un rapporto al re di Francia che ha incolpato i cieli, sotto forma di una congiunzione di tre pianeti nel 1345 che causò una \"grande pestilenza nell' aria\". Questa relazione è diventata la prima e più diffusa di una serie di casi di peste che cercava di dare consigli ai malati. Che la peste fosse causata dalla cattiva aria divenne la teoria più accettata. Oggi, questo è conosciuto come la teoria di Miasma. La parola \"peste\" non aveva un significato particolare in questo momento, e solo la ricorrenza dei focolai durante il Medioevo gli diede il nome che è diventato il termine medico. Risposta: re di Francia"
- text: "Il 14 aprile 2011, ABC ha annullato le lunghe opere di sapone All My Children e One Life to Live dopo 41 e 43 anni in onda, rispettivamente (in seguito al contraccolpo dei tifosi, ABC ha venduto i diritti ad entrambi gli spettacoli a Prospect Park, che alla fine ha rilanciato i saponi su Hulu per un' ulteriore stagione nel 2013 e con entrambe le società che si citano in giudizio per accuse di interferenza con il processo di rilancio degli spettacoli, mancato pagamento delle tasse di licenza. Il talk/lifestyle show che ha sostituito One Life to Live, The Revolution, non è riuscito a generare giudizi soddisfacenti ed è stato a sua volta annullato dopo soli sette mesi. La stagione 2011-12 ha visto l' ABC cadere al quarto posto nel 18-49 demografico nonostante rinnovando una manciata di nuovi spettacoli (compresi i drammi matricole Scandal, Revenge e Once Upon a Time) per la seconda stagione. Risposta: Hulu"
- text: "L' American Broadcasting Company (ABC) (stlized nel suo logo come abc dal 1957) è una rete televisiva commerciale americana trasmissione televisiva che è di proprietà del Disney-ABC Television Group, una controllata della divisione Disney Media Networks di The Walt Disney Company. La rete fa parte delle grandi reti televisive Big Three. La rete ha sede a Columbus Avenue e West 66th Street a Manhattan, con ulteriori uffici e stabilimenti di produzione a New York City, Los Angeles e Burbank, California. Risposta: Manhattan"
- text: "La disobbedienza civile non rivoluzionaria è una semplice disobbedienza delle leggi sulla base del fatto che sono giudicate \"sbagliate\" da una coscienza individuale, o come parte di uno sforzo per rendere alcune leggi inefficaci, per causarne l' abrogazione, o per esercitare pressioni per ottenere i propri desideri politici su qualche altra questione. La disobbedienza civile rivoluzionaria è più che altro un tentativo attivo di rovesciare un governo (o di cambiare le tradizioni culturali, i costumi sociali, le credenze religiose, ecc. La rivoluzione non deve necessariamente essere politica, cioè \"rivoluzione culturale\", implica semplicemente un cambiamento radicale e diffuso in una sezione del tessuto sociale). Gli atti di Gandhi sono stati descritti come disobbedienza civile rivoluzionaria. È stato affermato che gli ungheresi sotto Ferenc Deák hanno diretto una disobbedienza civile rivoluzionaria contro il governo austriaco. Thoreau ha anche scritto di disobbedienza civile realizzando \"rivoluzione pacifica\". Howard Zinn, Harvey Wheeler e altri hanno identificato il diritto sposato nella Dichiarazione d' Indipendenza di \"alterare o abolire\" un governo ingiusto come principio di disobbedienza civile. Risposta: Ferenc Deák"
metrics:
- rouge
- bertscore
model-index:
- name: it5-base-question-generation
results:
- task:
type: question-generation
name: "Question generation"
dataset:
type: squad_it
name: "SQuAD-IT"
metrics:
- type: rouge1
value: 0.382
name: "Test Rouge1"
- type: rouge2
value: 0.199
name: "Test Rouge2"
- type: rougeL
value: 0.354
name: "Test RougeL"
- type: bertscore
value: 0.516
name: "Test BERTScore"
args:
- model_type: "dbmdz/bert-base-italian-xxl-uncased"
- lang: "it"
- num_layers: 10
- rescale_with_baseline: True
- baseline_path: "bertscore_baseline_ita.tsv"
co2_eq_emissions:
emissions: "17g"
source: "Google Cloud Platform Carbon Footprint"
training_type: "fine-tuning"
geographical_location: "Eemshaven, Netherlands, Europe"
hardware_used: "1 TPU v3-8 VM"
thumbnail: https://gsarti.com/publication/it5/featured.png
---
# IT5 Base for Question Generation 💭 🇮🇹
This repository contains the checkpoint for the [IT5 Base](https://huggingface.co/gsarti/it5-base) model fine-tuned on question generation on the [SQuAD-IT corpus](https://huggingface.co/datasets/squad_it) as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for usage in TensorFlow, PyTorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
qg = pipeline("text2text-generation", model='it5/it5-base-question-generation')
qg("Le conoscenze mediche erano stagnanti durante il Medioevo. Il resoconto più autorevole di allora è venuto dalla facoltà di medicina di Parigi in un rapporto al re di Francia che ha incolpato i cieli, sotto forma di una congiunzione di tre pianeti nel 1345 che causò una "grande pestilenza nell\' aria". Questa relazione è diventata la prima e più diffusa di una serie di casi di peste che cercava di dare consigli ai malati. Che la peste fosse causata dalla cattiva aria divenne la teoria più accettata. Oggi, questo è conosciuto come la teoria di Miasma. La parola "peste" non aveva un significato particolare in questo momento, e solo la ricorrenza dei focolai durante il Medioevo gli diede il nome che è diventato il termine medico. Risposta: re di Francia")
>>> [{"generated_text": "Per chi è stato redatto il referto medico?"}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/it5-base-question-generation")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-base-question-generation")
```
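Beyond the pipeline interface, generation can also be driven directly with `generate()`; the snippet below is a minimal sketch, with an abbreviated input passage and illustrative decoding parameters rather than the settings used in the paper.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("it5/it5-base-question-generation")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-base-question-generation")

# Abbreviated passage; see the full examples above. The answer span is appended
# after "Risposta:" as in the fine-tuning format.
passage = (
    "Le conoscenze mediche erano stagnanti durante il Medioevo. [...] "
    "Risposta: re di Francia"
)
inputs = tokenizer(passage, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```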
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
``` |
it5/it5-large-wiki-summarization | 9c43108f75e1f89da8fd2baf4ba00850104c7ec3 | 2022-03-09T07:49:56.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"it",
"dataset:wits",
"arxiv:2203.03759",
"transformers",
"italian",
"sequence-to-sequence",
"wikipedia",
"summarization",
"wits",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible"
]
| summarization | false | it5 | null | it5/it5-large-wiki-summarization | 8 | null | transformers | 13,128 | ---
language:
- it
license: apache-2.0
datasets:
- wits
tags:
- italian
- sequence-to-sequence
- wikipedia
- summarization
- wits
widget:
- text: "La 5ª Commissione ha competenza per i disegni di legge riguardanti le specifiche materie del bilancio, del personale e dei servizi del Ministero dell'economia, nonché per i disegni di legge riguardanti la materia finanziaria. La Commissione è composta da 26 senatori (di cui 2 segretari, 2 vicepresidenti di cui 1 componente esterno, e un presidente) scelti in modo omogeneo tra i componenti di quel ramo del Parlamento, in modo da rispecchiarne le forze politiche presenti. Essi sono scelti dai gruppi parlamentari (e non dal Presidente, come invece accade per l'organismo della Giunta parlamentare): per la nomina dei membri ciascun Gruppo, entro cinque giorni dalla propria costituzione, procede, dandone comunicazione alla Presidenza del Senato, alla designazione dei propri rappresentanti nelle singole Commissioni permanenti. Ogni senatore chiamato a far parte del governo o eletto presidente della Commissione è, per la durata della carica, sostituito dal suo gruppo nella Commissione con un altro senatore, che continuerà ad appartenere anche alla Commissione di provenienza. Tranne in rari casi nessun Senatore può essere assegnato a più di una Commissione permanente. Le Commissioni permanenti sono rinnovate dopo il primo biennio della legislatura ed i loro componenti possono essere confermati."
- text: "Interni della chiesa Si pensa che già ai tempi di Gediminas vi fosse una piccola chiesa, probabilmente in legno. Nel 1408 circa Vitoldo costruì la chiesa dello Spirito Santo che andò in seguito ampliata. Nel 1501 Alessandro Jagellone lo donò al monastero domenicano, il più antico della Lituania, che nel 1679-88 fu ampliato e ricostruito. Di quel periodo sopravvivono le mura della chiesa, mentre l'arredamento interno fu realizzato nel 1749-1770 e la cupola affrontò dei lavori di restauro nel 1752-1760. Nel 1844 le autorità zariste chiusero il monastero e la chiesa divenne parrocchiale. Oggi serve la comunità polacca di Vilnius. Su via Šv. Ignoto fu fondato un monastero domenicano nel 1501. Come molti altri edifici, questo monastero fu convertito in una prigione dalle autorità zariste nel 1807. Costituì un luogo di prigionia per molti patrioti lituani, nello specifico i Filareti, i quali parteciparono alle rivolte del 1831 e del 1863. Organo La chiesa si trova lateralmente rispetto alla strada e non ha una facciata principale ben disegnata. L'altezza, inclusa la cupola, è di 51 m. La parte inferiore della facciata (con piccole torri gemelle) è ricoperta da edifici conventuali e l'esterno presenta caratteristiche architettoniche tipiche del tardo barocco. Celebre per i fantasiosi ornamenti rococò, l'interno della chiesa è tra i più celebri della Lituania per via dei cartigli con vari stemmi e affreschi lungo la navata: vi sono 16 altari nella chiesa. Gli altari e il pulpito sono assai decorati con sculture e ornamenti rotondi e in rilievo. Tra gli affreschi barocchi, si pensi alla composizione multi-figurale intitolata ''Apoteosi dello Spirito Santo'' (neobarocco, XIX secolo) nella cupola, 45 dipinti nella chiesa (tra cui un'immagine di Santa Barbara con un'ambientazione del XVII o XVIII secolo, una di Santa Caterina da Siena in stile rococò di Szymon Czechowicz, un ritratto di Alessandro Jagellone di un artista sconosciuto della seconda metà del XVIII secolo). Un ingresso sotto l'altare conduce alle grandi volte, labirintiche, con molte stanze e cripte: i sotterranei ospitano i resti di centinaia di residenti di Vilnius, alcuni dei quali mummificatisi naturalmente, e sono circondati da leggende metropolitane. Sebbene l'esistenza dei sotterranei fosse nota, i primi sforzi per esplorare e mappare le cripte furono abbandonate nonostante lo sforzo degli studenti dell'Università di Vilnius negli anni '30. Tuttavia, questi ultimi non avevano osservato le corrette procedure archeologiche e causarono infatti molti danni: il modus operandi prevedeva lo smistamento delle ossa ponendo tutti i teschi sugli scaffali e rimuovendoli le tombe. Da allora, i resti sono stati spostati molte volte lasciandoli in uno stato casuale e disorganizzato. Stando alle leggende che aleggiano sul luogo, i resti sarebbero di soldati francesi recatisi in città nel corso della campagna di Russia del 1812 avviata da Napoleone Bonaparte, di vittime dell'Inquisizione o della peste nera. Più romantiche risultano le affermazioni di chi sostiene che i corridoi sotterranei facevano parte di una rete di passaggi più ampia che consentiva agli amanti leggendari Barbara Radziwiłł e Sigismondo II Augusto di incontrarsi in segreto. Nel 2011, gli antropologi dell'Università di Vilnius, guidati da Rimantas Jankauskas, avviarono uno studio sui corpi mummificati, stimando settimane dopo che le volte conservassero i resti di circa 600 persone, tra cui molte donne e bambini dalla metà del XVIII secolo all'inizio del XIX secolo. 
Il team ha selezionato i cadaveri meglio conservati e ha eseguito la loro tomografia. I risultati mostrano che molte persone erano in sovrappeso e avevano l'alluce valgo, il che ha portato alla conclusione che si trattava di alti borghesi o comunque di cittadini abbienti. "
- text: "Le dimensioni dell'isola sono di 8 km di lunghezza e di 3,2 km di larghezza. Si trova a 1,6 km a sud-est dell'isola di Renaud, dalla quale è separata dal passaggio Rodman. La sua altezza è di 100 m. Fu scoperta dall'esploratore e baleniere britannico John Biscoe nel 1832 e venne mappata durante una spedizione antartica francese realizzata nel primo decennio del XX secolo. Al comando della spedizione era Jean-Baptiste Charcot e il nome fu scelto per onorare l'esploratore e geografo francese Charles Rabot. === Rivendicazioni territoriali === * Secondo l'Argentina appartiene al dipartimento dell'Antartide Argentina nella provincia della Terra del Fuoco. * Secondo il Cile appartiene al comune antartico della provincia cilena antartica nella regione di Magallanes e dell'Antartico cileno. * Secondo il Regno Unito fa parte del territorio antartico britannico. Per il Trattato Antartico tali rivendicazioni sono sospese. Sull'isola è presente il rifugio Guillochon, sito storico antartico. "
- text: "Vanni ha la sua prima mostra personale nel 1948, alla Galleria Margherita di Roma. Nel 1949 vince una borsa di studio che lo porterà a studiare ad Amsterdam sotto la guida del pittore neoplastico Friedrich Vordemberge-Gildewart. Nel 1952 vince una Fulbright Scholarship che lo porterà a studiare in America, alla Yale University, sotto la guida di Josef Albers. Dal 1953 al 1960 si stabilisce a Parigi, dove illustra alcuni libri per bambini che in seguito vinceranno il premio del Club des Editeurs. Nel 1954 lavora come consulente del colore per il documentario su Picasso di Luciano Emmer, e nel 1955 comincia la sua lunga collaborazione con la Galleria Schneider, affiancando artisti come Corrado Cagli. Dal 1969 al 1974 lavora su dei bassorilievi in vetro resina sui quali vengono proiettati dei film astratti da lui creati, per creare dei quadri che si trasformino continuamente nel tempo. Nel 1979 lascia Roma per stabilirsi a New York, dove alla carriera di pittore affiancherà quella di professore per la prestigiosa Cooper Union School of Art, dove insegnerà ininterrottamente dal 1984 al 2014. L'opera pittorica di Vanni è segnata da una visione estremamente personale, lontana dalle correnti e dai movimenti che hanno caratterizzato la seconda metà del XX secolo. Memore delle lunghe conversazioni avute da Vanni nella sua primissima gioventù, con il filosofo e pittore futurista Alberto Bragaglia, le sue opere sono contrassegnate da un “eclettismo” formale programmatico, alla base del quale resta costante una conoscenza profonda delle molteplici tecniche artistiche utilizzate (tra cui il mosaico, l’affresco e la tempera ad uovo). Pur esprimendosi per lo più in cicli di opere dove l’astrazione formale è la principale componente figurativa, sono da sottolineare alcune opere dove Vanni ha dato prova di una importante padronanza dell’arte figurativa. Importanti e numerose sono le sue realizzazioni anche nel campo dell’illustrazione. Sue sono le illustrazioni per la novella ''Agostino'' di Alberto Moravia, per il libro ''Love'' di Lowell A. Siff e delle ''Contes de Cristal'' di Alice Coléno. Ha tenuto mostre personali in Italia e all’estero ed esposto in mostre collettive di rappresentanza italiana nei musei e nelle gallerie di ogni parte del mondo. "
metrics:
- rouge
- bertscore
model-index:
- name: it5-large-wiki-summarization
results:
- task:
type: wiki-summarization
name: "Wikipedia Summarization"
dataset:
type: wits
name: "WITS"
metrics:
- type: rouge1
value: 0.335
name: "Test Rouge1"
- type: rouge2
value: 0.191
name: "Test Rouge2"
- type: rougeL
value: 0.301
name: "Test RougeL"
- type: bertscore
value: 0.508
name: "Test BERTScore"
args:
- model_type: "dbmdz/bert-base-italian-xxl-uncased"
- lang: "it"
- num_layers: 10
- rescale_with_baseline: True
- baseline_path: "bertscore_baseline_ita.tsv"
co2_eq_emissions:
emissions: "51g"
source: "Google Cloud Platform Carbon Footprint"
training_type: "fine-tuning"
geographical_location: "Eemshaven, Netherlands, Europe"
hardware_used: "1 TPU v3-8 VM"
thumbnail: https://gsarti.com/publication/it5/featured.png
---
# IT5 Large for Wikipedia Summarization ✂️📑 🇮🇹
This repository contains the checkpoint for the [IT5 Large](https://huggingface.co/gsarti/it5-large) model fine-tuned on Wikipedia summarization on the [WITS](https://www.semanticscholar.org/paper/WITS%3A-Wikipedia-for-Italian-Text-Summarization-Casola-Lavelli/ad6c83122e721c7c0db4a40727dac3b4762cd2b1) dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for usage in TensorFlow, PyTorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
wikisum = pipeline("summarization", model='it5/it5-large-wiki-summarization')
wikisum("Le dimensioni dell'isola sono di 8 km di lunghezza e di 3,2 km di larghezza. Si trova a 1,6 km a sud-est dell'isola di Renaud, dalla quale è separata dal passaggio Rodman. La sua altezza è di 100 m. Fu scoperta dall'esploratore e baleniere britannico John Biscoe nel 1832 e venne mappata durante una spedizione antartica francese realizzata nel primo decennio del XX secolo. Al comando della spedizione era Jean-Baptiste Charcot e il nome fu scelto per onorare l'esploratore e geografo francese Charles Rabot. === Rivendicazioni territoriali === * Secondo l'Argentina appartiene al dipartimento dell'Antartide Argentina nella provincia della Terra del Fuoco. * Secondo il Cile appartiene al comune antartico della provincia cilena antartica nella regione di Magallanes e dell'Antartico cileno. * Secondo il Regno Unito fa parte del territorio antartico britannico. Per il Trattato Antartico tali rivendicazioni sono sospese. Sull'isola è presente il rifugio Guillochon, sito storico antartico. ")
>>> [{"generated_text": "L' '''isola di Rabot''' si trova in prossimità dell'isola di Renaud, a sud dell'Argentina."}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/it5-large-wiki-summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-large-wiki-summarization")
```
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
``` |
it5/mt5-base-headline-generation | ff355886a9e7df72abc917caa313a30168a339df | 2022-03-09T07:58:47.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"mt5",
"text2text-generation",
"it",
"dataset:gsarti/change_it",
"arxiv:2203.03759",
"transformers",
"italian",
"sequence-to-sequence",
"newspaper",
"ilgiornale",
"repubblica",
"headline-generation",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible"
]
| text2text-generation | false | it5 | null | it5/mt5-base-headline-generation | 8 | null | transformers | 13,129 | ---
language:
- it
license: apache-2.0
datasets:
- gsarti/change_it
tags:
- italian
- sequence-to-sequence
- newspaper
- ilgiornale
- repubblica
- headline-generation
widget:
- text: "WASHINGTON - La Corea del Nord torna dopo nove anni nella blacklist Usa degli Stati considerati sponsor del terrorismo. Come Iran, Siria e Sudan. Lo ha deciso Donald Trump , che ha preferito dare l'annuncio non durante il suo recente viaggio in Asia ma ieri, in una riunione del governo alla Casa Bianca. 'Oggi gli Stati Uniti designeranno la Corea del nord come uno stato sponsor del terrorismo', ha tuonato il tycoon, anticipando che sarà formalizzata oggi dal dipartimento di stato e sarà accompagnata da nuove e più severe sanzioni. 'Il livello più alto' mai imposto a Pyongyang, ha promesso. 'Avrebbe dovuto succedere molto tempo fa', ha aggiunto, scaricando per l'ennesima volta la responsabilità dell'attuale crisi sull'amministrazione Obama. Poi si è scagliato contro un 'regime assassino' che 'deve mettere fine allo sviluppo del suo programma illegale nucleare e balistico'. Per giustificare la svolta, Trump ha accusato Pyongyang non solo di 'minacciare il mondo con una devastazione nucleare' ma anche di aver 'ripetutamente sostenuto atti di terrorismo internazionale', compreso omicidi in suolo straniero. Il riferimento è all' uccisione all'aeroporto della capitale malese di Kim Jong Nam , il fratellastro del leader nordcoreano Kim Jong Un , ma non ci sono altri episodi noti. Tanto che alcuni esperti, come pure dirigenti Usa coperti dall'anonimato, dubitano che Pyongyang risponda ai criteri per una tale designazione. La mossa appare altamente simbolica, dato che la Corea del Nord è già pesantemente sanzionata a livello internazionale. Per il segretario di stato Rex Tillerson è solo l'ultima di una serie di passi per rafforzare la pressione su Pyongyang e costringerla a sedersi ad un tavolo perché gli Usa hanno sempre 'speranza nella diplomazia'. Ma nello stesso tempo è un monito per 'fermare e dissuadere' altri Paesi dal sostenere la Corea del Nord, finita nella blacklist 'anche per l'uso di armi chimiche'. Ma la mossa potrebbe anche essere controproducente, provocando una risposta di Kim o minando gli sforzi per sollecitare Pechino ad una maggiore pressione su Pyongyang. In ogni caso non aiuta il dialogo diretto tra Usa e Corea del Nord, che sembrava essere stato avviato in modo riservato. Come non aiutano gli scambi di insulti fra Trump e Kim. Nord Corea, Trump: 'Cerco di essere amico di Kim, sarebbe una bella cosa per il mondo'. Pyongyang era stata messa nella lista Usa degli Stati sponsor del terrorismo per aver fatto esplodere nel 1987 un volo della Korean Air uccidendo tutti i 115 passeggeri a bordo. Ma l'amministrazione di George W. Bush l'aveva rimossa sperando di far avanzare i negoziati sulla denuclearizzazione della penisola coreana. Il governo giapponese sostiene la decisione degli Stati Uniti di inserire la Corea del Nord nella lista degli stati che sponsorizzano il terrorismo, pur riconoscendo che l'annuncio potrebbe provocare una reazione immediata del regime di Pyongyang. Il premier Shinzo Abe ha accolto con consenso il comunicato Usa e ha detto alla stampa che servirà a incrementare la pressione sulla Corea del Nord. Il ministro della Difesa Itsunori Onodera , pur valutando positivamente la notifica, ha spiegato che si attendono azioni provocatorie dallo stato eremita, ribadendo che è vitale rimanere vigili. Secondo la stampa nipponica Abe aveva richiesto al dipartimento di Stato Usa di mettere la Corea del Nord sulla lista durante l'incontro col presidente Usa Donald Trump a Tokyo a inizio mese. 
L'ultimo lancio di missile balistico condotto da Pyongyang nell'oceano Pacifico, sorvolando il mare del Giappone, risale allo scorso settembre."
- text: "ROMA - Una nuova droga killer è stata sequestrata per la prima volta in Europa dagli investigatori del Nas. Si tratta di una nuova \"miscela psicoattiva altamente tossica\" per la prima volta individuata da forze di polizia, simile all'eroina sintetica, ma molto più economica e letale. Tanto che i 20 grammi scoperti sarebbero stati sufficienti per fabbricare ben 20.000 dosi e lo stesso contatto attraverso la pelle può provocare intossicazione. Individuata per la prima volta, la nuova droga presenta una struttura simile al farmaco sedativo Fentanyl ma con effetti molto più devastanti per l'organismo. Proveniva dell'estero ed era contenuta in un plico postale indirizzato in una città del centro Italia: è stata intercettata tramite accertamenti sul web grazie a un'operazione di intelligence che ha visto come protagonisti i militari della Sezione operativa centrale del Comando carabinieri per la Tutela della salute (Nas). Economica e letale, secondo gli investigatori \"in confronto l'eroina è quasi 'acqua fresca', anzi, proprio per la sua economicità, in alcuni casi viene venduta dai pusher a giovani conviti di comprare eroina\". La diffusione di nuove droghe sintetiche che continuamente appaiono sui mercati necessita di un'attività investigativa costante e complessa. Si tratta infatti di sostanze dalla struttura molecolare molto simile a quella del Fentanyl ma ogni volta leggermente diversa. Di qui la difficoltà di individuarle e l'importanza del nuovo sequestro. \"La chiamano impropriamente 'eroina sintetica' - spiega il comandante dei Nas, generale Adelmo Lusi - per il tipo di effetto psicotropo simile, ma dal punto di vista della tossicità è molto peggio: con 25 milligrammi di eroina ci si sballa, con 25mg di simil-fentanyl, come quello appena sequestrato, si muore\". Le indagini sono partite da ricoveri per overdose in ospedale, in cui arrivavano ragazzi che non rispondevano al trattamento disintossicante per l'eroina. La nuova sostanza verrà ora segnalata per l'inserimento tra le tabelle ministeriali degli stupefacenti prevista dal Dpr 309/1990."
- text: "Fragile come il burro. Il nostro territorio è precario. Ne sanno qualcosa i comuni che sono stati investititi dal maltempo . Il dissesto idrogeologico imperversa su tutto il territorio. Infatti, oltre 6.600 comuni , pari all’82% del totale, sono in aree ad elevato rischio idrogeologico, pari al 10% della sua superficie. La popolazione potenzialmente esposta è stimata in 5,8 milioni di persone. I dati emergono dalle recenti analisi fatte da Legambiente e Protezione civile, che mettono in evidenza come in 10 anni in Italia sia raddoppiata l’area dei territori colpiti da alluvioni e frane , passando da una media di quattro regioni all’anno a otto regioni. Nella classifica delle regioni a maggior rischio idrogeologico prima è la Calabria con il 100% dei comuni esposti; al 100% ci sono anche la provincia di Trento, il Molise, la Basilicata, l’Umbria, la Valle d’Aosta. Poi Marche, Liguria al 99%; Lazio, Toscana al 98%; Abruzzo (96%), Emilia-Romagna (95%), Campania e Friuli Venezia Giulia al 92%, Piemonte (87%), Sardegna (81%), Puglia (78%), Sicilia (71%), Lombardia (60%), provincia di Bolzano (59%), Veneto (56%). Tra le cause che condizionano ed amplificano il rischio idrogeologico c’è l’azione dell’uomo (abbandono e degrado, cementificazione, consumo di suolo, abusivismo, disboscamento e incendi). Ma anche e soprattutto la mancanza di una seria manutenzione ordinaria e non ad una organica politica di prevenzione."
- text: "Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrà nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si è trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perché dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi è stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perché rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , María Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\"."
metrics:
- rouge
- bertscore
model-index:
- name: mt5-base-headline-generation
results:
- task:
type: headline-generation
name: "Headline generation"
dataset:
type: headgen_it
name: "HeadGen-IT"
metrics:
- type: rouge1
value: 0.302
name: "Test Rouge1"
- type: rouge2
value: 0.109
name: "Test Rouge2"
- type: rougeL
value: 0.265
name: "Test RougeL"
- type: bertscore
value: 0.427
name: "Test BERTScore"
args:
- model_type: "dbmdz/bert-base-italian-xxl-uncased"
- lang: "it"
- num_layers: 10
- rescale_with_baseline: True
- baseline_path: "bertscore_baseline_ita.tsv"
co2_eq_emissions:
emissions: "40g"
source: "Google Cloud Platform Carbon Footprint"
training_type: "fine-tuning"
geographical_location: "Eemshaven, Netherlands, Europe"
hardware_used: "1 TPU v3-8 VM"
thumbnail: https://gsarti.com/publication/it5/featured.png
---
# mT5 Base for News Headline Generation 📣 🇮🇹
This repository contains the checkpoint for the [mT5 Base](https://huggingface.co/google/mt5-base) model fine-tuned on news headline generation on the Italian HeadGen-IT dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for usage in TensorFlow, PyTorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
hg = pipeline("text2text-generation", model='it5/mt5-base-headline-generation')
hg("Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrà nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si è trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perché dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi è stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perché rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , María Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\".")
>>> [{"generated_text": "il nazionalista rajoy: 'voteremo la sfiducia'"}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/mt5-base-headline-generation")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/mt5-base-headline-generation")
```
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
``` |
it5/mt5-base-wiki-summarization | f8b63180006bbc748625c64f6559d9565d469ad6 | 2022-03-09T07:51:31.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"mt5",
"text2text-generation",
"it",
"dataset:wits",
"arxiv:2203.03759",
"transformers",
"italian",
"sequence-to-sequence",
"wikipedia",
"summarization",
"wits",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible"
]
| summarization | false | it5 | null | it5/mt5-base-wiki-summarization | 8 | null | transformers | 13,130 | ---
language:
- it
license: apache-2.0
datasets:
- wits
tags:
- italian
- sequence-to-sequence
- wikipedia
- summarization
- wits
widget:
- text: "La 5ª Commissione ha competenza per i disegni di legge riguardanti le specifiche materie del bilancio, del personale e dei servizi del Ministero dell'economia, nonché per i disegni di legge riguardanti la materia finanziaria. La Commissione è composta da 26 senatori (di cui 2 segretari, 2 vicepresidenti di cui 1 componente esterno, e un presidente) scelti in modo omogeneo tra i componenti di quel ramo del Parlamento, in modo da rispecchiarne le forze politiche presenti. Essi sono scelti dai gruppi parlamentari (e non dal Presidente, come invece accade per l'organismo della Giunta parlamentare): per la nomina dei membri ciascun Gruppo, entro cinque giorni dalla propria costituzione, procede, dandone comunicazione alla Presidenza del Senato, alla designazione dei propri rappresentanti nelle singole Commissioni permanenti. Ogni senatore chiamato a far parte del governo o eletto presidente della Commissione è, per la durata della carica, sostituito dal suo gruppo nella Commissione con un altro senatore, che continuerà ad appartenere anche alla Commissione di provenienza. Tranne in rari casi nessun Senatore può essere assegnato a più di una Commissione permanente. Le Commissioni permanenti sono rinnovate dopo il primo biennio della legislatura ed i loro componenti possono essere confermati."
- text: "Interni della chiesa Si pensa che già ai tempi di Gediminas vi fosse una piccola chiesa, probabilmente in legno. Nel 1408 circa Vitoldo costruì la chiesa dello Spirito Santo che andò in seguito ampliata. Nel 1501 Alessandro Jagellone lo donò al monastero domenicano, il più antico della Lituania, che nel 1679-88 fu ampliato e ricostruito. Di quel periodo sopravvivono le mura della chiesa, mentre l'arredamento interno fu realizzato nel 1749-1770 e la cupola affrontò dei lavori di restauro nel 1752-1760. Nel 1844 le autorità zariste chiusero il monastero e la chiesa divenne parrocchiale. Oggi serve la comunità polacca di Vilnius. Su via Šv. Ignoto fu fondato un monastero domenicano nel 1501. Come molti altri edifici, questo monastero fu convertito in una prigione dalle autorità zariste nel 1807. Costituì un luogo di prigionia per molti patrioti lituani, nello specifico i Filareti, i quali parteciparono alle rivolte del 1831 e del 1863. Organo La chiesa si trova lateralmente rispetto alla strada e non ha una facciata principale ben disegnata. L'altezza, inclusa la cupola, è di 51 m. La parte inferiore della facciata (con piccole torri gemelle) è ricoperta da edifici conventuali e l'esterno presenta caratteristiche architettoniche tipiche del tardo barocco. Celebre per i fantasiosi ornamenti rococò, l'interno della chiesa è tra i più celebri della Lituania per via dei cartigli con vari stemmi e affreschi lungo la navata: vi sono 16 altari nella chiesa. Gli altari e il pulpito sono assai decorati con sculture e ornamenti rotondi e in rilievo. Tra gli affreschi barocchi, si pensi alla composizione multi-figurale intitolata ''Apoteosi dello Spirito Santo'' (neobarocco, XIX secolo) nella cupola, 45 dipinti nella chiesa (tra cui un'immagine di Santa Barbara con un'ambientazione del XVII o XVIII secolo, una di Santa Caterina da Siena in stile rococò di Szymon Czechowicz, un ritratto di Alessandro Jagellone di un artista sconosciuto della seconda metà del XVIII secolo). Un ingresso sotto l'altare conduce alle grandi volte, labirintiche, con molte stanze e cripte: i sotterranei ospitano i resti di centinaia di residenti di Vilnius, alcuni dei quali mummificatisi naturalmente, e sono circondati da leggende metropolitane. Sebbene l'esistenza dei sotterranei fosse nota, i primi sforzi per esplorare e mappare le cripte furono abbandonate nonostante lo sforzo degli studenti dell'Università di Vilnius negli anni '30. Tuttavia, questi ultimi non avevano osservato le corrette procedure archeologiche e causarono infatti molti danni: il modus operandi prevedeva lo smistamento delle ossa ponendo tutti i teschi sugli scaffali e rimuovendoli le tombe. Da allora, i resti sono stati spostati molte volte lasciandoli in uno stato casuale e disorganizzato. Stando alle leggende che aleggiano sul luogo, i resti sarebbero di soldati francesi recatisi in città nel corso della campagna di Russia del 1812 avviata da Napoleone Bonaparte, di vittime dell'Inquisizione o della peste nera. Più romantiche risultano le affermazioni di chi sostiene che i corridoi sotterranei facevano parte di una rete di passaggi più ampia che consentiva agli amanti leggendari Barbara Radziwiłł e Sigismondo II Augusto di incontrarsi in segreto. Nel 2011, gli antropologi dell'Università di Vilnius, guidati da Rimantas Jankauskas, avviarono uno studio sui corpi mummificati, stimando settimane dopo che le volte conservassero i resti di circa 600 persone, tra cui molte donne e bambini dalla metà del XVIII secolo all'inizio del XIX secolo. 
Il team ha selezionato i cadaveri meglio conservati e ha eseguito la loro tomografia. I risultati mostrano che molte persone erano in sovrappeso e avevano l'alluce valgo, il che ha portato alla conclusione che si trattava di alti borghesi o comunque di cittadini abbienti. "
- text: "Le dimensioni dell'isola sono di 8 km di lunghezza e di 3,2 km di larghezza. Si trova a 1,6 km a sud-est dell'isola di Renaud, dalla quale è separata dal passaggio Rodman. La sua altezza è di 100 m. Fu scoperta dall'esploratore e baleniere britannico John Biscoe nel 1832 e venne mappata durante una spedizione antartica francese realizzata nel primo decennio del XX secolo. Al comando della spedizione era Jean-Baptiste Charcot e il nome fu scelto per onorare l'esploratore e geografo francese Charles Rabot. === Rivendicazioni territoriali === * Secondo l'Argentina appartiene al dipartimento dell'Antartide Argentina nella provincia della Terra del Fuoco. * Secondo il Cile appartiene al comune antartico della provincia cilena antartica nella regione di Magallanes e dell'Antartico cileno. * Secondo il Regno Unito fa parte del territorio antartico britannico. Per il Trattato Antartico tali rivendicazioni sono sospese. Sull'isola è presente il rifugio Guillochon, sito storico antartico. "
- text: "Vanni ha la sua prima mostra personale nel 1948, alla Galleria Margherita di Roma. Nel 1949 vince una borsa di studio che lo porterà a studiare ad Amsterdam sotto la guida del pittore neoplastico Friedrich Vordemberge-Gildewart. Nel 1952 vince una Fulbright Scholarship che lo porterà a studiare in America, alla Yale University, sotto la guida di Josef Albers. Dal 1953 al 1960 si stabilisce a Parigi, dove illustra alcuni libri per bambini che in seguito vinceranno il premio del Club des Editeurs. Nel 1954 lavora come consulente del colore per il documentario su Picasso di Luciano Emmer, e nel 1955 comincia la sua lunga collaborazione con la Galleria Schneider, affiancando artisti come Corrado Cagli. Dal 1969 al 1974 lavora su dei bassorilievi in vetro resina sui quali vengono proiettati dei film astratti da lui creati, per creare dei quadri che si trasformino continuamente nel tempo. Nel 1979 lascia Roma per stabilirsi a New York, dove alla carriera di pittore affiancherà quella di professore per la prestigiosa Cooper Union School of Art, dove insegnerà ininterrottamente dal 1984 al 2014. L'opera pittorica di Vanni è segnata da una visione estremamente personale, lontana dalle correnti e dai movimenti che hanno caratterizzato la seconda metà del XX secolo. Memore delle lunghe conversazioni avute da Vanni nella sua primissima gioventù, con il filosofo e pittore futurista Alberto Bragaglia, le sue opere sono contrassegnate da un “eclettismo” formale programmatico, alla base del quale resta costante una conoscenza profonda delle molteplici tecniche artistiche utilizzate (tra cui il mosaico, l’affresco e la tempera ad uovo). Pur esprimendosi per lo più in cicli di opere dove l’astrazione formale è la principale componente figurativa, sono da sottolineare alcune opere dove Vanni ha dato prova di una importante padronanza dell’arte figurativa. Importanti e numerose sono le sue realizzazioni anche nel campo dell’illustrazione. Sue sono le illustrazioni per la novella ''Agostino'' di Alberto Moravia, per il libro ''Love'' di Lowell A. Siff e delle ''Contes de Cristal'' di Alice Coléno. Ha tenuto mostre personali in Italia e all’estero ed esposto in mostre collettive di rappresentanza italiana nei musei e nelle gallerie di ogni parte del mondo. "
metrics:
- rouge
- bertscore
model-index:
- name: mt5-base-wiki-summarization
results:
- task:
type: wiki-summarization
name: "Wikipedia Summarization"
dataset:
type: wits
name: "WITS"
metrics:
- type: rouge1
value: 0.348
name: "Test Rouge1"
- type: rouge2
value: 0.200
name: "Test Rouge2"
- type: rougeL
value: 0.315
name: "Test RougeL"
- type: bertscore
value: 0.520
name: "Test BERTScore"
args:
- model_type: "dbmdz/bert-base-italian-xxl-uncased"
- lang: "it"
- num_layers: 10
- rescale_with_baseline: True
- baseline_path: "bertscore_baseline_ita.tsv"
co2_eq_emissions:
emissions: "40g"
source: "Google Cloud Platform Carbon Footprint"
training_type: "fine-tuning"
geographical_location: "Eemshaven, Netherlands, Europe"
hardware_used: "1 TPU v3-8 VM"
thumbnail: https://gsarti.com/publication/it5/featured.png
---
# mT5 Base for Wikipedia Summarization ✂️📑 🇮🇹
This repository contains the checkpoint for the [mT5 Base](https://huggingface.co/google/mt5-base) model fine-tuned on Wikipedia summarization on the [WITS](https://www.semanticscholar.org/paper/WITS%3A-Wikipedia-for-Italian-Text-Summarization-Casola-Lavelli/ad6c83122e721c7c0db4a40727dac3b4762cd2b1) dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for use in TensorFlow, PyTorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
wikisum = pipeline("summarization", model='it5/mt5-base-wiki-summarization')
wikisum("Le dimensioni dell'isola sono di 8 km di lunghezza e di 3,2 km di larghezza. Si trova a 1,6 km a sud-est dell'isola di Renaud, dalla quale è separata dal passaggio Rodman. La sua altezza è di 100 m. Fu scoperta dall'esploratore e baleniere britannico John Biscoe nel 1832 e venne mappata durante una spedizione antartica francese realizzata nel primo decennio del XX secolo. Al comando della spedizione era Jean-Baptiste Charcot e il nome fu scelto per onorare l'esploratore e geografo francese Charles Rabot. === Rivendicazioni territoriali === * Secondo l'Argentina appartiene al dipartimento dell'Antartide Argentina nella provincia della Terra del Fuoco. * Secondo il Cile appartiene al comune antartico della provincia cilena antartica nella regione di Magallanes e dell'Antartico cileno. * Secondo il Regno Unito fa parte del territorio antartico britannico. Per il Trattato Antartico tali rivendicazioni sono sospese. Sull'isola è presente il rifugio Guillochon, sito storico antartico. ")
>>> [{"summary_text": "L' '''isola di Rabot''' si trova in prossimità dell'isola di Renaud, a sud dell'Argentina."}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/mt5-base-wiki-summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/mt5-base-wiki-summarization")
```
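Once the tokenizer and model are loaded this way, a summary can also be generated explicitly. The sketch below is a minimal example: the input is a shortened version of the Rabot Island passage above, and the generation settings (beam size, length limits) are illustrative assumptions rather than the configuration used for the reported scores.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/mt5-base-wiki-summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/mt5-base-wiki-summarization")
# Illustrative input: the opening sentences of the Rabot Island example above.
article = ("Le dimensioni dell'isola sono di 8 km di lunghezza e di 3,2 km di larghezza. "
           "Si trova a 1,6 km a sud-est dell'isola di Renaud, dalla quale è separata dal passaggio Rodman.")
# Tokenize, generate and decode the summary.
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, num_beams=4, max_length=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```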
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
``` |
izumi-lab/electra-small-paper-japanese-fin-generator | 6da6171bd9f9f18a61a3e8c891571fc50686bb43 | 2022-03-19T09:40:35.000Z | [
"pytorch",
"electra",
"fill-mask",
"ja",
"dataset:wikipedia",
"dataset:securities reports",
"dataset:summaries of financial results",
"arxiv:2003.10555",
"transformers",
"finance",
"license:cc-by-sa-4.0",
"autotrain_compatible"
]
| fill-mask | false | izumi-lab | null | izumi-lab/electra-small-paper-japanese-fin-generator | 8 | null | transformers | 13,131 | ---
language: ja
license: cc-by-sa-4.0
tags:
- finance
datasets:
- wikipedia
- securities reports
- summaries of financial results
widget:
- text: 流動[MASK]は1億円となりました。
---
# ELECTRA small Japanese finance generator
This is a [ELECTRA](https://github.com/google-research/electra) model pretrained on texts in the Japanese language.
The codes for the pretraining are available at [retarfi/language-pretraining](https://github.com/retarfi/language-pretraining/tree/v1.0).
## Model architecture
The model architecture is the same as ELECTRA small in the [original ELECTRA paper](https://arxiv.org/abs/2003.10555); 12 layers, 64 dimensions of hidden states, and 1 attention head.
## Training Data
The models are trained on the Japanese version of Wikipedia and a Japanese financial corpus.
The Wikipedia corpus is generated from the Japanese Wikipedia dump file as of June 1, 2021; the corpus file is 2.9GB, consisting of approximately 20M sentences.
The financial corpus consists of 2 corpora:
- Summaries of financial results from October 9, 2012, to December 31, 2020
- Securities reports from February 8, 2018, to December 31, 2020
The financial corpus file is 5.2GB, consisting of approximately 27M sentences.
## Tokenization
The texts are first tokenized by MeCab with IPA dictionary and then split into subwords by the WordPiece algorithm.
The vocabulary size is 32768.
## Training
The models are trained with the same configuration as ELECTRA small in the [original ELECTRA paper](https://arxiv.org/abs/2003.10555); 128 tokens per instance, 128 instances per batch, and 1M training steps.
The size of the generator is 1/4 of the size of the discriminator.
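As a usage reference, the generator checkpoint can be exercised through a fill-mask pipeline, as in the minimal sketch below. This is an illustrative assumption rather than an official example; because the tokenizer is MeCab-based, the `fugashi` and `ipadic` packages are expected to be installed.
```python
from transformers import pipeline
# Minimal fill-mask sketch (illustrative); the MeCab-based tokenizer needs fugashi/ipadic installed.
fill_mask = pipeline(
    "fill-mask",
    model="izumi-lab/electra-small-paper-japanese-fin-generator",
)
# Widget example from this card: "current [MASK] amounted to 100 million yen."
print(fill_mask("流動[MASK]は1億円となりました。"))
```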
## Citation
**There will be another paper for this pretrained model. Be sure to check here again when you cite.**
```
@inproceedings{suzuki2021fin-bert-electra,
title={金融文書を用いた事前学習言語モデルの構築と検証},
% title={Construction and Validation of a Pre-Trained Language Model Using Financial Documents},
author={鈴木 雅弘 and 坂地 泰紀 and 平野 正徳 and 和泉 潔},
% author={Masahiro Suzuki and Hiroki Sakaji and Masanori Hirano and Kiyoshi Izumi},
booktitle={人工知能学会第27回金融情報学研究会(SIG-FIN)},
% booktitle={Proceedings of JSAI Special Interest Group on Financial Infomatics (SIG-FIN) 27},
pages={5-10},
year={2021}
}
```
## Licenses
The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 4.0](https://creativecommons.org/licenses/by-sa/4.0/).
## Acknowledgments
This work was supported by JSPS KAKENHI Grant Number JP21K12010.
|
jaesun/dpr-bert-model | a7d653a79c770cb69ece8830d9e86d8c42fe923d | 2022-02-16T18:18:10.000Z | [
"pytorch",
"dpr",
"transformers"
]
| null | false | jaesun | null | jaesun/dpr-bert-model | 8 | null | transformers | 13,132 | Entry not found |
jannesg/bertsson | 45b653d8a15fb64c64e3e3c018d7d01e01429553 | 2021-05-19T20:36:10.000Z | [
"pytorch",
"jax",
"bert",
"sv",
"transformers"
]
| null | false | jannesg | null | jannesg/bertsson | 8 | null | transformers | 13,133 | ---
language: sv
---
# BERTSSON Models
The models are trained on:
- Government Text
- Swedish Literature
- Swedish News
Corpus size: Roughly 6B tokens.
The following models are currently available:
- **bertsson** - A BERT base model trained with the same hyperparameters as first published by Google.
All models are cased and trained with whole word masking.
Stay tuned for evaluations.
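In the meantime, a minimal usage sketch is shown below. It assumes the checkpoint exposes a masked-language-modeling head, and the Swedish example sentence is purely illustrative.
```python
from transformers import pipeline
# Fill-mask sketch for the Swedish BERT model; the example sentence is illustrative.
fill_mask = pipeline("fill-mask", model="jannesg/bertsson")
print(fill_mask("Stockholm är Sveriges [MASK]."))
```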
|
jannesg/takalane_ssw_roberta | d82d520766ae546e67a03e4251b48b0f9564d389 | 2021-09-22T08:52:08.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"tn",
"transformers",
"masked-lm",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | jannesg | null | jannesg/takalane_ssw_roberta | 8 | null | transformers | 13,134 | ---
language:
- tn
thumbnail: https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg
tags:
- tn
- fill-mask
- pytorch
- roberta
- masked-lm
license: mit
---
# Takalani Sesame - Tswana 🇿🇦
<img src="https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg" width="600"/>
## Model description
Takalani Sesame (named after the South African version of Sesame Street) is a project that aims to promote the use of South African languages in NLP, and in particular to look at techniques for low-resource languages to equalise performance with larger languages around the world.
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("jannesg/takalane_ssw_roberta")
model = AutoModelWithLMHead.from_pretrained("jannesg/takalane_ssw_roberta")
```
#### Limitations and bias
Updates will be added continuously to improve performance.
## Training data
Data collected from [https://wortschatz.uni-leipzig.de/en](https://wortschatz.uni-leipzig.de/en) <br/>
**Sentences:** 380
## Training procedure
No preprocessing. Standard Huggingface hyperparameters.
## Author
Jannes Germishuys [website](http://jannesgg.github.io)
|
jcblaise/electra-tagalog-base-uncased-generator | a112830e18a71bbcf8dc6d795aee2f383ba461f2 | 2021-11-11T06:19:05.000Z | [
"pytorch",
"electra",
"fill-mask",
"tl",
"transformers",
"tagalog",
"filipino",
"license:gpl-3.0",
"autotrain_compatible"
]
| fill-mask | false | jcblaise | null | jcblaise/electra-tagalog-base-uncased-generator | 8 | null | transformers | 13,135 | ---
language: tl
tags:
- electra
- tagalog
- filipino
license: gpl-3.0
inference: false
---
# ELECTRA Tagalog Base Uncased Generator
Tagalog ELECTRA model pretrained with a large corpus scraped from the internet. This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community.
This is the generator model used to sample synthetic text and pretrain the discriminator. Only use this model for retraining and mask-filling. For the actual model for downstream tasks, please refer to the discriminator models.
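For reference, a mask-filling call might look like the minimal sketch below; the Tagalog example sentence is an illustrative assumption, not an official benchmark input.
```python
from transformers import pipeline
# Mask-filling sketch with the generator checkpoint; the example sentence is illustrative.
fill_mask = pipeline("fill-mask", model="jcblaise/electra-tagalog-base-uncased-generator")
print(fill_mask("Maganda ang [MASK] ngayon."))
```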
## Citations
All model details and training setups can be found in our papers. If you use our model or find it useful in your projects, please cite our work:
```
@inproceedings{cruz2021exploiting,
title={Exploiting News Article Structure for Automatic Corpus Generation of Entailment Datasets},
author={Cruz, Jan Christian Blaise and Resabal, Jose Kristian and Lin, James and Velasco, Dan John and Cheng, Charibeth},
booktitle={Pacific Rim International Conference on Artificial Intelligence},
pages={86--99},
year={2021},
organization={Springer}
}
```
## Data and Other Resources
Data used to train this model as well as other benchmark datasets in Filipino can be found in my website at https://blaisecruz.com
## Contact
If you have questions, concerns, or if you just want to chat about NLP and low-resource languages in general, you may reach me through my work email at [email protected]
|
jgammack/distilbert-base-mean-pooling | 4db12058d9948879a8e39cf49a67423af8a04537 | 2022-02-11T15:49:11.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
]
| sentence-similarity | false | jgammack | null | jgammack/distilbert-base-mean-pooling | 8 | null | sentence-transformers | 13,136 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# jgammack/distilbert-base-mean-pooling
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('jgammack/distilbert-base-mean-pooling')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('jgammack/distilbert-base-mean-pooling')
model = AutoModel.from_pretrained('jgammack/distilbert-base-mean-pooling')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=jgammack/distilbert-base-mean-pooling)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
ji-xin/roberta_base-MRPC-two_stage | 06424964347961768598bdf64dcbfb778079577f | 2021-05-20T17:13:04.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | ji-xin | null | ji-xin/roberta_base-MRPC-two_stage | 8 | null | transformers | 13,137 | Entry not found |
jimregan/bert-base-irish-cased-v1-finetuned-ner | d9b6130577e11f31c5209528ced5d6e0812ec33b | 2021-12-01T19:14:22.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"ga",
"dataset:wikiann",
"transformers",
"generated_from_trainer",
"irish",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | jimregan | null | jimregan/bert-base-irish-cased-v1-finetuned-ner | 8 | null | transformers | 13,138 | ---
license: apache-2.0
language: ga
tags:
- generated_from_trainer
- irish
datasets:
- wikiann
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-irish-cased-v1-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wikiann
type: wikiann
args: ga
metrics:
- name: Precision
type: precision
value: 0.8190601668862538
- name: Recall
type: recall
value: 0.8363228699551569
- name: F1
type: f1
value: 0.8276015087641446
- name: Accuracy
type: accuracy
value: 0.9306559069156423
widget:
- text: "Saolaíodh Pádraic Ó Conaire i nGaillimh sa bhliain 1882."
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-irish-cased-v1-finetuned-ner
This model is a fine-tuned version of [DCU-NLP/bert-base-irish-cased-v1](https://huggingface.co/DCU-NLP/bert-base-irish-cased-v1) on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2468
- Precision: 0.8191
- Recall: 0.8363
- F1: 0.8276
- Accuracy: 0.9307
## Model description
More information needed
## Intended uses & limitations
More information needed
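Until more documentation is added, a minimal inference sketch is given below, using the widget sentence from this card; the `aggregation_strategy` setting is an illustrative assumption.
```python
from transformers import pipeline
# Token-classification sketch; aggregation_strategy is an assumption, not part of the original card.
ner = pipeline(
    "token-classification",
    model="jimregan/bert-base-irish-cased-v1-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Saolaíodh Pádraic Ó Conaire i nGaillimh sa bhliain 1882."))
```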
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 63 | 0.4902 | 0.5579 | 0.5269 | 0.5420 | 0.8458 |
| No log | 2.0 | 126 | 0.3227 | 0.7169 | 0.7417 | 0.7291 | 0.8991 |
| No log | 3.0 | 189 | 0.2720 | 0.7895 | 0.7839 | 0.7867 | 0.9186 |
| No log | 4.0 | 252 | 0.2585 | 0.8128 | 0.8296 | 0.8211 | 0.9264 |
| No log | 5.0 | 315 | 0.2468 | 0.8191 | 0.8363 | 0.8276 | 0.9307 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
jinmang2/bert-base-ko-kornli | 570ab3f6c1508a151ecd4f3da4e1ce2ae6c365ba | 2021-07-09T14:01:40.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | jinmang2 | null | jinmang2/bert-base-ko-kornli | 8 | null | transformers | 13,139 | Entry not found |
jwuthri/autonlp-shipping_status_2-27366103 | 67ed057eab0b38ea6711ada895a9447f292ed5e0 | 2021-10-27T21:34:42.000Z | [
"pytorch",
"distilbert",
"text-classification",
"unk",
"dataset:jwuthri/autonlp-data-shipping_status_2",
"transformers",
"autonlp",
"co2_eq_emissions"
]
| text-classification | false | jwuthri | null | jwuthri/autonlp-shipping_status_2-27366103 | 8 | null | transformers | 13,140 | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- jwuthri/autonlp-data-shipping_status_2
co2_eq_emissions: 32.912881644048
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 27366103
- CO2 Emissions (in grams): 32.912881644048
## Validation Metrics
- Loss: 0.18175844848155975
- Accuracy: 0.9437683592110785
- Precision: 0.9416809605488851
- Recall: 0.8459167950693375
- AUC: 0.9815242330050846
- F1: 0.8912337662337663
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/jwuthri/autonlp-shipping_status_2-27366103
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("jwuthri/autonlp-shipping_status_2-27366103", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("jwuthri/autonlp-shipping_status_2-27366103", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
kapilchauhan/bert-base-uncased-CoLA-finetuned-cola | 4735c8a214a6f6385c3f2109ba8698d1e7b5b83b | 2022-02-24T19:00:10.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | kapilchauhan | null | kapilchauhan/bert-base-uncased-CoLA-finetuned-cola | 8 | null | transformers | 13,141 | ---
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-CoLA-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5755298089385917
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-CoLA-finetuned-cola
This model is a fine-tuned version of [textattack/bert-base-uncased-CoLA](https://huggingface.co/textattack/bert-base-uncased-CoLA) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8318
- Matthews Correlation: 0.5755
## Model description
More information needed
## Intended uses & limitations
More information needed
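As a placeholder, a minimal inference sketch follows; the example sentence is illustrative, and the emitted label names depend on the checkpoint configuration.
```python
from transformers import pipeline
# Acceptability-classification sketch (CoLA fine-tune); the example sentence is illustrative.
classifier = pipeline(
    "text-classification",
    model="kapilchauhan/bert-base-uncased-CoLA-finetuned-cola",
)
print(classifier("The book was read by the student."))
```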
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.2949 | 1.0 | 535 | 0.5742 | 0.5219 |
| 0.1852 | 2.0 | 1070 | 0.7226 | 0.5573 |
| 0.1196 | 3.0 | 1605 | 0.8318 | 0.5755 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
kapilchauhan/distilbert-base-uncased-CoLA-finetuned-cola | 4b08a2cae5b94a5d157f6c611a2851e501f4ebf3 | 2022-02-24T19:54:55.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | kapilchauhan | null | kapilchauhan/distilbert-base-uncased-CoLA-finetuned-cola | 8 | null | transformers | 13,142 | ---
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-CoLA-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5689051637185746
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-CoLA-finetuned-cola
This model is a fine-tuned version of [textattack/distilbert-base-uncased-CoLA](https://huggingface.co/textattack/distilbert-base-uncased-CoLA) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6996
- Matthews Correlation: 0.5689
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 134 | 0.6061 | 0.5074 |
| No log | 2.0 | 268 | 0.5808 | 0.5652 |
| No log | 3.0 | 402 | 0.6996 | 0.5689 |
| 0.0952 | 4.0 | 536 | 0.8249 | 0.5385 |
| 0.0952 | 5.0 | 670 | 0.8714 | 0.5567 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
keshan/sinhala-gpt2-newswire | 11de99e2d1c3bcb30c11e454f247d5b658514f56 | 2021-07-16T15:46:36.000Z | [
"pytorch",
"gpt2",
"text-generation",
"si",
"transformers",
"sinhala"
]
| text-generation | false | keshan | null | keshan/sinhala-gpt2-newswire | 8 | null | transformers | 13,143 | ---
language: si
tags:
- sinhala
- gpt2
pipeline_tag: text-generation
widget:
- text: "මම"
---
This is a finetuned version of keshan/sinhala-gpt2 with newswire articles. It was finetuned on ~12MB of data
- Num examples=8395
- Batch size =8
It got a Perplexity of 3.15 |
kingabzpro/wav2vec2-large-xlsr-53-punjabi | 93ea753e3ff04a80b7c6fc67b6f4255b0a2cec28 | 2022-03-23T18:28:00.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"pa-IN",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"hf-asr-leaderboard",
"robust-speech-event",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | kingabzpro | null | kingabzpro/wav2vec2-large-xlsr-53-punjabi | 8 | 2 | transformers | 13,144 | ---
language:
- pa-IN
license: apache-2.0
tags:
- automatic-speech-recognition
- hf-asr-leaderboard
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
metrics:
- wer
- cer
model-index:
- name: wav2vec2-punjabi-V8-Abid
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: mozilla-foundation/common_voice_8_0
name: Common Voice pa-IN
args: pa-IN
metrics:
- type: wer
value: 36.02
name: Test WER With LM
- type: cer
value: 12.81
name: Test CER With LM
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-punjabi
This model is a fine-tuned version of [Harveenchadha/vakyansh-wav2vec2-punjabi-pam-10](https://huggingface.co/Harveenchadha/vakyansh-wav2vec2-punjabi-pam-10) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2101
- Wer: 0.4939
- Cer: 0.2238
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id kingabzpro/wav2vec2-large-xlsr-53-punjabi --dataset mozilla-foundation/common_voice_8_0 --config pa-IN --split test
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "kingabzpro/wav2vec2-large-xlsr-53-punjabi"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "pa-IN", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 11.0563 | 3.7 | 100 | 1.9492 | 0.7123 | 0.3872 |
| 1.6715 | 7.41 | 200 | 1.3142 | 0.6433 | 0.3086 |
| 0.9117 | 11.11 | 300 | 1.2733 | 0.5657 | 0.2627 |
| 0.666 | 14.81 | 400 | 1.2730 | 0.5598 | 0.2534 |
| 0.4225 | 18.52 | 500 | 1.2548 | 0.5300 | 0.2399 |
| 0.3209 | 22.22 | 600 | 1.2166 | 0.5229 | 0.2372 |
| 0.2678 | 25.93 | 700 | 1.1795 | 0.5041 | 0.2276 |
| 0.2088 | 29.63 | 800 | 1.2101 | 0.4939 | 0.2238 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
kuzgunlar/electra-turkish-ner | 44b6abf5f47415e603aeaa2247688bc5ff17fdc5 | 2020-07-31T08:55:28.000Z | [
"pytorch",
"electra",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | kuzgunlar | null | kuzgunlar/electra-turkish-ner | 8 | 1 | transformers | 13,145 | Entry not found |
lalopey/saeed | 7aaaeca4298c220d52cb5459f809c2a0f1fab206 | 2021-05-23T06:27:31.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | lalopey | null | lalopey/saeed | 8 | null | transformers | 13,146 | Entry not found |
lewtun/bert-base-japanese-char-v2-finetuned-amazon-jap | f652ef6953192d8c50982176d2563293c85f650d | 2021-10-01T14:35:30.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | lewtun | null | lewtun/bert-base-japanese-char-v2-finetuned-amazon-jap | 8 | null | transformers | 13,147 | Entry not found |
lewtun/results | 6c04f481e033451b924351337e994cfb73950aaa | 2021-10-18T13:16:42.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | lewtun | null | lewtun/results | 8 | null | transformers | 13,148 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: results
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.925
- name: F1
type: f1
value: 0.9251012149383893
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2147
- Accuracy: 0.925
- F1: 0.9251
## Model description
More information needed
## Intended uses & limitations
More information needed
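A minimal inference sketch is shown below; the example text is illustrative, and the emitted label names depend on the checkpoint configuration.
```python
from transformers import pipeline
# Emotion-classification sketch; the example text is illustrative.
classifier = pipeline("text-classification", model="lewtun/results")
print(classifier("I'm so happy the model finally finished training!"))
```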
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8221 | 1.0 | 250 | 0.3106 | 0.9125 | 0.9102 |
| 0.2537 | 2.0 | 500 | 0.2147 | 0.925 | 0.9251 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1+cu102
- Datasets 1.13.0
- Tokenizers 0.10.3
|
lfcc/bert-base-pt-archive | db870618b1d60f2c46d616595f64f1f7d62ebb07 | 2022-01-18T17:19:51.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"autotrain_compatible"
]
| token-classification | false | lfcc | null | lfcc/bert-base-pt-archive | 8 | null | transformers | 13,149 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model_index:
- name: bert-base-pt-archive
results:
- task:
name: Token Classification
type: token-classification
metric:
name: Accuracy
type: accuracy
value: 0.9700325118974698
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-pt-archive
This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1140
- Precision: 0.9147
- Recall: 0.9483
- F1: 0.9312
- Accuracy: 0.9700
## Model description
More information needed
## Intended uses & limitations
More information needed
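In the absence of documented usage, a minimal token-classification sketch is given below; the Portuguese example sentence and the `aggregation_strategy` setting are illustrative assumptions.
```python
from transformers import pipeline
# Token-classification sketch for the Portuguese archival NER model; sentence and settings are illustrative.
ner = pipeline(
    "token-classification",
    model="lfcc/bert-base-pt-archive",
    aggregation_strategy="simple",
)
print(ner("Auto de batismo de Maria, filha de João Rodrigues, lavrado na freguesia de Braga em 1782."))
```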
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 192 | 0.1438 | 0.8917 | 0.9392 | 0.9148 | 0.9633 |
| 0.2454 | 2.0 | 384 | 0.1222 | 0.8985 | 0.9417 | 0.9196 | 0.9671 |
| 0.0526 | 3.0 | 576 | 0.1098 | 0.9150 | 0.9481 | 0.9312 | 0.9698 |
| 0.0372 | 4.0 | 768 | 0.1140 | 0.9147 | 0.9483 | 0.9312 | 0.9700 |
### Framework versions
- Transformers 4.10.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.10.2
- Tokenizers 0.10.3
|
lgris/bp-voxforge1-xlsr | 58cb7fc6190d3fe1e0f550a2c71fbc858d0a7888 | 2021-11-27T21:14:32.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:common_voice",
"dataset:mls",
"dataset:cetuc",
"dataset:lapsbm",
"dataset:voxforge",
"dataset:tedx",
"dataset:sid",
"transformers",
"audio",
"speech",
"portuguese-speech-corpus",
"PyTorch",
"license:apache-2.0"
]
| automatic-speech-recognition | false | lgris | null | lgris/bp-voxforge1-xlsr | 8 | null | transformers | 13,150 | ---
language: pt
datasets:
- common_voice
- mls
- cetuc
- lapsbm
- voxforge
- tedx
- sid
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- portuguese-speech-corpus
- automatic-speech-recognition
- speech
- PyTorch
license: apache-2.0
---
# voxforge1-xlsr: Wav2vec 2.0 with VoxForge Dataset
This is a demonstration of a fine-tuned Wav2vec model for Brazilian Portuguese using the [VoxForge](http://www.voxforge.org/) dataset.
In this notebook the model is tested against other available Brazilian Portuguese datasets.
| Dataset | Train | Valid | Test |
|--------------------------------|-------:|------:|------:|
| CETUC | | -- | 5.4h |
| Common Voice | | -- | 9.5h |
| LaPS BM | | -- | 0.1h |
| MLS | | -- | 3.7h |
| Multilingual TEDx (Portuguese) | | -- | 1.8h |
| SID | | -- | 1.0h |
| VoxForge | 3.9h | -- | 0.1h |
| Total | 3.9h | -- | 21.6h |
#### Summary
| | CETUC | CV | LaPS | MLS | SID | TEDx | VF | AVG |
|----------------------|---------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|
| voxforge\_1 (demonstration below) | 0.468 | 0.608 | 0.503 | 0.505 | 0.717 | 0.731 | 0.561 | 0.584 |
| voxforge\_1 + 4-gram (demonstration below) | 0.322 | 0.471 | 0.356 | 0.378 | 0.586 | 0.637 | 0.428 | 0.454 |
## Demonstration
```python
MODEL_NAME = "lgris/voxforge1-xlsr"
```
### Imports and dependencies
```python
%%capture
!pip install torch==1.8.2+cu111 torchvision==0.9.2+cu111 torchaudio===0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
!pip install datasets
!pip install jiwer
!pip install transformers
!pip install soundfile
!pip install pyctcdecode
!pip install https://github.com/kpu/kenlm/archive/master.zip
```
```python
import jiwer
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
from pyctcdecode import build_ctcdecoder
import torch
import re
import sys
```
### Helpers
```python
chars_to_ignore_regex = '[\,\?\.\!\;\:\"]' # noqa: W605
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = speech.squeeze(0).numpy()
batch["sampling_rate"] = 16_000
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
batch["target"] = batch["sentence"]
return batch
```
```python
def calc_metrics(truths, hypos):
wers = []
mers = []
wils = []
for t, h in zip(truths, hypos):
try:
wers.append(jiwer.wer(t, h))
mers.append(jiwer.mer(t, h))
wils.append(jiwer.wil(t, h))
except: # Empty string?
pass
wer = sum(wers)/len(wers)
mer = sum(mers)/len(mers)
wil = sum(wils)/len(wils)
return wer, mer, wil
```
```python
def load_data(dataset):
data_files = {'test': f'{dataset}/test.csv'}
dataset = load_dataset('csv', data_files=data_files)["test"]
return dataset.map(map_to_array)
```
### Model
```python
class STT:
def __init__(self,
model_name,
device='cuda' if torch.cuda.is_available() else 'cpu',
lm=None):
self.model_name = model_name
self.model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
self.processor = Wav2Vec2Processor.from_pretrained(model_name)
self.vocab_dict = self.processor.tokenizer.get_vocab()
self.sorted_dict = {
k.lower(): v for k, v in sorted(self.vocab_dict.items(),
key=lambda item: item[1])
}
self.device = device
self.lm = lm
if self.lm:
self.lm_decoder = build_ctcdecoder(
list(self.sorted_dict.keys()),
self.lm
)
def batch_predict(self, batch):
features = self.processor(batch["speech"],
sampling_rate=batch["sampling_rate"][0],
padding=True,
return_tensors="pt")
input_values = features.input_values.to(self.device)
attention_mask = features.attention_mask.to(self.device)
with torch.no_grad():
logits = self.model(input_values, attention_mask=attention_mask).logits
if self.lm:
logits = logits.cpu().numpy()
batch["predicted"] = []
for sample_logits in logits:
batch["predicted"].append(self.lm_decoder.decode(sample_logits))
else:
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = self.processor.batch_decode(pred_ids)
return batch
```
### Download datasets
```python
%%capture
!gdown --id 1HFECzIizf-bmkQRLiQD0QVqcGtOG5upI
!mkdir bp_dataset
!unzip bp_dataset -d bp_dataset/
```
### Tests
```python
stt = STT(MODEL_NAME)
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.4684840205331983
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.6080167359840954
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.5037468434343434
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.505595213971485
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.7177723323755854
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.7309431974873112
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.5613906926406929
### Tests with LM
```python
# !find -type f -name "*.wav" -delete
!rm -rf ~/.cache
!gdown --id 1GJIKseP5ZkTbllQVgOL98R4yYAcIySFP # trained with wikipedia
stt = STT(MODEL_NAME, lm='pt-BR-wiki.word.4-gram.arpa')
# !gdown --id 1dLFldy7eguPtyJj5OAlI4Emnx0BpFywg # trained with bp
# stt = STT(MODEL_NAME, lm='pt-BR.word.4-gram.arpa')
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.32184971297675896
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.4707820098981609
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.356227904040404
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.3786376653384398
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.5864959640811857
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.6368727228726417
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.4279924242424241
|
liangxiaoxiao/bert_cn_finetuning | a87ca284e6f50db8a6f91c0862395673bb09756f | 2021-05-19T22:00:27.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | liangxiaoxiao | null | liangxiaoxiao/bert_cn_finetuning | 8 | null | transformers | 13,151 | Entry not found |
limjiayi/bert-hateful-memes-expanded | 87c19e75ba73ee9a2fae78a8ed04073744d7ecef | 2021-12-04T04:38:38.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| fill-mask | false | limjiayi | null | limjiayi/bert-hateful-memes-expanded | 8 | null | transformers | 13,152 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-hateful-memes-expanded
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-hateful-memes-expanded
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on texts from the following datasets:
- [Hateful Memes](https://hatefulmemeschallenge.com/), `train`, `dev_seen` and `dev_unseen`
- [HarMeme](https://github.com/di-dimitrov/harmeme), `train`, `val` and `test`
- [MultiOFF](https://github.com/bharathichezhiyan/Multimodal-Meme-Classification-Identifying-Offensive-Content-in-Image-and-Text), `Training`, `Validation` and `Testing`
It achieves the following results on the evaluation set:
- Loss: 3.7600
## Model description
More information needed
## Intended uses & limitations
More information needed
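For quick checks, the masked-language-modeling head can be queried as in the minimal sketch below; the example sentence is illustrative.
```python
from transformers import pipeline
# Fill-mask sketch; the example sentence is illustrative.
fill_mask = pipeline("fill-mask", model="limjiayi/bert-hateful-memes-expanded")
print(fill_mask("this meme is so [MASK]."))
```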
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.11.0
- Pytorch 1.8.1+cu102
- Datasets 1.8.0
- Tokenizers 0.10.2
|
lincoln/2021twitchfr-conv-bert-small | 9e69c28e74095161070c16485989cb7857bcc93d | 2022-01-07T15:25:20.000Z | [
"pytorch",
"tf",
"tensorboard",
"convbert",
"feature-extraction",
"fr",
"transformers",
"twitch",
"license:mit"
]
| feature-extraction | false | lincoln | null | lincoln/2021twitchfr-conv-bert-small | 8 | null | transformers | 13,153 | ---
language:
- fr
license: mit
pipeline_tag: "feature-extraction"
widget:
- text: LUL +1 xD La Fronce !
tags:
- feature-extraction
- convbert
- twitch
---
## Language model on French Twitch data
The main objective of this experiment at Lincoln was to apply NLP techniques from scratch to a corpus of messages from a Twitch chat. These messages are written in French, but on an internet platform, with the internet vocabulary that this implies (typos, community vocabulary, abbreviations, anglicisms, emotes, ...).
Our constraints are those of a company with neither an excessive volume of data nor unlimited computing power.
It was necessary to build a new tokenizer that better matches our corpus, rather than using an existing French tokenizer.
Since our corpus is small compared to the data usually used to train a BERT model, we opted for training a so-called "small" model. The literature has shown that a corpus of a few gigabytes can give good results, which is why we went ahead with our corpus.
The limit on computing power was worked around with a new training architecture based on a dual generator/discriminator model.
This allowed us to train a ConvBERT language model on our data, as well as a masking model, in a few hours on a V100 GPU.
_We do not guarantee the long-term stability of the model. This model was built as part of a POC._
## Data
| Streamer | Number of messages | Notable categories in 2021 |
| --------------------------------------------- | --------------- | ---------------------------------- |
| Ponce | 2 604 935 | Chatting/Mario Kart/FIFA |
| Domingo | 1 209 703 | Chatting/talk-shows/FM2O21 |
| Mistermv | 1 205 882 | Isaac/Special events/TFT |
| Zerator | 900 894 | New World/WOW/Valorant |
| Blitzstream | 821 585 | Chess |
| Squeezie | 602 148 | Chatting / Minecraft |
| Antoinedaniellive | 548 497 | Geoguessr |
| Jeanmassietaccropolis/jeanmassiet | 301 387 | Talk-shows/chatting/special events |
| Samueletienne | 215 956 | chatting |
Over the period from 12/03/2021 to 22/07/2021, there are 9,410,987 messages in total across these nine streamers. These messages come from the IRC channel and therefore were not moderated.
The training data is based on the ELECTRA training format. This requires formatting the data as paragraphs, split into sentences. We chose to group messages within a 60-second window, acting as a paragraph, with the following conditions:
* Length greater than 170 characters (about 50 tokens on average), so as not to create instances that carry little information because they are mostly empty: padding would be needed and slows down training.
* 128 tokens maximum (default)
If the maximum length is reached, a second instance is created. In the end, there are 554,974 training instances.
## Application
See the public GitHub repository [lincoln/twitchatds](https://github.com/Lincoln-France/twitchatds) for implementation details and results.
## Remarks
* One-off experiment
* Training metrics are available in the _Training metrics_ tab
* For better stability, the data should be more heterogeneous and larger. The model should be trained for more than 24 hours.
## Usage
```python
from transformers import AutoTokenizer, ConvBertModel
from transformers import FeatureExtractionPipeline
model_name = 'lincoln/2021twitchfr-conv-bert-small'
loaded_tokenizer = AutoTokenizer.from_pretrained(model_name)
loaded_model = ConvBertModel.from_pretrained(model_name)
nlp = FeatureExtractionPipeline(model=loaded_model, tokenizer=loaded_tokenizer)
nlp("<3 <3 les modos")
```
## Models:
* [2021twitchfr-conv-bert-small](https://huggingface.co/lincoln/2021twitchfr-conv-bert-small)
* [2021twitchfr-conv-bert-small-mlm](https://huggingface.co/lincoln/2021twitchfr-conv-bert-small-mlm)
* [2021twitchfr-conv-bert-small-mlm-simcse](https://huggingface.co/lincoln/2021twitchfr-conv-bert-small-mlm-simcse)
|
loubau/WoBERT | ba95e26c14c80c35dae6a4f20eb35045fd9ac4ef | 2021-10-13T09:48:46.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | loubau | null | loubau/WoBERT | 8 | null | transformers | 13,154 | Entry not found |
madlag/bert-base-uncased-squad1.1-block-sparse-0.07-v1 | 885d026e7d5b098c4b477cb7923a0c484c37a8cf | 2021-05-19T22:31:59.000Z | [
"pytorch",
"tf",
"bert",
"question-answering",
"en",
"dataset:squad",
"arxiv:2005.07683",
"transformers",
"bert-base",
"license:mit",
"autotrain_compatible"
]
| question-answering | false | madlag | null | madlag/bert-base-uncased-squad1.1-block-sparse-0.07-v1 | 8 | null | transformers | 13,155 | ---
language: en
thumbnail:
license: mit
tags:
- question-answering
- bert
- bert-base
datasets:
- squad
metrics:
- squad
widget:
- text: "Where is the Eiffel Tower located?"
context: "The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France. It is named after the engineer Gustave Eiffel, whose company designed and built the tower."
- text: "Who is Frederic Chopin?"
context: "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano."
---
## BERT-base uncased model fine-tuned on SQuAD v1
This model is block sparse: the **linear** layers contain **7.5%** of the original weights.
The model contains **28.2%** of the original weights **overall**.
The training uses a modified version of Victor Sanh's [Movement Pruning](https://arxiv.org/abs/2005.07683) method.
That means that with the [block-sparse](https://github.com/huggingface/pytorch_block_sparse) runtime it ran **1.92x** faster than a dense network on the evaluation, at the price of some impact on the accuracy (see below).
This model was fine-tuned from the HuggingFace [BERT](https://www.aclweb.org/anthology/N19-1423/) base uncased checkpoint on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer), and distilled from the equivalent model [csarron/bert-base-uncased-squad-v1](https://huggingface.co/csarron/bert-base-uncased-squad-v1).
This model is case-insensitive: it does not make a difference between english and English.
## Pruning details
A side-effect of the block pruning is that some of the attention heads are completely removed: 106 heads were removed out of a total of 144 (73.6%).
Here is a detailed view on how the remaining heads are distributed in the network after pruning.

## Density plot
<script src="/madlag/bert-base-uncased-squad1.1-block-sparse-0.07-v1/raw/main/model_card/density.js" id="9301e950-59b1-497b-a2c5-25c24e07b3a0"></script>
## Details
| Dataset | Split | # samples |
| -------- | ----- | --------- |
| SQuAD1.1 | train | 90.6K |
| SQuAD1.1 | eval | 11.1k |
### Fine-tuning
- Python: `3.8.5`
- Machine specs:
```CPU: Intel(R) Core(TM) i7-6700K CPU
Memory: 64 GiB
GPUs: 1 GeForce GTX 3090, with 24GiB memory
GPU driver: 455.23.05, CUDA: 11.1
```
### Results
**Pytorch model file size**: `335M` (original BERT: `438M`)
| Metric | # Value | # Original ([Table 2](https://www.aclweb.org/anthology/N19-1423.pdf))|
| ------ | --------- | --------- |
| **EM** | **71.88** | **80.8** |
| **F1** | **81.36** | **88.5** |
## Example Usage
```python
from transformers import pipeline
qa_pipeline = pipeline(
"question-answering",
model="madlag/bert-base-uncased-squad1.1-block-sparse-0.07-v1",
tokenizer="madlag/bert-base-uncased-squad1.1-block-sparse-0.07-v1"
)
predictions = qa_pipeline({
'context': "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano.",
'question': "Who is Frederic Chopin?",
})
print(predictions)
``` |
marcosscarpim/t5-small-finetuned-en-to-ro | 25244166aaaa6810da8df20f91b860e22e44b019 | 2021-12-03T11:44:04.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wmt16",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | marcosscarpim | null | marcosscarpim/t5-small-finetuned-en-to-ro | 8 | null | transformers | 13,156 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: t5-small-finetuned-en-to-ro
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 7.3228
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-ro
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4088
- Bleu: 7.3228
- Gen Len: 18.2581
## Model description
More information needed
## Intended uses & limitations
More information needed
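Until proper documentation is added, a minimal inference sketch follows; the `translate English to Romanian:` prefix is the standard T5 convention and is assumed to apply to this fine-tune as well, and the example sentence is illustrative.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("marcosscarpim/t5-small-finetuned-en-to-ro")
model = AutoModelForSeq2SeqLM.from_pretrained("marcosscarpim/t5-small-finetuned-en-to-ro")
# T5 expects a task prefix; this one is the standard convention inherited from t5-small.
text = "translate English to Romanian: The weather is nice today."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```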
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 0.5959 | 0.4 | 30516 | 1.4088 | 7.3228 | 18.2581 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
masoudmzb/wav2vec2-xlsr-multilingual-53-fa | 4129748ad6295d2f73c155a5d0509a46f5e42f28 | 2021-12-10T07:10:05.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:2006.13979",
"transformers"
]
| automatic-speech-recognition | false | masoudmzb | null | masoudmzb/wav2vec2-xlsr-multilingual-53-fa | 8 | 3 | transformers | 13,157 | # wav2vec 2.0 multilingual ( Finetued )
The base model was pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz. Note that this model should be fine-tuned on a downstream task, like automatic speech recognition. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information.
[Paper](https://arxiv.org/abs/2006.13979)
Authors: Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli
**Abstract** This paper presents XLSR which learns cross-lingual speech representations by pretraining a single model from the raw waveform of speech in multiple languages. We build on wav2vec 2.0 which is trained by solving a contrastive task over masked latent speech representations and jointly learns a quantization of the latents shared across languages. The resulting model is fine-tuned on labeled data and experiments show that cross-lingual pretraining significantly outperforms monolingual pretraining. On the CommonVoice benchmark, XLSR shows a relative phoneme error rate reduction of 72% compared to the best known results. On BABEL, our approach improves word error rate by 16% relative compared to a comparable system. Our approach enables a single multilingual speech recognition model which is competitive to strong individual models. Analysis shows that the latent discrete speech representations are shared across languages with increased sharing for related languages. We hope to catalyze research in low-resource speech understanding by releasing XLSR-53, a large model pretrained in 53 languages.
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Persian (Farsi) using [Common Voice](https://huggingface.co/datasets/common_voice) plus our own dataset (about one third of the total data). When using this model, make sure that your speech input is sampled at 16kHz.
## Evaluation: 🌡️
We have evaluated the model on a private dataset with different types of audio (unfortunately, the test and validation data are not publicly available, but you can see a sample of the dataset [at this link](https://github.com/shenasa-ai/speech2text#part-of-our-dataset-v01--)):
| Name | test dataset (wer) |
| :----------------------------------------------------------: | :-----------------: |
| [m3hrdadfi/wav2vec2-large-xlsr-persian-v3](https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-persian-v3) | 0.56754 |
| [This New Model](https://huggingface.co/masoudmzb/wav2vec2-xlsr-multilingual-53-fa) | **0.40815** |
| Base Multilingual Model | 0.69746 |
- This table shows that if we add more data we get much better results.
## How to use❓
### Use FineTuned Model
This model is fine-tuned from [m3hrdadfi/wav2vec2-large-xlsr-persian-v3](https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-persian-v3), so the training and evaluation process is the same.
> ```bash
> # requirement packages
> !pip install git+https://github.com/huggingface/datasets.git
> !pip install git+https://github.com/huggingface/transformers.git
> !pip install torchaudio
> !pip install librosa
> !pip install jiwer
> !pip install parsivar
> !pip install num2fawords
> ```
**Normalizer**
```bash
# Normalizer
!wget -O dictionary.py https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-persian-v3/raw/main/dictionary.py
!wget -O normalizer.py https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-persian-v3/raw/main/normalizer.py
```
If you are not sure whether your transcriptions are clean (weird characters or characters from other alphabets), use this code provided by [m3hrdadfi/wav2vec2-large-xlsr-persian-v3](https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-persian-v3).
**Cleaning** (Fill the data part with your own data dir)
```python
import os

import pandas as pd

from normalizer import normalizer
def cleaning(text):
if not isinstance(text, str):
return None
return normalizer({"sentence": text}, return_dict=False)
# edit these parts with your own data directory
data_dir = "data"
test = pd.read_csv(f"{data_dir}/yourtest.tsv", sep=" ")
test["path"] = data_dir + "/clips/" + test["path"]
print(f"Step 0: {len(test)}")
test["status"] = test["path"].apply(lambda path: True if os.path.exists(path) else None)
test = test.dropna(subset=["path"])
test = test.drop("status", 1)
print(f"Step 1: {len(test)}")
test["sentence"] = test["sentence"].apply(lambda t: cleaning(t))
test = test.dropna(subset=["sentence"])
print(f"Step 2: {len(test)}")
test = test.reset_index(drop=True)
print(test.head())
test = test[["path", "sentence"]]
test.to_csv("/content/test.csv", sep=" ", encoding="utf-8", index=False)
```
**Prediction**
```python
import numpy as np
import pandas as pd
import librosa
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets import load_dataset, load_metric
import IPython.display as ipd
model_name_or_path = "masoudmzb/wav2vec2-xlsr-multilingual-53-fa"
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(model_name_or_path, device)
processor = Wav2Vec2Processor.from_pretrained(model_name_or_path)
model = Wav2Vec2ForCTC.from_pretrained(model_name_or_path).to(device)
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
speech_array = speech_array.squeeze().numpy()
speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, processor.feature_extractor.sampling_rate)
batch["speech"] = speech_array
return batch
def predict(batch):
features = processor(
batch["speech"],
sampling_rate=processor.feature_extractor.sampling_rate,
return_tensors="pt",
padding=True
)
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)
return batch
# edit these parts with your own data directory
dataset = load_dataset("csv", data_files={"test": "/path_to/your_test.csv"}, delimiter=" ")["test"]
dataset = dataset.map(speech_file_to_array_fn)
result = dataset.map(predict, batched=True, batch_size=4)
```
**WER Score**
```python
wer = load_metric("wer")
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["predicted"], references=result["sentence"])))
```
**Output**
```python
max_items = np.random.randint(0, len(result), 20).tolist()
for i in max_items:
reference, predicted = result["sentence"][i], result["predicted"][i]
print("reference:", reference)
print("predicted:", predicted)
print('---')
```
## Training details: 🔭
A model had already been trained on the Persian Mozilla Common Voice dataset, so we decided to continue from it. The model is warm-started from `m3hrdadfi`'s [checkpoint](https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-persian-v3).
- For more details, you can take a look at `config.json` in the model repo on the 🤗 Model Hub.
- The model was trained for 84,000 steps, equal to 12.42 epochs.
- The base model for fine-tuning was https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-persian-v3/tree/main
## Fine Tuning Recommendations: 🐤
For fine-tuning you can check the examples linked below, but keep a few tips in mind: you may need gradient accumulation to emulate a larger batch size, and there are many hyperparameters, so make sure you set them properly. The most important ones are listed here, and a minimal configuration sketch follows the list:
- learning_rate
- attention_dropout
- hidden_dropout
- feat_proj_dropout
- mask_time_prob
- layer_drop
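As a rough starting point, these knobs map onto `Wav2Vec2Config` and `TrainingArguments` as in the sketch below; every value shown is an illustrative assumption, not the setting used to train this checkpoint.

```python
from transformers import Wav2Vec2ForCTC, TrainingArguments

model = Wav2Vec2ForCTC.from_pretrained(
    "masoudmzb/wav2vec2-xlsr-multilingual-53-fa",
    attention_dropout=0.1,
    hidden_dropout=0.1,
    feat_proj_dropout=0.05,
    mask_time_prob=0.06,
    layerdrop=0.05,  # "layer_drop" above corresponds to `layerdrop` in Wav2Vec2Config
)

training_args = TrainingArguments(
    output_dir="wav2vec2-fa-finetuned",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=8,  # emulates a larger effective batch size
    learning_rate=3e-4,
    num_train_epochs=10,
    fp16=True,
)
```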
### Fine Tuning Examples 👷♂️👷♀️
| Dataset | Fine Tuning Example |
| ------------------------------------------------ | ------------------------------------------------------------ |
| Fine Tune on Mozilla Turkish Dataset | <a href="https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_Tune_XLSR_Wav2Vec2_on_Turkish_ASR_with_%F0%9F%A4%97_Transformers.ipynb"><img src="https://img.shields.io/static/v1?label=Colab&message=Fine-tuning Example&logo=Google%20Colab&color=f9ab00"></a> |
| Sample Code for Other Dataset And other Language | [github_link](https://github.com/m3hrdadfi/notebooks/) |
## Contact us: 🤝
If you have a technical question regarding the model, pretraining, code or publication, please create an issue in the repository. This is the fastest way to reach us.
## Citation: ↩️
We didn't publish any paper on this work. However, if you use it in yours, please cite us properly with an entry like the one below.
```bibtex
@misc{wav2vec2-xlsr-multilingual-53-fa,
author = {Paparnchi, Seyyed Mohammad Masoud},
title = {wav2vec2-xlsr-multilingual-53-fa},
year = 2021,
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/Hamtech-ai/wav2vec2-fa}},
}
``` |
maximedb/paws-x-all | d232fd88b66349e36140f094be42f7cc925fbcf9 | 2021-10-20T14:53:56.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"transformers"
]
| text-classification | false | maximedb | null | maximedb/paws-x-all | 8 | null | transformers | 13,158 | Entry not found |
megantosh/flair-arabic-MSA-aqmar | 1deaf350c222912700cfdaf7779972f0e33f0653 | 2022-03-09T22:13:31.000Z | [
"pytorch",
"ar",
"dataset:AQMAR",
"dataset:ANERcorp",
"flair",
"Text Classification",
"token-classification",
"sequence-tagger-model",
"license:apache-2.0"
]
| token-classification | false | megantosh | null | megantosh/flair-arabic-MSA-aqmar | 8 | null | flair | 13,159 | ---
language: ar
license: apache-2.0
datasets:
- AQMAR
- ANERcorp
thumbnail: https://www.informatik.hu-berlin.de/en/forschung-en/gebiete/ml-en/resolveuid/a6f82e0d7fa446a59c902cac4cafa9cb/@@images/image/preview
tags:
- flair
- Text Classification
- token-classification
- sequence-tagger-model
metrics:
- f1
widget:
- text: "اختارها خيري بشارة كممثلة، دون سابقة معرفة أو تجربة تمثيلية، لتقف بجانب فاتن حمامة في فيلم «يوم مر ويوم حلو» (1988) وهي ما زالت شابة لم تتخطَ عامها الثاني"
---
# Arabic NER Model for AQMAR dataset
Training was conducted over 86 epochs with a batch size of 48, using a learning rate that decays linearly from 0.3 down to 2e-05, with fastText and Flair forward and backward embeddings.
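A minimal sketch of how such a run could be reproduced with Flair's `ModelTrainer`; the corpus path and column layout below are assumptions, so point them at your own CoNLL-style files.

```python
from flair.datasets import ColumnCorpus
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# assumed location and column format of the AQMAR/ANERcorp files
corpus = ColumnCorpus("data/aqmar", {0: "text", 1: "ner"})

# continue training from the released tagger
tagger = SequenceTagger.load("megantosh/flair-arabic-MSA-aqmar")

trainer = ModelTrainer(tagger, corpus)
trainer.train(
    "resources/taggers/arabic-msa-ner",
    learning_rate=0.3,   # initial learning rate reported above
    mini_batch_size=48,
    max_epochs=150,      # training stopped around epoch 86 through annealing
)
```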
## Original Dataset:
- [AQMAR](http://www.cs.cmu.edu/~ark/ArabicNER/)
## Results:
- F1-score (micro) 0.9323
- F1-score (macro) 0.9272
| | True Positives | False Positives | False Negatives | Precision | Recall | class-F1 |
|------|-----|----|----|---------|--------|----------|
| LOC | 164 | 7 | 13 | 0.9591 | 0.9266 | 0.9425 |
| MISC | 398 | 22 | 37 | 0.9476 | 0.9149 | 0.9310 |
| ORG | 65 | 6 | 9 | 0.9155 | 0.8784 | 0.8966 |
| PER | 199 | 13 | 13 | 0.9387 | 0.9387 | 0.9387 |
---
# Usage
```python
from flair.data import Sentence
from flair.models import SequenceTagger
import pyarabic.araby as araby
from icecream import ic
# flair's stock English NER tagger (for the English example) and the Arabic tagger from this card
tagger = SequenceTagger.load('ner')
arTagger = SequenceTagger.load('megantosh/flair-arabic-MSA-aqmar')

sentence = Sentence('George Washington went to Washington .')
arSentence = Sentence('عمرو عادلي أستاذ للاقتصاد السياسي المساعد في الجامعة الأمريكية بالقاهرة .')

# predict NER tags
tagger.predict(sentence)
arTagger.predict(arSentence)

# print sentences with predicted tags
ic(sentence.to_tagged_string())
ic(arSentence.to_tagged_string())
```
# Example
See an example from a [similar NER model in Flair](https://huggingface.co/megantosh/flair-arabic-multi-ner).
# Model Configuration
```python
(embeddings): StackedEmbeddings(
(list_embedding_0): WordEmbeddings('ar')
(list_embedding_1): FlairEmbeddings(
(lm): LanguageModel(
(drop): Dropout(p=0.1, inplace=False)
(encoder): Embedding(7125, 100)
(rnn): LSTM(100, 2048)
(decoder): Linear(in_features=2048, out_features=7125, bias=True)
)
)
(list_embedding_2): FlairEmbeddings(
(lm): LanguageModel(
(drop): Dropout(p=0.1, inplace=False)
(encoder): Embedding(7125, 100)
(rnn): LSTM(100, 2048)
(decoder): Linear(in_features=2048, out_features=7125, bias=True)
)
)
)
(word_dropout): WordDropout(p=0.05)
(locked_dropout): LockedDropout(p=0.5)
(embedding2nn): Linear(in_features=4396, out_features=4396, bias=True)
(rnn): LSTM(4396, 256, batch_first=True, bidirectional=True)
(linear): Linear(in_features=512, out_features=14, bias=True)
(beta): 1.0
(weights): None
(weight_tensor) None
)"
2021-03-31 22:19:50,654 ----------------------------------------------------------------------------------------------------
2021-03-31 22:19:50,654 Corpus: "Corpus: 3025 train + 336 dev + 373 test sentences"
2021-03-31 22:19:50,654 ----------------------------------------------------------------------------------------------------
2021-03-31 22:19:50,654 Parameters:
2021-03-31 22:19:50,654 - learning_rate: "0.3"
2021-03-31 22:19:50,654 - mini_batch_size: "48"
2021-03-31 22:19:50,654 - patience: "3"
2021-03-31 22:19:50,654 - anneal_factor: "0.5"
2021-03-31 22:19:50,654 - max_epochs: "150"
2021-03-31 22:19:50,654 - shuffle: "True"
2021-03-31 22:19:50,654 - train_with_dev: "False"
2021-03-31 22:19:50,654 - batch_growth_annealing: "False"
2021-03-31 22:19:50,655 ------------------------------------
```
Due to the right-to-left script being rendered in a left-to-right context, some formatting errors might occur, and your code might appear like [this](https://ibb.co/ky20Lnq) (link accessed on 2020-10-27).
# Citation
*if you use this model, please consider citing [this work](https://www.researchgate.net/publication/358956953_Sequence_Labeling_Architectures_in_Diglossia_-_a_case_study_of_Arabic_and_its_dialects):*
```latex
@unpublished{MMHU21,
  author = "M. Megahed",
  title = "Sequence Labeling Architectures in Diglossia",
  year = {2021},
  doi = "10.13140/RG.2.2.34961.10084",
  url = {https://www.researchgate.net/publication/358956953_Sequence_Labeling_Architectures_in_Diglossia_-_a_case_study_of_Arabic_and_its_dialects}
}
``` |
megantosh/flair-arabic-dialects-codeswitch-egy-lev | f790d3589a242f91b0c78eeb9e8dfdf42a9b12b6 | 2022-03-09T22:12:57.000Z | [
"pytorch",
"ar",
"en",
"dataset:4Dialects",
"dataset:MADAR",
"dataset:CSCS",
"flair",
"token-classification",
"sequence-tagger-model",
"Dialectal Arabic",
"Code-Switching",
"Code-Mixing",
"license:apache-2.0"
]
| token-classification | false | megantosh | null | megantosh/flair-arabic-dialects-codeswitch-egy-lev | 8 | null | flair | 13,160 | ---
language:
- ar
- en
license: apache-2.0
datasets:
- 4Dialects
- MADAR
- CSCS
thumbnail: https://www.informatik.hu-berlin.de/en/forschung-en/gebiete/ml-en/resolveuid/a6f82e0d7fa446a59c902cac4cafa9cb/@@images/image/preview
tags:
- flair
- token-classification
- sequence-tagger-model
- Dialectal Arabic
- Code-Switching
- Code-Mixing
metrics:
- f1
widget:
- text: "طلعوا جماعة الممانعة بالسياسة ما بيعرفوا ولا بالصحة بيعرفوا ولا حتى بالدين"
- text: "أعلم أن هذا يبدو غير عادل ، لكن لا يمكن أن يكون هناك ظلم"
- text: "أنا عارف أن الموضوع ده شكله مش عادل ، بس لا يمكن أن يكون فيه ظلم"
---
# Arabic Flair + fastText Part-of-Speech tagging Model (Egyptian and Levant)
Pretrained part-of-speech tagging model built on a joint corpus written in Egyptian and Levantine (Jordanian, Lebanese, Palestinian, Syrian) dialects, with code-switching between Egyptian Arabic and English. The model is trained using [Flair](https://aclanthology.org/C18-1139/) (forward + backward) and [fastText](https://fasttext.cc) embeddings.
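The embedding stack corresponds roughly to the sketch below; the exact Flair character-LM checkpoints used for training are not published here, so `'ar-forward'`/`'ar-backward'` are stand-ins.

```python
from flair.embeddings import WordEmbeddings, FlairEmbeddings, StackedEmbeddings

embeddings = StackedEmbeddings([
    WordEmbeddings("ar"),            # fastText Arabic word embeddings
    FlairEmbeddings("ar-forward"),   # contextual character LM, forward direction
    FlairEmbeddings("ar-backward"),  # contextual character LM, backward direction
])
```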
# Pretraining Corpora:
This sequence labeling model was pretrained on three corpora jointly:
1. [4 Dialects](https://huggingface.co/datasets/viewer/?dataset=arabic_pos_dialect)
A Dialectal Arabic Datasets containing four dialects of Arabic, Egyptian (EGY), Levantine (LEV), Gulf (GLF), and Maghrebi (MGR). Each dataset consists of a set of 350 manually segmented and PoS tagged tweets.
2. [UD South Levantine Arabic MADAR](https://universaldependencies.org/treebanks/ajp_madar/index.html)
A Dataset with 100 manually-annotated sentences taken from the [MADAR](https://camel.abudhabi.nyu.edu/madar/) (Multi-Arabic Dialect Applications and Resources) project by [Shorouq Zahra](mailto:[email protected]).
3. Parts of the Cairo Students Code-Switch (CSCS) corpus developed for ["Collection and Analysis of Code-switch Egyptian Arabic-English Speech Corpus"](https://aclanthology.org/L18-1601.pdf) by Hamed et al.
# Usage
```python
from flair.data import Sentence
from flair.models import SequenceTagger
tagger = SequenceTagger.load("megantosh/flair-arabic-dialects-codeswitch-egy-lev")
sentence = Sentence('عمرو عادلي أستاذ للاقتصاد السياسي المساعد في الجامعة الأمريكية بالقاهرة .')
tagger.predict(sentence)
for entity in sentence.get_spans('pos'):
print(entity)
```
Due to the right-to-left script being rendered in a left-to-right context, some formatting errors might occur, and your code might appear like [this](https://ibb.co/ky20Lnq) (link accessed on 2020-10-27).
<!--# Example
# Tagset-->
# Scores & Tagset
<details>
| |precision | recall | f1-score | support|
|--|-----------|------|-------------|--------------|
|INTJ | 0.8182 | 0.9000 |0.8571 | 10|
|NOUN | 0.9009 | 0.9402 | 0.9201 | 435|
|NUM | 0.9524 | 0.8333 | 0.8889 | 24|
|ADJ |0.8762 | 0.7603 | 0.8142 | 121|
|ADP |0.9903 |0.9623 | 0.9761 |106|
| CCONJ | 0.9600 | 0.9730 | 0.9664 | 74|
|PROPN | 0.9333 | 0.9333 | 0.9333 | 15|
| ADV | 0.9135 | 0.8051 | 0.8559 | 118|
|VERB | 0.8852 | 0.9231 | 0.9038 | 117|
|PRON | 0.9620 | 0.9465 | 0.9542 | 187|
|SCONJ | 0.8571 | 0.9474 | 0.9000 | 19|
|PART | 0.9350 | 0.9791 | 0.9565 | 191|
| DET | 0.9348 | 0.9149 | 0.9247 | 47|
|PUNCT | 1.0000 | 1.0000 | 1.0000 | 35|
| AUX | 0.9286 | 0.9811 | 0.9541 | 53|
|MENTION | 0.9231 | 1.0000 | 0.9600 | 12|
| V | 0.8571 | 0.8780 | 0.8675 | 82|
| FUT-PART+V+PREP+PRON |1.0000 | 0.0000 | 0.0000 | 1|
| PROG-PART+V+PRON+PREP+PRON | 0.0000 | 1.0000 | 0.0000 | 0|
|ADJ+NSUFF | 0.6111 | 0.8462 | 0.7097 | 26|
|NOUN+NSUFF | 0.8182 | 0.8438 | 0.8308 | 64|
|PREP+PRON | 0.9565 | 0.9565 | 0.9565 | 23|
| PUNC | 0.9941 | 1.0000 | 0.9971 | 169|
| EOS |1.0000 | 1.0000 | 1.0000 | 70|
| NOUN+PRON | 0.6986 | 0.8500 | 0.7669 | 60|
| V+PRON | 0.7258 | 0.8036 | 0.7627 | 56|
| PART+PRON | 1.0000 | 0.9474 | 0.9730 | 19|
| PROG-PART+V | 0.8333 | 0.9302 | 0.8791 | 43|
| DET+NOUN | 0.9625 | 1.0000 | 0.9809 | 77|
| NOUN+NSUFF+PRON | 0.9091 | 0.7143 | 0.8000 | 14|
| PROG-PART+V+PRON | 0.7083 | 0.9444 | 0.8095 | 18|
| PREP+NOUN+NSUFF | 0.6667 | 0.4000 | 0.5000 | 5|
| NOUN+NSUFF+NSUFF | 1.0000 | 0.0000 | 0.0000 | 3|
| CONJ | 0.9722 | 1.0000 | 0.9859 | 35|
| V+PRON+PRON | 0.6364 | 0.5833 | 0.6087 | 12|
| FOREIGN | 0.6667 | 0.6667 | 0.6667 | 3|
| PREP+NOUN | 0.6316 | 0.7500 | 0.6857 | 16|
| DET+NOUN+NSUFF | 0.9000 | 0.9310 | 0.9153 | 29|
| DET+ADJ+NSUFF | 1.0000 | 0.5714 | 0.7273 | 7|
| CONJ+PRON | 1.0000 | 0.8750 | 0.9333 | 8|
| NOUN+CASE | 0.0000 | 0.0000 | 0.0000 | 2|
| DET+ADJ | 1.0000 | 0.6667 | 0.8000 | 6|
| PREP | 1.0000 | 0.9718 | 0.9857 | 71|
| CONJ+FUT-PART+V | 0.0000 | 0.0000 | 0.0000 | 1|
| CONJ+V | 0.6667 | 0.7500 | 0.7059 | 8|
| FUT-PART | 1.0000 | 1.0000 | 1.0000 | 2|
| ADJ+PRON | 1.0000 | 0.0000 | 0.0000 | 8|
| CONJ+PREP+NOUN+PRON | 1.0000 | 0.0000 | 0.0000 | 1|
| CONJ+NOUN+PRON | 0.3750 | 1.0000 | 0.5455 | 3|
| PART+ADJ | 1.0000 | 0.0000 | 0.0000 | 1|
| PART+NOUN | 0.5000 | 1.0000 | 0.6667 | 1|
| CONJ+PREP+NOUN | 1.0000 | 0.0000 | 0.0000 | 1|
| CONJ+NOUN | 0.7000 | 0.7778 | 0.7368 | 9|
| URL | 1.0000 | 1.0000 | 1.0000 | 3|
| CONJ+FUT-PART | 1.0000 | 0.0000 | 0.0000 | 1|
| FUT-PART+V | 0.8571 | 0.6000 | 0.7059 | 10|
| PREP+NOUN+NSUFF+NSUFF | 1.0000 | 0.0000 | 0.0000 | 1|
| HASH | 1.0000 | 0.9412 | 0.9697 | 17|
| ADJ+PREP+PRON | 1.0000 | 0.0000 | 0.0000 | 3|
| PREP+NOUN+PRON | 0.0000 | 0.0000 | 0.0000 | 1|
| EMOT | 1.0000 | 0.8889 | 0.9412 | 18|
| CONJ+PREP | 1.0000 | 0.7500 | 0.8571 | 4|
| PREP+DET+NOUN+NSUFF | 1.0000 | 0.7500 | 0.8571 | 4|
| PRON+DET+NOUN+NSUFF | 0.0000 | 1.0000 | 0.0000 | 0|
| V+PREP+PRON | 1.0000 | 0.0000 | 0.0000 | 5|
| V+PRON+PREP+PRON | 0.0000 | 1.0000 | 0.0000 | 0|
| CONJ+NOUN+NSUFF | 0.5000 | 0.5000 | 0.5000 | 2|
| V+NEG-PART | 1.0000 | 0.0000 | 0.0000 | 2|
| PREP+DET+NOUN | 0.9091 | 1.0000 | 0.9524 | 10|
| PREP+V | 1.0000 | 0.0000 | 0.0000 | 2|
| CONJ+PART | 1.0000 | 0.7778 | 0.8750 | 9|
| CONJ+V+PRON | 1.0000 | 1.0000 | 1.0000 | 5|
| PROG-PART+V+PREP+PRON | 1.0000 | 0.5000 | 0.6667 | 2|
| PREP+NOUN+NSUFF+PRON | 1.0000 | 1.0000 | 1.0000 | 1|
| ADJ+CASE | 1.0000 | 0.0000 | 0.0000 | 1|
| PART+NOUN+PRON | 1.0000 | 1.0000 | 1.0000 | 1|
| PART+V | 1.0000 | 0.0000 | 0.0000 | 3|
| PART+V+PRON | 0.0000 | 1.0000 | 0.0000 | 0|
| FUT-PART+V+PRON | 0.0000 | 1.0000 | 0.0000 | 0|
|FUT-PART+V+PRON+PRON | 1.0000 | 0.0000 | 0.0000 | 1|
| CONJ+PREP+PRON | 1.0000 | 0.0000 | 0.0000 | 1|
|CONJ+V+PRON+PREP+PRON | 1.0000 | 0.0000 | 0.0000 | 1|
| CONJ+V+PREP+PRON | 0.0000 | 1.0000 | 0.0000 | 0|
|CONJ+DET+NOUN+NSUFF | 1.0000 | 0.0000 | 0.0000 | 1|
| CONJ+DET+NOUN | 0.6667 | 1.0000 | 0.8000 | 2|
| CONJ+PREP+DET+NOUN | 1.0000 | 1.0000 | 1.0000 | 1|
| PREP+PART | 1.0000 | 0.0000 | 0.0000 | 2|
| PART+V+PRON+NEG-PART | 0.3333 | 0.3333 | 0.3333 | 3|
| PART+V+NEG-PART | 0.3333 | 0.5000 | 0.4000 | 2|
| PART+PREP+NEG-PART | 1.0000 | 1.0000 | 1.0000 | 3|
| PART+PROG-PART+V+NEG-PART | 1.0000 | 0.3333 | 0.5000 | 3|
| PREP+DET+NOUN+NSUFF+PREP+PRON | 1.0000 | 0.0000 | 0.0000 | 1|
| PREP+PRON+DET+NOUN | 0.0000 | 1.0000 | 0.0000 | 0|
| PART+NSUFF | 1.0000 | 0.0000 | 0.0000 | 1|
| CONJ+PROG-PART+V+PRON | 1.0000 | 1.0000 | 1.0000 | 1|
| PART+PREP+PRON | 1.0000 | 0.0000 | 0.0000 | 1|
| CONJ+PART+PREP | 1.0000 | 0.0000 | 0.0000 | 1|
| NUM+NSUFF | 0.6667 | 0.6667 | 0.6667 | 3|
| CONJ+PART+V+PRON+NEG-PART | 1.0000 | 1.0000 | 1.0000 | 1|
| PART+NOUN+NEG-PART | 1.0000 | 1.0000 | 1.0000 | 1|
| CONJ+ADJ+NSUFF | 1.0000 | 0.0000 | 0.0000 | 1|
| PREP+ADJ | 1.0000 | 0.0000 | 0.0000 | 1|
| ADJ+NSUFF+PRON | 1.0000 | 0.0000 | 0.0000 | 2|
| CONJ+PROG-PART+V | 1.0000 | 0.0000 | 0.0000 | 1|
| CONJ+PART+PROG-PART+V+PREP+PRON+NEG-PART | 1.0000 | 0.0000 | 0.0000 | 1|
| CONJ+PART+PREP+PRON+NEG-PART | 0.0000 | 1.0000 | 0.0000 | 0|
| PREP+PART+PRON | 1.0000 | 0.0000 | 0.0000 | 1|
| CONJ+ADV+NSUFF | 1.0000 | 0.0000 |0.0000 | 1|
| CONJ+ADV | 0.0000 | 1.0000 | 0.0000 | 0|
| PART+NOUN+PRON+NEG-PART | 0.0000 | 1.0000 | 0.0000 | 0|
| CONJ+ADJ | 1.0000 | 1.0000 | 1.0000 | 1|
</details>
- F-score (micro): 0.8974
- F-score (macro): 0.5188
- Accuracy (incl. no class): 0.901
Expand the details section above to show class scores for each tag. Note that tag compounds (a tag made of multiple agglutinated parts of speech) are counted as separate tags.
# Citation
*if you use this model, please consider citing [this work](https://www.researchgate.net/publication/358956953_Sequence_Labeling_Architectures_in_Diglossia_-_a_case_study_of_Arabic_and_its_dialects):*
```latex
@unpublished{MMHU21,
  author = "M. Megahed",
  title = "Sequence Labeling Architectures in Diglossia",
  year = {2021},
  doi = "10.13140/RG.2.2.34961.10084",
  url = {https://www.researchgate.net/publication/358956953_Sequence_Labeling_Architectures_in_Diglossia_-_a_case_study_of_Arabic_and_its_dialects}
}
``` |
michaelrglass/albert-base-rci-wtq-row | f20f5d3ad1ae952f2be687eb67cb9a4563ed1990 | 2021-06-16T16:05:15.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | false | michaelrglass | null | michaelrglass/albert-base-rci-wtq-row | 8 | null | transformers | 13,161 | Entry not found |
miguelvictor/python-fromzero-reformerlm | d8e8b51fdf06e74b719287314104902643b4fd95 | 2021-04-29T05:19:10.000Z | [
"pytorch",
"tensorboard",
"reformer",
"text-generation",
"transformers"
]
| text-generation | false | miguelvictor | null | miguelvictor/python-fromzero-reformerlm | 8 | null | transformers | 13,162 | Entry not found |
miguelvictor/python-t5-base | 616b22d8a22a56d05deff2df807e444039d9edc4 | 2021-04-29T04:19:26.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | miguelvictor | null | miguelvictor/python-t5-base | 8 | null | transformers | 13,163 | Entry not found |
milyiyo/electra-base-gen-finetuned-amazon-review | 230ecaa4df4858fe5f557e06254f0468e7435fec | 2022-01-18T21:21:53.000Z | [
"pytorch",
"tensorboard",
"electra",
"text-classification",
"dataset:amazon_reviews_multi",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | milyiyo | null | milyiyo/electra-base-gen-finetuned-amazon-review | 8 | null | transformers | 13,164 | ---
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: electra-base-gen-finetuned-amazon-review
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.5024
- name: F1
type: f1
value: 0.5063190059782597
- name: Precision
type: precision
value: 0.5121183330982292
- name: Recall
type: recall
value: 0.5024
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-base-gen-finetuned-amazon-review
This model is a fine-tuned version of [mrm8488/electricidad-base-generator](https://huggingface.co/mrm8488/electricidad-base-generator) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8030
- Accuracy: 0.5024
- F1: 0.5063
- Precision: 0.5121
- Recall: 0.5024
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Accuracy | F1 | Validation Loss | Precision | Recall |
|:-------------:|:-----:|:----:|:--------:|:------:|:---------------:|:---------:|:------:|
| 0.5135 | 1.0 | 1000 | 0.4886 | 0.4929 | 1.6580 | 0.5077 | 0.4886 |
| 0.4138 | 2.0 | 2000 | 0.5044 | 0.5093 | 1.7951 | 0.5183 | 0.5044 |
| 0.4244 | 3.0 | 3000 | 0.5022 | 0.5068 | 1.8108 | 0.5141 | 0.5022 |
| 0.4231 | 6.0 | 6000 | 1.7636 | 0.4972 | 0.5018 | 0.5092 | 0.4972 |
| 0.3574 | 7.0 | 7000 | 1.8030 | 0.5024 | 0.5063 | 0.5121 | 0.5024 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
milyiyo/electra-small-finetuned-amazon-review | f2346ba0b85fe71b7461375cf3e029f4841546ed | 2022-01-18T17:47:17.000Z | [
"pytorch",
"tensorboard",
"electra",
"text-classification",
"dataset:amazon_reviews_multi",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | milyiyo | null | milyiyo/electra-small-finetuned-amazon-review | 8 | null | transformers | 13,165 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: electra-small-finetuned-amazon-review
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: en
metrics:
- name: Accuracy
type: accuracy
value: 0.5504
- name: F1
type: f1
value: 0.5457527808330634
- name: Precision
type: precision
value: 0.5428695841337288
- name: Recall
type: recall
value: 0.5504
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-small-finetuned-amazon-review
This model is a fine-tuned version of [google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0560
- Accuracy: 0.5504
- F1: 0.5458
- Precision: 0.5429
- Recall: 0.5504
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.2172 | 1.0 | 1000 | 1.1014 | 0.5216 | 0.4902 | 0.4954 | 0.5216 |
| 1.0027 | 2.0 | 2000 | 1.0388 | 0.549 | 0.5471 | 0.5494 | 0.549 |
| 0.9035 | 3.0 | 3000 | 1.0560 | 0.5504 | 0.5458 | 0.5429 | 0.5504 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
ml6team/gpt2-small-german-finetune-oscar | c228ad2832126c182770a581edcd26d27fab0c08 | 2021-05-23T09:48:35.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"de",
"transformers",
"adaption",
"recycled",
"gpt2-small"
]
| text-generation | false | ml6team | null | ml6team/gpt2-small-german-finetune-oscar | 8 | 6 | transformers | 13,166 | ---
language: de
widget:
- text: "es wird entschieden, dass es"
tags:
- adaption
- recycled
- gpt2-small
pipeline_tag: text-generation
---
# German fine-tuned GPT-2
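A minimal generation sketch using the widget prompt above:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="ml6team/gpt2-small-german-finetune-oscar")
print(generator("es wird entschieden, dass es", max_length=40, num_return_sequences=2))
```
|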
monologg/kocharelectra-small-generator | 5be156449a119eb4708a6973efb76e81bfc521dc | 2020-05-27T17:38:43.000Z | [
"pytorch",
"electra",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | monologg | null | monologg/kocharelectra-small-generator | 8 | null | transformers | 13,167 | Entry not found |
mrm8488/b2b-en-paraphrasing-no-questions | acbe4dfdf9fb86774fcde04204c4f4db4a4413e1 | 2021-05-13T18:38:46.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | mrm8488 | null | mrm8488/b2b-en-paraphrasing-no-questions | 8 | null | transformers | 13,168 | Entry not found |
mrm8488/bert-tiny-2-finetuned-squadv2 | 389f68d2f7b01ec5054cc5307c09fba11dddcbbd | 2021-05-20T00:38:57.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | mrm8488 | null | mrm8488/bert-tiny-2-finetuned-squadv2 | 8 | null | transformers | 13,169 | Entry not found |
mrm8488/bert-tiny-wrslb-finetuned-squadv1 | e6d959f9c7dd801b5df282c7758317dc588f887a | 2021-05-20T00:41:08.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | mrm8488 | null | mrm8488/bert-tiny-wrslb-finetuned-squadv1 | 8 | null | transformers | 13,170 | Entry not found |
mrm8488/convbert-base-spanish | 81f731256ed3bf023798730c8d0dd2a8a24999e9 | 2021-08-13T20:35:31.000Z | [
"pytorch",
"tf",
"convbert",
"feature-extraction",
"es",
"dataset:large_spanish_corpus",
"arxiv:2008.02496",
"transformers",
"license:mit"
]
| feature-extraction | false | mrm8488 | null | mrm8488/convbert-base-spanish | 8 | 1 | transformers | 13,171 | ---
language: es
datasets:
- large_spanish_corpus
license: mit
---
# ConvBERT base pre-trained on large_spanish_corpus
The ConvBERT architecture is presented in the ["ConvBERT: Improving BERT with Span-based Dynamic Convolution"](https://arxiv.org/abs/2008.02496) paper.
## Metrics on evaluation set
```
disc_accuracy = 0.9488542
disc_auc = 0.8833056
disc_loss = 0.15933733
disc_precision = 0.79224133
disc_recall = 0.27443287
global_step = 1000000
loss = 9.658503
masked_lm_accuracy = 0.6177698
masked_lm_loss = 1.7050561
sampled_masked_lm_accuracy = 0.5379228
```
## Usage
```python
from transformers import AutoModel, AutoTokenizer
model_name = "mrm8488/convbert-base-spanish"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
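For example, to pull contextual embeddings out of the loaded model (continuing the snippet above; the sentence is illustrative):

```python
import torch

inputs = tokenizer("Me encanta el procesamiento del lenguaje natural.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```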
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) with the support of [Narrativa](https://www.narrativa.com/)
> Made with <span style="color: #e25555;">♥</span> in Spain |
mrm8488/deberta-v3-small-finetuned-qnli | c67df394f2ea34c46f106e9e67c8646ff408e30e | 2021-12-06T20:05:43.000Z | [
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | mrm8488 | null | mrm8488/deberta-v3-small-finetuned-qnli | 8 | 1 | transformers | 13,172 | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: deberta-v3-small
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QNLI
type: glue
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.9150649826102873
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DeBERTa-v3-small fine-tuned on QNLI
This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2143
- Accuracy: 0.9151
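For inference, a minimal sketch: QNLI pairs a question with a candidate answer sentence. The example pair is illustrative, and the checkpoint may expose generic `LABEL_0`/`LABEL_1` names instead of `entailment`/`not_entailment`.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "mrm8488/deberta-v3-small-finetuned-qnli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

question = "What percentage of the area is sown for wheat?"
sentence = "More than 50% of this area is sown for wheat."
inputs = tokenizer(question, sentence, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

print(model.config.id2label[logits.argmax(dim=-1).item()])
```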
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2823 | 1.0 | 6547 | 0.2143 | 0.9151 |
| 0.1996 | 2.0 | 13094 | 0.2760 | 0.9103 |
| 0.1327 | 3.0 | 19641 | 0.3293 | 0.9169 |
| 0.0811 | 4.0 | 26188 | 0.4278 | 0.9193 |
| 0.05 | 5.0 | 32735 | 0.5110 | 0.9176 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
mrm8488/distilgpt2-finetuned-reddit-tifu | a08e43f6a4adc941d23c6bb40434b9c4c24d863f | 2021-05-23T10:22:22.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | mrm8488 | null | mrm8488/distilgpt2-finetuned-reddit-tifu | 8 | null | transformers | 13,173 | Entry not found |
mrm8488/electricidad-small-finetuned-squadv1-es | 84228c6be59fd577d0700f145d3a421cd9da331d | 2022-02-09T13:29:35.000Z | [
"pytorch",
"electra",
"question-answering",
"es",
"transformers",
"QA",
"SQuAD",
"autotrain_compatible"
]
| question-answering | false | mrm8488 | null | mrm8488/electricidad-small-finetuned-squadv1-es | 8 | 1 | transformers | 13,174 | ---
language: es
thumbnail: https://imgur.com/uxAvBfh
tags:
- QA
- SQuAD
---
# Electricidad small + Spanish SQuAD v1 ⚡❓
[Electricidad-small-discriminator](https://huggingface.co/mrm8488/electricidad-small-discriminator) fine-tuned on [Spanish SQUAD v1.1 dataset](https://github.com/ccasimiro88/TranslateAlignRetrieve/tree/master/SQuAD-es-v1.1) for **Q&A** downstream task.
## Details of the downstream task (Q&A) - Dataset 📚
[SQuAD-es-v1.1](https://github.com/ccasimiro88/TranslateAlignRetrieve/tree/master/SQuAD-es-v1.1)
| Dataset split | # Samples |
| ------------- | --------- |
| Train | 130 K |
| Test | 11 K |
## Model training 🏋️
The model was trained on a Tesla P100 GPU and 25GB of RAM with the following command:
```bash
python /content/transformers/examples/question-answering/run_squad.py \
--model_type electra \
--model_name_or_path 'mrm8488/electricidad-small-discriminator' \
--do_eval \
--do_train \
--do_lower_case \
--train_file '/content/dataset/train-v1.1-es.json' \
--predict_file '/content/dataset/dev-v1.1-es.json' \
--per_gpu_train_batch_size 16 \
--learning_rate 3e-5 \
--num_train_epochs 10 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir '/content/electricidad-small-finetuned-squadv1-es' \
--overwrite_output_dir \
--save_steps 1000
```
## Test set Results 🧾
| Metric | # Value |
| ------ | --------- |
| **EM** | **46.82** |
| **F1** | **64.79** |
```json
{
'exact': 46.82119205298013,
'f1': 64.79435260021918,
'total': 10570,
'HasAns_exact': 46.82119205298013,
HasAns_f1': 64.79435260021918,
'HasAns_total': 10570,
'best_exact': 46.82119205298013,
'best_exact_thresh': 0.0,
'best_f1': 64.79435260021918,
'best_f1_thresh': 0.0
}
```
### Model in action 🚀
Fast usage with **pipelines**:
```python
from transformers import pipeline
qa_pipeline = pipeline(
"question-answering",
model="mrm8488/electricidad-small-finetuned-squadv1-es",
tokenizer="mrm8488/electricidad-small-finetuned-squadv1-es"
)
context = "Manuel ha creado una versión del modelo Electra small en español que alcanza una puntuación F1 de 65 en el dataset SQUAD-es y sólo pesa 50 MB"
q1 = "Cuál es su marcador F1?"
q2 = "¿Cuál es el tamaño del modelo?"
q3 = "¿Quién lo ha creado?"
q4 = "¿Que es lo que ha hecho Manuel?"
questions = [q1, q2, q3, q4]
for question in questions:
result = qa_pipeline({
'context': context,
'question': question})
print(result)
# Output:
{'score': 0.14836778166355025, 'start': 98, 'end': 100, 'answer': '65'}
{'score': 0.32219420810758237, 'start': 136, 'end': 140, 'answer': '50 MB'}
{'score': 0.9672326951118713, 'start': 0, 'end': 6, 'answer': 'Manuel'}
{'score': 0.23552458113848118, 'start': 10, 'end': 53, 'answer': 'creado una versión del modelo Electra small'}
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
mrm8488/gpt2-imdb-neg | 42e3722e4c921a16638afb40ce864c05d5d762fb | 2021-05-23T10:27:14.000Z | [
"pytorch",
"gpt2",
"transformers"
]
| null | false | mrm8488 | null | mrm8488/gpt2-imdb-neg | 8 | null | transformers | 13,175 | # GPT2-IMDB-neg (LM + RL) 🎞😡✍
All credits to [@lvwerra](https://twitter.com/lvwerra)
## What is it?
A small GPT-2 (`lvwerra/gpt2-imdb`) language model fine-tuned to produce **negative** movie reviews based on the [IMDB dataset](https://www.kaggle.com/lakshmi25npathi/imdb-dataset-of-50k-movie-reviews). The model is trained with rewards from a BERT sentiment classifier (`lvwerra/bert-imdb`) via **PPO**.
## Why?
I wanted to reproduce the experiment [lvwerra/gpt2-imdb-pos](https://huggingface.co/lvwerra/gpt2-imdb-pos) but for generating **negative** movie reviews.
## Training setting
The model was trained for `100` optimisation steps with a batch size of `256`, which corresponds to `25600` training samples. The full experiment setup (for the positive-sentiment counterpart) can be found in the [trl repo](https://lvwerra.github.io/trl/04-gpt2-sentiment-ppo-training/).
## Examples
A few examples of the model response to a query before and after optimisation:
| query | response (before) | response (after) | rewards (before) | rewards (after) |
|-------|-------------------|------------------|------------------|-----------------|
|This movie is a fine | attempt as far as live action is concerned, n...|example of how bad Hollywood in theatrics pla...| 2.118391 | -3.31625|
|I have watched 3 episodes |with this guy and he is such a talented actor...| but the show is just plain awful and there ne...| 2.681171| -4.512792|
|We know that firefighters and| police officers are forced to become populari...| other chains have going to get this disaster ...| 1.367811| -3.34017|
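You can sample your own completions with a plain text-generation pipeline (a minimal sketch; the prompt is taken from the table above):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="mrm8488/gpt2-imdb-neg")
print(generator("This movie is a fine", max_length=40, num_return_sequences=2))
```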
## Training logs and metrics <img src="https://gblobscdn.gitbook.com/spaces%2F-Lqya5RvLedGEWPhtkjU%2Favatar.png?alt=media" width="25" height="25">
Watch the whole training logs and metrics on [W&B](https://app.wandb.ai/mrm8488/gpt2-sentiment-negative?workspace=user-mrm8488)
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
mrm8488/mT5-small-finetuned-multi-question-generation | 37310c6745348a51c1c4659bef54e0ba18669bf6 | 2020-11-23T10:13:23.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | mrm8488 | null | mrm8488/mT5-small-finetuned-multi-question-generation | 8 | null | transformers | 13,176 | Entry not found |
mrm8488/mbart-large-finetuned-bible-es-en-translation | daf88adfccccd50b7841fa39209e9795e4e6f95f | 2021-01-14T22:32:54.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"es",
"en",
"dataset:bible_para",
"transformers",
"translation",
"autotrain_compatible"
]
| translation | false | mrm8488 | null | mrm8488/mbart-large-finetuned-bible-es-en-translation | 8 | null | transformers | 13,177 | ---
tags:
- translation
language:
- es
- en
datasets:
- bible_para
---
### mbart-large-es-en
This is mbart-large-cc25, finetuned on bible_para for Spanish to English translation.
It scores BLEU **29.34**.
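A minimal usage sketch; the example sentence is illustrative, and depending on your `transformers` version you may also need to set the mBART source/target language codes on the tokenizer.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "mrm8488/mbart-large-finetuned-bible-es-en-translation"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

inputs = tokenizer("En el principio creó Dios los cielos y la tierra.", return_tensors="pt")
outputs = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|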
mrm8488/roberta-base-1B-1-finetuned-squadv2 | bf00ec8a5abe0936235a1a605dbbaf8563c2ec97 | 2021-05-20T18:27:20.000Z | [
"pytorch",
"jax",
"roberta",
"question-answering",
"en",
"transformers",
"autotrain_compatible"
]
| question-answering | false | mrm8488 | null | mrm8488/roberta-base-1B-1-finetuned-squadv2 | 8 | null | transformers | 13,178 | ---
language: en
---
# RoBERTa-base (1B-1) + SQuAD v2 ❓
[roberta-base-1B-1](https://huggingface.co/nyu-mll/roberta-base-1B-1) fine-tuned on [SQUAD v2 dataset](https://rajpurkar.github.io/SQuAD-explorer/explore/v2.0/dev/) for **Q&A** downstream task.
## Details of the downstream task (Q&A) - Model 🧠
RoBERTa Pretrained on Smaller Datasets
[NYU Machine Learning for Language](https://huggingface.co/nyu-mll) pretrained RoBERTa on smaller datasets (1M, 10M, 100M, 1B tokens). They released the 3 models with the lowest perplexities for each pretraining data size, out of 25 runs (or 10 in the case of 1B tokens). The pretraining data reproduces that of BERT: they combine English Wikipedia and a reproduction of BookCorpus using texts from Smashwords in a ratio of approximately 3:1.
## Details of the downstream task (Q&A) - Dataset 📚
**S**tanford **Q**uestion **A**nswering **D**ataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.
**SQuAD2.0** combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering.
## Model training 🏋️
The model was trained on a Tesla P100 GPU and 25GB of RAM with the following command:
```bash
python transformers/examples/question-answering/run_squad.py \
--model_type roberta \
--model_name_or_path 'nyu-mll/roberta-base-1B-1' \
--do_eval \
--do_train \
--do_lower_case \
--train_file /content/dataset/train-v2.0.json \
--predict_file /content/dataset/dev-v2.0.json \
--per_gpu_train_batch_size 16 \
--learning_rate 3e-5 \
--num_train_epochs 10 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /content/output \
--overwrite_output_dir \
--save_steps 1000 \
--version_2_with_negative
```
## Test set Results 🧾
| Metric | # Value |
| ------ | --------- |
| **EM** | **64.86** |
| **F1** | **68.99** |
```json
{
'exact': 64.86145034953255,
'f1': 68.9902640378272,
'total': 11873,
'HasAns_exact': 64.03508771929825,
'HasAns_f1': 72.3045554860189,
'HasAns_total': 5928,
'NoAns_exact': 65.68544995794785,
'NoAns_f1': 65.68544995794785,
'NoAns_total': 5945,
'best_exact': 64.86987282068559,
'best_exact_thresh': 0.0,
'best_f1': 68.99868650898054,
'best_f1_thresh': 0.0
}
```
### Model in action 🚀
Fast usage with **pipelines**:
```python
from transformers import pipeline
QnA_pipeline = pipeline('question-answering', model='mrm8488/roberta-base-1B-1-finetuned-squadv2')
QnA_pipeline({
'context': 'A new strain of flu that has the potential to become a pandemic has been identified in China by scientists.',
'question': 'What has been discovered by scientists from China ?'
})
# Output:
{'answer': 'A new strain of flu', 'end': 19, 'score': 0.7145650685380576,'start': 0}
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
mrm8488/t5-small-spanish-finetuned-squadv1 | 1dd5c12a8cad2c47d83890c94acf229dda3b43c3 | 2021-08-17T22:02:49.000Z | [
"pytorch",
"t5",
"text2text-generation",
"es",
"dataset:squad_es",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | mrm8488 | null | mrm8488/t5-small-spanish-finetuned-squadv1 | 8 | 1 | transformers | 13,179 | ---
language: es
datasets:
- squad_es
widget:
- text: "pregunta: ¿Cuál es el mayor placer de la vida? contexto: El mayor placer de la vida es dormir"
---
# T5 small (Spanish) fine-tuned on SQuAD (ES) for Q&A
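A minimal usage sketch, following the `pregunta: ... contexto: ...` format shown in the widget:

```python
from transformers import pipeline

qa = pipeline("text2text-generation", model="mrm8488/t5-small-spanish-finetuned-squadv1")
prompt = "pregunta: ¿Cuál es el mayor placer de la vida? contexto: El mayor placer de la vida es dormir"
print(qa(prompt))
```
|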
mvonwyl/roberta-twitter-spam-classifier | 0df769e6624a451b9d5f025b8c4ce0e63cfcd916 | 2022-02-01T19:34:59.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | mvonwyl | null | mvonwyl/roberta-twitter-spam-classifier | 8 | null | transformers | 13,180 | ---
tags:
- generated_from_trainer
model-index:
- name: roberta-twitter-spam-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-twitter-spam-classifier
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3856
- Micro-avg-precision: 0.8723
- Micro-avg-recall: 0.8490
- Micro-avg-f1-score: 0.8594
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Micro-avg-precision | Micro-avg-recall | Micro-avg-f1-score |
|:-------------:|:-----:|:-----:|:---------------:|:-------------------:|:----------------:|:------------------:|
| 0.4923 | 1.0 | 2762 | 0.5676 | 0.8231 | 0.6494 | 0.6676 |
| 0.535 | 2.0 | 5524 | 0.4460 | 0.8065 | 0.8215 | 0.8132 |
| 0.5492 | 3.0 | 8286 | 0.6005 | 0.6635 | 0.5843 | 0.3906 |
| 0.5947 | 4.0 | 11048 | 0.5710 | 0.7875 | 0.7799 | 0.7835 |
| 0.4976 | 5.0 | 13810 | 0.5194 | 0.8375 | 0.7544 | 0.7800 |
| 0.5263 | 6.0 | 16572 | 0.5491 | 0.8739 | 0.7159 | 0.7475 |
| 0.4701 | 7.0 | 19334 | 0.4609 | 0.8681 | 0.7786 | 0.8069 |
| 0.4566 | 8.0 | 22096 | 0.4100 | 0.8637 | 0.8281 | 0.8430 |
| 0.4339 | 9.0 | 24858 | 0.4395 | 0.8642 | 0.8454 | 0.8540 |
| 0.3906 | 10.0 | 27620 | 0.3856 | 0.8723 | 0.8490 | 0.8594 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
nbroad/xdistil-l12-h384-squad2 | 89db1c3e27e0f87fc0dee50ee0387d209412f9b5 | 2022-07-22T15:16:52.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"bert",
"question-answering",
"dataset:squad_v2",
"transformers",
"model-index",
"autotrain_compatible"
]
| question-answering | false | nbroad | null | nbroad/xdistil-l12-h384-squad2 | 8 | null | transformers | 13,181 | ---
widget:
- context: While deep and large pre-trained models are the state-of-the-art for various
natural language processing tasks, their huge size poses significant challenges
for practical uses in resource constrained settings. Recent works in knowledge
distillation propose task-agnostic as well as task-specific methods to compress
these models, with task-specific ones often yielding higher compression rate.
In this work, we develop a new task-agnostic distillation framework XtremeDistilTransformers
that leverages the advantage of task-specific methods for learning a small universal
model that can be applied to arbitrary tasks and languages. To this end, we study
the transferability of several source tasks, augmentation resources and model
architecture for distillation. We evaluate our model performance on multiple tasks,
including the General Language Understanding Evaluation (GLUE) benchmark, SQuAD
question answering dataset and a massive multi-lingual NER dataset with 41 languages.
example_title: xtremedistil q1
text: What is XtremeDistil?
- context: While deep and large pre-trained models are the state-of-the-art for various
natural language processing tasks, their huge size poses significant challenges
for practical uses in resource constrained settings. Recent works in knowledge
distillation propose task-agnostic as well as task-specific methods to compress
these models, with task-specific ones often yielding higher compression rate.
In this work, we develop a new task-agnostic distillation framework XtremeDistilTransformers
that leverages the advantage of task-specific methods for learning a small universal
model that can be applied to arbitrary tasks and languages. To this end, we study
the transferability of several source tasks, augmentation resources and model
architecture for distillation. We evaluate our model performance on multiple tasks,
including the General Language Understanding Evaluation (GLUE) benchmark, SQuAD
question answering dataset and a massive multi-lingual NER dataset with 41 languages.
example_title: xtremedistil q2
text: On what is the model validated?
datasets:
- squad_v2
metrics:
- f1
- exact
tags:
- question-answering
model-index:
- name: nbroad/xdistil-l12-h384-squad2
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: squad_v2
type: squad_v2
config: squad_v2
split: validation
metrics:
- name: Exact Match
type: exact_match
value: 75.4591
verified: true
- name: F1
type: f1
value: 79.3321
verified: true
---
xtremedistil-l12-h384 trained on SQuAD 2.0.

Evaluation results on the SQuAD 2.0 validation set:
- "eval_exact": 75.45691906005221
- "eval_f1": 79.32502968532793
ncduy/bert-finetuned-ner | 23765b590fcccff76e8931e563bba12649eb8751 | 2021-12-06T06:21:38.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | ncduy | null | ncduy/bert-finetuned-ner | 8 | null | transformers | 13,182 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9310572323932047
- name: Recall
type: recall
value: 0.9500168293503871
- name: F1
type: f1
value: 0.9404414827155352
- name: Accuracy
type: accuracy
value: 0.9865191028433508
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0590
- Precision: 0.9311
- Recall: 0.9500
- F1: 0.9404
- Accuracy: 0.9865
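For inference, a minimal sketch using the token-classification pipeline (the example sentence is illustrative):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ncduy/bert-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("My name is Wolfgang and I live in Berlin."))
```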
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0874 | 1.0 | 1756 | 0.0635 | 0.9211 | 0.9369 | 0.9289 | 0.9835 |
| 0.0376 | 2.0 | 3512 | 0.0618 | 0.9342 | 0.9485 | 0.9413 | 0.9858 |
| 0.0226 | 3.0 | 5268 | 0.0590 | 0.9311 | 0.9500 | 0.9404 | 0.9865 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ncduy/opus-mt-en-vi-full-finetuned-en-to-vi | 71be7afa73639039e820b88a79f93c694740cc6c | 2022-01-12T07:10:14.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | ncduy | null | ncduy/opus-mt-en-vi-full-finetuned-en-to-vi | 8 | null | transformers | 13,183 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: opus-mt-en-vi-full-finetuned-en-to-vi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-vi-full-finetuned-en-to-vi
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-vi](https://huggingface.co/Helsinki-NLP/opus-mt-en-vi) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 212
- eval_batch_size: 212
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 1.17.0
- Tokenizers 0.10.3
|
nlokam/DialoGPT-digibot3.0-new | 5ea82db484584ef7a2f682f1d65a701e820c47d5 | 2021-11-03T18:45:39.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | nlokam | null | nlokam/DialoGPT-digibot3.0-new | 8 | null | transformers | 13,184 | ---
tags:
- conversational
---
# DialoGPT-digibot3.0-new Model |
ntrnghia/stsb_vn | 237f03f17eab962af62ddc499f06283b83c658e3 | 2021-05-20T02:09:27.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | ntrnghia | null | ntrnghia/stsb_vn | 8 | null | transformers | 13,185 | Entry not found |
olastor/mcn-en-smm4h | 1e5c1898cf27fd4a026bb02ddf26bcdc94516f71 | 2021-05-20T02:11:39.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | olastor | null | olastor/mcn-en-smm4h | 8 | 1 | transformers | 13,186 | # BERT MCN-Model using SMM4H 2017 (subtask 3) data
The model was trained using [clagator/biobert_v1.1_pubmed_nli_sts](https://huggingface.co/clagator/biobert_v1.1_pubmed_nli_sts) as a base and fine-tuned on the SMM4H 2017 dataset (subtask 3).
## Dataset
See [here](https://github.com/olastor/medical-concept-normalization/tree/main/data/smm4h) for the scripts and datasets.
**Attribution**
Sarker, Abeed (2018), “Data and systems for medication-related text classification and concept normalization from Twitter: Insights from the Social Media Mining for Health (SMM4H)-2017 shared task”, Mendeley Data, V2, doi: 10.17632/rxwfb3tysd.2
### Test Results
- Acc: 89.44
- Acc@2: 91.84
- Acc@3: 93.20
- Acc@5: 94.32
- Acc@10: 95.04
Acc@N denotes the accuracy when the top N predictions of the model are taken into account, not just the first one.
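A minimal sketch of top-N prediction in the spirit of the Acc@N numbers above; the example mention is made up, and the labels map to medical concept identifiers.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "olastor/mcn-en-smm4h"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

text = "my head is pounding and i can't sleep"  # made-up example mention
logits = model(**tokenizer(text, return_tensors="pt")).logits[0]

top = torch.topk(logits.softmax(dim=-1), k=5)
for prob, idx in zip(top.values.tolist(), top.indices.tolist()):
    print(model.config.id2label[idx], round(prob, 4))
```
|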
osanseviero/distilbert-base-uncased-finetuned-emotion | e016f29fc3f28664e3615a8cdc4b03441ec13b9f | 2022-07-14T08:04:43.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | osanseviero | null | osanseviero/distilbert-base-uncased-finetuned-emotion | 8 | null | transformers | 13,187 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9225
- name: F1
type: f1
value: 0.92271004914086
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2251
- Accuracy: 0.9225
- F1: 0.9227
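A minimal inference sketch (the input sentence is illustrative, and the emotion label names are assumed to be stored in the model config):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a text-classification pipeline
classifier = pipeline("text-classification", model="osanseviero/distilbert-base-uncased-finetuned-emotion")

# The pipeline returns the predicted emotion label and its score
print(classifier("I can't believe how lucky we got today!"))
```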
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8452 | 1.0 | 250 | 0.3288 | 0.902 | 0.8979 |
| 0.2544 | 2.0 | 500 | 0.2251 | 0.9225 | 0.9227 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1
- Datasets 1.18.0
- Tokenizers 0.10.3
|
osanseviero/pyctcdecode_asr | 5058959c3b7244b789cf3fbc2740c26d82e3cd0d | 2021-08-06T13:53:30.000Z | [
"pytorch",
"tf",
"wav2vec2",
"automatic-speech-recognition",
"generic"
]
| automatic-speech-recognition | false | osanseviero | null | osanseviero/pyctcdecode_asr | 8 | null | generic | 13,188 | ---
tags:
- automatic-speech-recognition
library_name: generic
---
# pyctcdecode + Hugging Face model
Inspired by https://github.com/kensho-technologies/pyctcdecode/blob/main/tutorials/02_pipeline_huggingface.ipynb
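A rough sketch of combining this checkpoint with pyctcdecode, in the spirit of the linked tutorial; it assumes the repository ships the standard Wav2Vec2 processor files, uses a dummy LibriSpeech sample, and leaves out the optional KenLM language model.
```python
import torch
from datasets import load_dataset
from pyctcdecode import build_ctcdecoder
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Load the acoustic model and its processor (feature extractor + CTC vocabulary)
processor = Wav2Vec2Processor.from_pretrained("osanseviero/pyctcdecode_asr")
model = Wav2Vec2ForCTC.from_pretrained("osanseviero/pyctcdecode_asr")

# Build a pyctcdecode beam-search decoder over the CTC vocabulary; a KenLM model
# path could be passed to build_ctcdecoder for language-model-boosted decoding
vocab_dict = processor.tokenizer.get_vocab()
labels = sorted(vocab_dict, key=vocab_dict.get)
decoder = build_ctcdecoder(labels)

# Run the model on one dummy sample and decode the frame-level logits
sample = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")[0]["audio"]
inputs = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0].numpy()
print(decoder.decode(logits))
```
|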
p208p2002/bart-drcd-qg-hl | 9f0aa94fe80c7de7b40c989f09b323b58f11403b | 2021-10-20T17:27:11.000Z | [
"pytorch",
"bart",
"text2text-generation",
"dataset:drcd",
"transformers",
"question-generation",
"autotrain_compatible"
]
| text2text-generation | false | p208p2002 | null | p208p2002/bart-drcd-qg-hl | 8 | null | transformers | 13,189 | ---
datasets:
- drcd
tags:
- question-generation
widget:
- text: "[HL]伊隆·里夫·馬斯克[HL]是一名企業家和商業大亨"
---
# Transformer QG on DRCD
See https://github.com/p208p2002/Transformer-QG-on-DRCD for more details.
The input to the model is constructed as follows:
```
we integrate C and A into a new C' in the following form.
C' = [c1, c2, ..., [HL], a1, ..., a|A|, [HL], ..., c|C|]
```
> Proposed by [Ying-Hong Chan & Yao-Chung Fan. (2019). A Recurrent BERT-based Model for Question Generation.](https://www.aclweb.org/anthology/D19-5821/)
## Features
- Full pipeline from fine-tuning to evaluation
- Supports most state-of-the-art models
- Fast deployment as an API server
## DRCD dataset
[台達閱讀理解資料集 Delta Reading Comprehension Dataset (DRCD)](https://github.com/DRCKnowledgeTeam/DRCD) is a general-domain Traditional Chinese machine reading comprehension dataset. The DRCD dataset contains 10,014 paragraphs extracted from 2,108 Wikipedia articles, with more than 30,000 questions annotated over those paragraphs.
## Available models
- BART (base on **[uer/bart-base-chinese-cluecorpussmall](https://huggingface.co/uer/bart-base-chinese-cluecorpussmall)**)
## Experiments
Model |Bleu 1|Bleu 2|Bleu 3|Bleu 4|METEOR|ROUGE-L|
------------------|------|------|------|------|------|-------|
BART-HLSQG |34.25 |27.70 |22.43 |18.13 |23.58 |36.88 |
## Environment requirements
The whole development is based on the Ubuntu system.
1. If you don't have PyTorch 1.6+, please install or update it first
> https://pytorch.org/get-started/locally/
2. Install packages: `pip install -r requirements.txt`
3. Set up the scorer: `python setup_scorer.py`
4. Download the dataset: `python init_dataset.py`
## Training
### Seq2Seq LM
```
usage: train_seq2seq_lm.py [-h]
[--base_model {facebook/bart-base,facebook/bart-large,t5-small,t5-base,t5-large}]
[-d {squad,squad-nqg}] [--epoch EPOCH] [--lr LR]
[--dev DEV] [--server] [--run_test]
[-fc FROM_CHECKPOINT]
optional arguments:
-h, --help show this help message and exit
--base_model {facebook/bart-base,facebook/bart-large,t5-small,t5-base,t5-large}
-d {squad,squad-nqg}, --dataset {squad,squad-nqg}
--epoch EPOCH
--lr LR
--dev DEV
--server
--run_test
-fc FROM_CHECKPOINT, --from_checkpoint FROM_CHECKPOINT
```
## Deploy
### Start up
```
python train_seq2seq_lm.py --server --base_model YOUR_BASE_MODEL --from_checkpoint FROM_CHECKPOINT
```
### Request example
```
curl --location --request POST 'http://127.0.0.1:5000/' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode 'context=[HL]伊隆·里夫·馬斯克[HL]是一名企業家和商業大亨'
```
```json
{"predict": "哪一個人是一名企業家和商業大亨?"}
```
|
para-zhou/cunlp-bert-case-uncased | a534b24ac1dbaded81d624b2cf25c55531abc464 | 2021-05-20T02:17:20.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | para-zhou | null | para-zhou/cunlp-bert-case-uncased | 8 | null | transformers | 13,190 | Entry not found |
patrickvonplaten/wav2vec2-300m-mls-german-ft | 1446457071cdb2e28b50e47917b6be50b4af2a82 | 2021-11-18T22:30:26.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:multilingual_librispeech",
"transformers",
"multilingual_librispeech",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/wav2vec2-300m-mls-german-ft | 8 | 1 | transformers | 13,191 | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- multilingual_librispeech
- generated_from_trainer
datasets:
- multilingual_librispeech
model-index:
- name: wav2vec2-300m-mls-german-ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-300m-mls-german-ft
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MULTILINGUAL_LIBRISPEECH - GERMAN 10h dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2398
- Wer: 0.1520
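A short transcription sketch is given below; the audio path is a placeholder, and the recording is assumed to be 16 kHz mono, matching what the processor expects.
```python
import torch
import soundfile as sf
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("patrickvonplaten/wav2vec2-300m-mls-german-ft")
model = Wav2Vec2ForCTC.from_pretrained("patrickvonplaten/wav2vec2-300m-mls-german-ft")

# Read a local 16 kHz mono German recording (placeholder path)
speech, sampling_rate = sf.read("sample_german.wav")

# Run CTC inference and greedily decode the most likely token per frame
inputs = processor(speech, sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
transcription = processor.batch_decode(torch.argmax(logits, dim=-1))[0]
print(transcription)
```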
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 200.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 3.0132 | 7.25 | 500 | 2.9393 | 1.0 |
| 2.9241 | 14.49 | 1000 | 2.8734 | 1.0 |
| 1.0766 | 21.74 | 1500 | 0.2773 | 0.2488 |
| 0.8416 | 28.99 | 2000 | 0.2224 | 0.1990 |
| 0.8048 | 36.23 | 2500 | 0.2063 | 0.1792 |
| 0.7664 | 43.48 | 3000 | 0.2088 | 0.1748 |
| 0.6571 | 50.72 | 3500 | 0.2042 | 0.1668 |
| 0.7014 | 57.97 | 4000 | 0.2136 | 0.1649 |
| 0.6171 | 65.22 | 4500 | 0.2139 | 0.1641 |
| 0.6609 | 72.46 | 5000 | 0.2144 | 0.1621 |
| 0.6318 | 79.71 | 5500 | 0.2129 | 0.1600 |
| 0.6222 | 86.96 | 6000 | 0.2124 | 0.1582 |
| 0.608 | 94.2 | 6500 | 0.2255 | 0.1639 |
| 0.6099 | 101.45 | 7000 | 0.2265 | 0.1622 |
| 0.6069 | 108.7 | 7500 | 0.2246 | 0.1593 |
| 0.5929 | 115.94 | 8000 | 0.2323 | 0.1617 |
| 0.6218 | 123.19 | 8500 | 0.2287 | 0.1566 |
| 0.5751 | 130.43 | 9000 | 0.2275 | 0.1563 |
| 0.5181 | 137.68 | 9500 | 0.2316 | 0.1579 |
| 0.6306 | 144.93 | 10000 | 0.2372 | 0.1556 |
| 0.5874 | 152.17 | 10500 | 0.2362 | 0.1533 |
| 0.5546 | 159.42 | 11000 | 0.2342 | 0.1543 |
| 0.6294 | 166.67 | 11500 | 0.2381 | 0.1536 |
| 0.5989 | 173.91 | 12000 | 0.2360 | 0.1527 |
| 0.5697 | 181.16 | 12500 | 0.2399 | 0.1526 |
| 0.5379 | 188.41 | 13000 | 0.2375 | 0.1523 |
| 0.5022 | 195.65 | 13500 | 0.2395 | 0.1519 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.15.2.dev0
- Tokenizers 0.10.3
|
patrickvonplaten/xls-r-300-sv-cv7 | bd0c3229e85b32cc9979552e2db11bba5ac27b48 | 2022-03-23T18:27:10.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"sv-SE",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"sv",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/xls-r-300-sv-cv7 | 8 | null | transformers | 13,192 | ---
language:
- sv-SE
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_7_0
- robust-speech-event
- sv
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Swedish - CV7 - v2
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: sv-SE
metrics:
- name: Test WER
type: wer
value: 15.99
- name: Test CER
type: cer
value: 5.2
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: sv
metrics:
- name: Test WER
type: wer
value: 24.41
- name: Test CER
type: cer
value: 11.88
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLS-R-300M - Swedish - CV7 - v2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - SV-SE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2604
- Wer: 0.2334
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 1
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
See Tensorboard
### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test`
```bash
python eval.py --model_id patrickvonplaten/xls-r-300-sv-cv7 --dataset mozilla-foundation/common_voice_7_0 --config sv-SE --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id patrickvonplaten/xls-r-300-sv-cv7 --dataset speech-recognition-community-v2/dev_data --config sv --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.18.4.dev0
- Tokenizers 0.10.3
|
pbmstrk/t5-large-arxiv-title-abstract | cd201b57012d79506f1601db78d8d1a1ae1ac52d | 2021-06-23T13:22:10.000Z | [
"pytorch",
"tf",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | pbmstrk | null | pbmstrk/t5-large-arxiv-title-abstract | 8 | null | transformers | 13,193 | Entry not found |
peterhsu/bert-finetuned-ner | b298a7a94cf7748edfa08498824e1d9482d25a5a | 2022-01-26T10:44:03.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | peterhsu | null | peterhsu/bert-finetuned-ner | 8 | null | transformers | 13,194 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9315407456285054
- name: Recall
type: recall
value: 0.9503534163581285
- name: F1
type: f1
value: 0.9408530489836722
- name: Accuracy
type: accuracy
value: 0.9861511744275033
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0615
- Precision: 0.9315
- Recall: 0.9504
- F1: 0.9409
- Accuracy: 0.9862
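A minimal inference sketch (the example sentence is illustrative; `aggregation_strategy="simple"` is one reasonable way to merge word pieces into entity spans):
```python
from transformers import pipeline

# Token-classification pipeline; aggregation_strategy groups word pieces into entities
ner = pipeline("token-classification", model="peterhsu/bert-finetuned-ner", aggregation_strategy="simple")

# Each returned dict contains the entity group, score, and character span
print(ner("Hugging Face is based in New York City."))
```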
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.084 | 1.0 | 1756 | 0.0683 | 0.9173 | 0.9347 | 0.9259 | 0.9826 |
| 0.0342 | 2.0 | 3512 | 0.0602 | 0.9312 | 0.9470 | 0.9390 | 0.9856 |
| 0.0236 | 3.0 | 5268 | 0.0615 | 0.9315 | 0.9504 | 0.9409 | 0.9862 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
|
pgperrone/roberta-base-bne-finetuned-amazon_reviews_multi | 78b4c70157ef471e405eadce270e1894d1069dc9 | 2021-11-01T19:16:08.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"dataset:amazon_reviews_multi",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | pgperrone | null | pgperrone/roberta-base-bne-finetuned-amazon_reviews_multi | 8 | null | transformers | 13,195 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: roberta-base-bne-finetuned-amazon_reviews_multi
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.93125
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-amazon_reviews_multi
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2259
- Accuracy: 0.9313
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1996 | 1.0 | 1250 | 0.1736 | 0.9297 |
| 0.1031 | 2.0 | 2500 | 0.2259 | 0.9313 |
### Framework versions
- Transformers 4.12.2
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
pierreant-p/autonlp-jcvd-or-linkedin-3471039 | 73d52ac1afdc5f8d1aa6cc28c3dd0454bbd79c1c | 2021-07-14T19:02:50.000Z | [
"pytorch",
"camembert",
"text-classification",
"fr",
"dataset:pierreant-p/autonlp-data-jcvd-or-linkedin",
"transformers",
"autonlp"
]
| text-classification | false | pierreant-p | null | pierreant-p/autonlp-jcvd-or-linkedin-3471039 | 8 | 1 | transformers | 13,196 | ---
tags: autonlp
language: fr
widget:
- text: "I love AutoNLP 🤗"
datasets:
- pierreant-p/autonlp-data-jcvd-or-linkedin
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 3471039
## Validation Metrics
- Loss: 0.6704344749450684
- Accuracy: 0.59375
- Macro F1: 0.37254901960784315
- Micro F1: 0.59375
- Weighted F1: 0.4424019607843137
- Macro Precision: 0.296875
- Micro Precision: 0.59375
- Weighted Precision: 0.3525390625
- Macro Recall: 0.5
- Micro Recall: 0.59375
- Weighted Recall: 0.59375
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/pierreant-p/autonlp-jcvd-or-linkedin-3471039
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("pierreant-p/autonlp-jcvd-or-linkedin-3471039", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("pierreant-p/autonlp-jcvd-or-linkedin-3471039", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
prajjwal1/ctrl_discovery_1 | 6cf8ada5621dc8e94a4d048122b8d19eca7eac49 | 2021-03-05T03:08:03.000Z | [
"pytorch",
"ctrl",
"text-generation",
"transformers"
]
| text-generation | false | prajjwal1 | null | prajjwal1/ctrl_discovery_1 | 8 | null | transformers | 13,197 | Entry not found |
pszemraj/pegasus-large-book-summary | d16d450423318a3bd57ffa8e4d174c5c6fe32a2b | 2022-01-30T01:04:30.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"en",
"dataset:kmfoda/booksum",
"transformers",
"summarization",
"license:apache-2.0",
"autotrain_compatible"
]
| summarization | false | pszemraj | null | pszemraj/pegasus-large-book-summary | 8 | null | transformers | 13,198 | ---
language:
- en
tags:
- summarization
- pegasus
license: apache-2.0
datasets:
- kmfoda/booksum
metrics:
- rouge
widget:
- text: "large earthquakes along a given fault segment do not occur at random intervals because it takes time to accumulate the strain energy for the rupture. The rates at which tectonic plates move and accumulate strain at their boundaries are approximately uniform. Therefore, in first approximation, one may expect that large ruptures of the same fault segment will occur at approximately constant time intervals. If subsequent main shocks have different amounts of slip across the fault, then the recurrence time may vary, and the basic idea of periodic mainshocks must be modified. For great plate boundary ruptures the length and slip often vary by a factor of 2. Along the southern segment of the San Andreas fault the recurrence interval is 145 years with variations of several decades. The smaller the standard deviation of the average recurrence interval, the more specific could be the long term prediction of a future mainshock."
example_title: "earthquakes"
- text: " A typical feed-forward neural field algorithm. Spatiotemporal coordinates are fed into a neural network that predicts values in the reconstructed domain. Then, this domain is mapped to the sensor domain where sensor measurements are available as supervision. Class and Section Problems Addressed Generalization (Section 2) Inverse problems, ill-posed problems, editability; symmetries. Hybrid Representations (Section 3) Computation & memory efficiency, representation capacity, editability: Forward Maps (Section 4) Inverse problems Network Architecture (Section 5) Spectral bias, integration & derivatives. Manipulating Neural Fields (Section 6) Edit ability, constraints, regularization. Table 2: The five classes of techniques in the neural field toolbox each addresses problems that arise in learning, inference, and control. (Section 3). We can supervise reconstruction via differentiable forward maps that transform Or project our domain (e.g, 3D reconstruction via 2D images; Section 4) With appropriate network architecture choices, we can overcome neural network spectral biases (blurriness) and efficiently compute derivatives and integrals (Section 5). Finally, we can manipulate neural fields to add constraints and regularizations, and to achieve editable representations (Section 6). Collectively, these classes constitute a 'toolbox' of techniques to help solve problems with neural fields There are three components in a conditional neural field: (1) An encoder or inference function € that outputs the conditioning latent variable 2 given an observation 0 E(0) =2. 2 is typically a low-dimensional vector, and is often referred to aS a latent code Or feature code_ (2) A mapping function 4 between Z and neural field parameters O: Y(z) = O; (3) The neural field itself $. The encoder € finds the most probable z given the observations O: argmaxz P(2/0). The decoder maximizes the inverse conditional probability to find the most probable 0 given Z: arg- max P(Olz). We discuss different encoding schemes with different optimality guarantees (Section 2.1.1), both global and local conditioning (Section 2.1.2), and different mapping functions Y (Section 2.1.3) 2. Generalization Suppose we wish to estimate a plausible 3D surface shape given a partial or noisy point cloud. We need a suitable prior over the sur- face in its reconstruction domain to generalize to the partial observations. A neural network expresses a prior via the function space of its architecture and parameters 0, and generalization is influenced by the inductive bias of this function space (Section 5)."
example_title: "scientific paper"
- text: " the big variety of data coming from diverse sources is one of the key properties of the big data phenomenon. It is, therefore, beneficial to understand how data is generated in various environments and scenarios, before looking at what should be done with this data and how to design the best possible architecture to accomplish this The evolution of IT architectures, described in Chapter 2, means that the data is no longer processed by a few big monolith systems, but rather by a group of services In parallel to the processing layer, the underlying data storage has also changed and became more distributed This, in turn, required a significant paradigm shift as the traditional approach to transactions (ACID) could no longer be supported. On top of this, cloud computing is becoming a major approach with the benefits of reducing costs and providing on-demand scalability but at the same time introducing concerns about privacy, data ownership, etc In the meantime the Internet continues its exponential growth: Every day both structured and unstructured data is published and available for processing: To achieve competitive advantage companies have to relate their corporate resources to external services, e.g. financial markets, weather forecasts, social media, etc While several of the sites provide some sort of API to access the data in a more orderly fashion; countless sources require advanced web mining and Natural Language Processing (NLP) processing techniques: Advances in science push researchers to construct new instruments for observing the universe O conducting experiments to understand even better the laws of physics and other domains. Every year humans have at their disposal new telescopes, space probes, particle accelerators, etc These instruments generate huge streams of data, which need to be stored and analyzed. The constant drive for efficiency in the industry motivates the introduction of new automation techniques and process optimization: This could not be done without analyzing the precise data that describe these processes. As more and more human tasks are automated, machines provide rich data sets, which can be analyzed in real-time to drive efficiency to new levels. Finally, it is now evident that the growth of the Internet of Things is becoming a major source of data. More and more of the devices are equipped with significant computational power and can generate a continuous data stream from their sensors. In the subsequent sections of this chapter, we will look at the domains described above to see what they generate in terms of data sets. We will compare the volumes but will also look at what is characteristic and important from their respective points of view. 3.1 The Internet is undoubtedly the largest database ever created by humans. While several well described; cleaned, and structured data sets have been made available through this medium, most of the resources are of an ambiguous, unstructured, incomplete or even erroneous nature. Still, several examples in the areas such as opinion mining, social media analysis, e-governance, etc, clearly show the potential lying in these resources. Those who can successfully mine and interpret the Internet data can gain unique insight and competitive advantage in their business An important area of data analytics on the edge of corporate IT and the Internet is Web Analytics."
example_title: "data science textbook"
inference:
parameters:
max_length: 64
no_repeat_ngram_size: 2
encoder_no_repeat_ngram_size: 3
repetition_penalty: 2.4
length_penalty: 0.5
num_beams: 4
early_stopping: True
---
# checkpoints
This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on the [booksum](https://github.com/salesforce/booksum) dataset.
## Model description
More information needed
## Intended uses & limitations
- Standard Pegasus has a maximum input length of 1024 tokens, so during training the model only saw the first 1024 tokens of each chapter and learned to produce the chapter summary from that. Keep this in mind when using the model: information near the end of an input longer than 1024 tokens may be excluded from the summary, and the model will be biased towards information presented first (see the usage sketch after this list).
- The model was trained on the dataset for only one epoch but still produces reasonable results.
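A usage sketch along those lines follows; the generation settings are copied from the widget parameters declared in this card's metadata, and the placeholder text stands in for a chapter.
```python
from transformers import pipeline

# Load the fine-tuned Pegasus checkpoint as a summarization pipeline
summarizer = pipeline("summarization", model="pszemraj/pegasus-large-book-summary")

# Placeholder: in practice, pass chapter-length text; only the first 1024 tokens are used
long_text = "The first ~1024 tokens of a chapter go here."

# Generation settings mirror the widget parameters in this card's metadata
summary = summarizer(
    long_text,
    max_length=64,
    no_repeat_ngram_size=2,
    encoder_no_repeat_ngram_size=3,
    repetition_penalty=2.4,
    length_penalty=0.5,
    num_beams=4,
    early_stopping=True,
    truncation=True,
)
print(summary[0]["summary_text"])
```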
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.10.3
|
rajratnpranesh/DCS_sanskrit_distilbert | c92da98a728778ed7c3ac515d62dbec6843df90a | 2021-05-20T03:53:33.000Z | [
"pytorch",
"jax",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | false | rajratnpranesh | null | rajratnpranesh/DCS_sanskrit_distilbert | 8 | null | transformers | 13,199 | Entry not found |