modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (list) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
huggingtweets/lavanyaai | 1104994108d8b9c0c335401666cc3e421eb9606d | 2021-05-22T11:42:16.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/lavanyaai | 4 | null | transformers | 18,700 | ---
language: en
thumbnail: http://www.huggingtweets.com/lavanyaai/1600320144154/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('http://pbs.twimg.com/profile_images/1302839376909488128/fPooODvu_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Lavanya 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@lavanyaai bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@lavanyaai's tweets](https://twitter.com/lavanyaai).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3187</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>1482</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>220</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>1485</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/1s4lpnmf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @lavanyaai's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/6zcv33k4) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/6zcv33k4/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/lavanyaai'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/mattwalshblog | 87a6eb0668f4229350cf331c63889d8dce17c243 | 2021-08-28T16:15:33.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/mattwalshblog | 4 | null | transformers | 18,701 | ---
language: en
thumbnail: https://www.huggingtweets.com/mattwalshblog/1630167154915/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1389695100045959168/WIluCszp_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Matt Walsh</div>
<div style="text-align: center; font-size: 14px;">@mattwalshblog</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Matt Walsh.
| Data | Matt Walsh |
| --- | --- |
| Tweets downloaded | 3240 |
| Retweets | 716 |
| Short tweets | 71 |
| Tweets kept | 2453 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2gnxwrlk/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
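For programmatic access, the same run can be inspected with the W&B public API; the sketch below is illustrative and not part of the original card, using the run path from the link above:

```python
import wandb

# Read-only public API; the run path comes from the W&B link above.
api = wandb.Api()
run = api.run("wandb/huggingtweets/2gnxwrlk")
for artifact in run.logged_artifacts():
    print(artifact.name, artifact.type)
```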
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mattwalshblog's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/uvdejb5p) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/uvdejb5p/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mattwalshblog')
generator("My dream is", num_return_sequences=5)
```
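For repeatable sampling, a seed can be fixed and decoding parameters tuned; the values below are illustrative, not the settings used by the card:

```python
from transformers import pipeline, set_seed

set_seed(42)
generator = pipeline('text-generation', model='huggingtweets/mattwalshblog')
# Extra keyword arguments are forwarded to model.generate()
generator("My dream is", num_return_sequences=5, max_length=50,
          do_sample=True, top_p=0.95)
```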
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/mralgore | 72491d7409bf61b33c6b4db8b3f23728534f6390 | 2021-07-09T06:46:35.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/mralgore | 4 | null | transformers | 18,702 | ---
language: en
thumbnail: https://www.huggingtweets.com/mralgore/1625813191802/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1379330213042065410/XmWaaQtK_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Mr. Al Gore 🇺🇸 🏗</div>
<div style="text-align: center; font-size: 14px;">@mralgore</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Mr. Al Gore 🇺🇸 🏗.
| Data | Mr. Al Gore 🇺🇸 🏗 |
| --- | --- |
| Tweets downloaded | 1663 |
| Retweets | 48 |
| Short tweets | 409 |
| Tweets kept | 1206 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/lb6ro1nm/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mralgore's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2hcr10go) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2hcr10go/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mralgore')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/nftmansa | 00c86ef6309065a7240eed3ac86308ee2758ec97 | 2021-08-18T21:04:18.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/nftmansa | 4 | null | transformers | 18,703 | ---
language: en
thumbnail: https://www.huggingtweets.com/nftmansa/1629320654994/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1398377108007755781/nmudFxl3_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">NFT</div>
<div style="text-align: center; font-size: 14px;">@nftmansa</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from NFT.
| Data | NFT |
| --- | --- |
| Tweets downloaded | 3223 |
| Retweets | 3037 |
| Short tweets | 36 |
| Tweets kept | 150 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/wwiy7t0n/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @nftmansa's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/b9rzi99p) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/b9rzi99p/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/nftmansa')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/patrick_exo | b81b0e771a334920d7f5b432485fb520796681e5 | 2021-05-22T18:08:35.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/patrick_exo | 4 | null | transformers | 18,704 | ---
language: en
thumbnail: https://www.huggingtweets.com/patrick_exo/1616890694033/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1094064355363250177/pggQx93t_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Patrick N 🤖 AI Bot </div>
<div style="font-size: 15px">@patrick_exo bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@patrick_exo's tweets](https://twitter.com/patrick_exo).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3233 |
| Retweets | 476 |
| Short tweets | 269 |
| Tweets kept | 2488 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2a0ktkyk/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @patrick_exo's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2weililh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2weililh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/patrick_exo')
generator("My dream is", num_return_sequences=5)
```
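As an alternative to the pipeline, the checkpoint can be loaded directly and sampled with `generate`; this is a sketch with illustrative decoding parameters, not the card's own recipe:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("huggingtweets/patrick_exo")
model = AutoModelForCausalLM.from_pretrained("huggingtweets/patrick_exo")

inputs = tokenizer("My dream is", return_tensors="pt")
outputs = model.generate(**inputs, max_length=40, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```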
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/realbenfishbein | fd6cd2015cbfa11f066dd11faddd0716f51288e8 | 2021-07-24T05:27:00.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/realbenfishbein | 4 | null | transformers | 18,705 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1349511600974278662/7v0yTYob_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Ben Fishbein</div>
<div style="text-align: center; font-size: 14px;">@realbenfishbein</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Ben Fishbein.
| Data | Ben Fishbein |
| --- | --- |
| Tweets downloaded | 261 |
| Retweets | 8 |
| Short tweets | 30 |
| Tweets kept | 223 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2idreqex/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @realbenfishbein's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3me55h26) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3me55h26/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/realbenfishbein')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/robertodcrsj | 44a5029f832a24920088aa74b740647d5c2571b0 | 2021-05-22T21:15:44.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/robertodcrsj | 4 | null | transformers | 18,706 | ---
language: en
thumbnail: http://res.cloudinary.com/huggingtweets/image/upload/v1600086568/robertodcrsj.jpg
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('http://pbs.twimg.com/profile_images/1096124734440394752/2UhdoXP3_400x400.png')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Roberto 🤖 💻 📉 🐍💙 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@robertodcrsj bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@robertodcrsj's tweets](https://twitter.com/robertodcrsj).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>483</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>302</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>26</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>155</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/3fi4a9v5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @robertodcrsj's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/3gsz62al) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/3gsz62al/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/robertodcrsj'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/stephencurry30 | fc62796864024baa39bf7ec7ec0339a9e1384544 | 2022-04-02T22:43:52.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/stephencurry30 | 4 | null | transformers | 18,707 | ---
language: en
thumbnail: http://www.huggingtweets.com/stephencurry30/1648939428122/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1484233608793518081/tOID8aXq_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Stephen Curry</div>
<div style="text-align: center; font-size: 14px;">@stephencurry30</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Stephen Curry.
| Data | Stephen Curry |
| --- | --- |
| Tweets downloaded | 3190 |
| Retweets | 384 |
| Short tweets | 698 |
| Tweets kept | 2108 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2n8n86da/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @stephencurry30's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/24mjh4p6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/24mjh4p6/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/stephencurry30')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/theofficetv | 2f6075cccd48b9e457f2b96a583399e1af3c083e | 2021-09-14T23:33:05.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/theofficetv | 4 | null | transformers | 18,708 | ---
language: en
thumbnail: https://www.huggingtweets.com/theofficetv/1631662381899/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1397240738493001729/Unk8D_yT_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">The Office on Peacock</div>
<div style="text-align: center; font-size: 14px;">@theofficetv</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from The Office on Peacock.
| Data | The Office on Peacock |
| --- | --- |
| Tweets downloaded | 3215 |
| Retweets | 459 |
| Short tweets | 592 |
| Tweets kept | 2164 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3dwxnzp9/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @theofficetv's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1mnr0e28) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1mnr0e28/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/theofficetv')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/titusoneeeeil | ee29d992271a764c492a257219ae860e74da7355 | 2021-05-23T02:28:15.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/titusoneeeeil | 4 | null | transformers | 18,709 | ---
language: en
thumbnail: https://www.huggingtweets.com/titusoneeeeil/1618617702995/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1381694077788422147/gxj1pLW2_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Tart Sophistry 🤖 AI Bot </div>
<div style="font-size: 15px">@titusoneeeeil bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@titusoneeeeil's tweets](https://twitter.com/titusoneeeeil).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 338 |
| Retweets | 32 |
| Short tweets | 43 |
| Tweets kept | 263 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/4hpwbrd2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @titusoneeeeil's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/23b9ala1) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/23b9ala1/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/titusoneeeeil')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/vsshole | 6e5043a403f0eede5b2d3206fc15b800ad09c32a | 2022-05-10T21:24:12.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/vsshole | 4 | null | transformers | 18,710 | ---
language: en
thumbnail: http://www.huggingtweets.com/vsshole/1652217847985/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1475160033826586625/ZGf3YqfN_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">🌺 m ny 🐝🐙</div>
<div style="text-align: center; font-size: 14px;">@vsshole</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 🌺 m ny 🐝🐙.
| Data | 🌺 m ny 🐝🐙 |
| --- | --- |
| Tweets downloaded | 3221 |
| Retweets | 382 |
| Short tweets | 1727 |
| Tweets kept | 1112 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3f393wuv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @vsshole's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/29sa4yhp) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/29sa4yhp/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/vsshole')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/weedsle | f2bb9f6fc1088941cc081254fce4e8256c29f700 | 2021-06-24T03:44:35.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/weedsle | 4 | null | transformers | 18,711 | ---
language: en
thumbnail: https://www.huggingtweets.com/weedsle/1624506233926/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1405834432234364928/41kQSLqT_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Kingus🔞</div>
<div style="text-align: center; font-size: 14px;">@weedsle</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Kingus🔞.
| Data | Kingus🔞 |
| --- | --- |
| Tweets downloaded | 1219 |
| Retweets | 270 |
| Short tweets | 157 |
| Tweets kept | 792 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3ozegyos/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @weedsle's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2igdgxfs) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2igdgxfs/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/weedsle')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huyue012/wav2vec2-base-cynthia-tedlium-2500-v2 | aaf373ff9f66a6adc47cd35f5feb63e8abacf40e | 2021-11-19T04:09:16.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | huyue012 | null | huyue012/wav2vec2-base-cynthia-tedlium-2500-v2 | 4 | null | transformers | 18,712 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-cynthia-tedlium-2500-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-cynthia-tedlium-2500-v2
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6425
- Wer: 0.2033
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` sketch follows the list):
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 50
- mixed_precision_training: Native AMP
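A rough mapping of these settings onto `transformers.TrainingArguments` is sketched below; `output_dir` and anything not listed above are placeholders, not values from the original run:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-base-cynthia-tedlium-2500-v2",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # total train batch size 32
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=50,
    fp16=True,                       # "Native AMP" mixed precision
)
```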
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1196 | 6.58 | 500 | 0.6498 | 0.2103 |
| 0.1176 | 13.16 | 1000 | 0.6490 | 0.2169 |
| 0.1227 | 19.73 | 1500 | 0.6241 | 0.2127 |
| 0.1078 | 26.31 | 2000 | 0.6359 | 0.2118 |
| 0.0956 | 32.89 | 2500 | 0.6330 | 0.2073 |
| 0.1008 | 39.47 | 3000 | 0.6816 | 0.2036 |
| 0.09 | 46.05 | 3500 | 0.6425 | 0.2033 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.13.3
- Tokenizers 0.10.3
|
hyerim/distilbert-base-uncased-finetuned-ner | 45e43e3deebc594e3032e1b7fd0af411ab2757e4 | 2022-02-15T08:37:29.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | hyerim | null | hyerim/distilbert-base-uncased-finetuned-ner | 4 | null | transformers | 18,713 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9273570324574961
- name: Recall
type: recall
value: 0.9397024275646045
- name: F1
type: f1
value: 0.9334889148191365
- name: Accuracy
type: accuracy
value: 0.9837641190207635
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0617
- Precision: 0.9274
- Recall: 0.9397
- F1: 0.9335
- Accuracy: 0.9838
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2403 | 1.0 | 878 | 0.0714 | 0.9171 | 0.9216 | 0.9193 | 0.9805 |
| 0.0555 | 2.0 | 1756 | 0.0604 | 0.9206 | 0.9347 | 0.9276 | 0.9829 |
| 0.031 | 3.0 | 2634 | 0.0617 | 0.9274 | 0.9397 | 0.9335 | 0.9838 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.7.1
- Datasets 1.18.3
- Tokenizers 0.10.1
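The card does not include a usage snippet; a minimal sketch with the standard token-classification pipeline (the example sentence is arbitrary) would look like this:

```python
from transformers import pipeline

ner = pipeline("token-classification",
               model="hyerim/distilbert-base-uncased-finetuned-ner",
               aggregation_strategy="simple")
ner("Hugging Face is based in New York City.")
```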
|
hyyoka/wav2vec2-xlsr-korean-senior | 9c00dc1ccac66c7486406bbd8ab89a92d38966f7 | 2022-01-28T06:08:19.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"kr",
"dataset:aihub 자유대화 음성(노인남녀)",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | hyyoka | null | hyyoka/wav2vec2-xlsr-korean-senior | 4 | null | transformers | 18,714 | ---
language: kr
datasets:
- aihub 자유대화 음성(노인남녀)
tags:
- automatic-speech-recognition
license: apache-2.0
---
# wav2vec2-xlsr-korean-senior
Further fine-tuned [fleek/wav2vec-large-xlsr-korean](https://huggingface.co/fleek/wav2vec-large-xlsr-korean) using the [AIhub 자유대화 음성(노인남녀)](https://aihub.or.kr/aidata/30704) dataset (AIHub free-conversation speech of elderly male and female speakers).
- Total train data size: 808,642
- Total valid data size: 159,970
When using this model, make sure that your speech input is sampled at 16kHz.
The script used for training can be found here: https://github.com/hyyoka/wav2vec2-korean-senior
### Inference
``` py
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
import re
def clean_up(transcription):
hangul = re.compile('[^ ㄱ-ㅣ가-힣]+')
result = hangul.sub('', transcription)
return result
model_name "hyyoka/wav2vec2-xlsr-korean-senior"
processor = Wav2Vec2Processor.from_pretrained(model_name)
model = Wav2Vec2ForCTC.from_pretrained(model_name)
speech_array, sampling_rate = torchaudio.load(wav_file)  # wav_file: path to the input audio file
feat = processor(speech_array[0],
sampling_rate=16000,
padding=True,
max_length=800000,
truncation=True,
return_attention_mask=True,
return_tensors="pt",
pad_token_id=49
)
inputs = {'input_values': feat['input_values'], 'attention_mask': feat['attention_mask']}
outputs = model(**inputs, output_attentions=True)
logits = outputs.logits
predicted_ids = logits.argmax(axis=-1)
transcription = processor.decode(predicted_ids[0])
stt_result = clean_up(transcription)
```
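If the source audio is not already at 16 kHz, it can be resampled before calling the processor; a minimal sketch using torchaudio (the file path is a placeholder):

```py
import torchaudio

speech_array, sampling_rate = torchaudio.load("speech.wav")  # placeholder path
if sampling_rate != 16000:
    resampler = torchaudio.transforms.Resample(orig_freq=sampling_rate, new_freq=16000)
    speech_array = resampler(speech_array)
    sampling_rate = 16000
```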
|
ikevin98/bert-base-uncased-sst2-distilled | 204f39867b42a7bbcb4fab7e43f7da6d05c1e579 | 2021-08-12T14:03:32.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] | text-classification | false | ikevin98 | null | ikevin98/bert-base-uncased-sst2-distilled | 4 | null | transformers | 18,715 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model_index:
name: bert-base-uncased-sst2-distilled
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-sst2-distilled
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2676
- Accuracy: 0.9025
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3797 | 1.0 | 2105 | 0.2512 | 0.9002 |
| 0.3036 | 2.0 | 4210 | 0.2643 | 0.8933 |
| 0.2609 | 3.0 | 6315 | 0.2831 | 0.8956 |
| 0.2417 | 4.0 | 8420 | 0.2676 | 0.9025 |
| 0.2305 | 5.0 | 10525 | 0.2740 | 0.9025 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.8.1
- Datasets 1.11.0
- Tokenizers 0.10.1
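The card lacks a usage example; a minimal sketch with the text-classification pipeline (the input sentence is arbitrary) might look like this:

```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="ikevin98/bert-base-uncased-sst2-distilled")
classifier("This movie was surprisingly good.")
```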
|
ikevin98/bert-base-uncased-sst2-membership-attack | a879ec2c2e71bde8a005fda85e2f448f91d2b44e | 2021-09-12T15:14:11.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] | text-classification | false | ikevin98 | null | ikevin98/bert-base-uncased-sst2-membership-attack | 4 | null | transformers | 18,716 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model_index:
name: bert-base-uncased-sst2-membership-attack
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-sst2-membership-attack
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6296
- Accuracy: 0.8681
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.6921 | 1.0 | 3813 | 0.6263 | 0.8360 |
| 0.6916 | 2.0 | 7626 | 0.6296 | 0.8681 |
| 0.6904 | 3.0 | 11439 | 0.6105 | 0.8406 |
| 0.6886 | 4.0 | 15252 | 0.6192 | 0.8200 |
| 0.6845 | 5.0 | 19065 | 0.6250 | 0.7798 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.8.1
- Datasets 1.11.0
- Tokenizers 0.10.1
|
indridinn/distilbert-base-uncased-finetuned-ner | fbcc692e4a78f2e53edb1ff4af0c9d9ecba8b451 | 2021-10-01T22:29:15.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | indridinn | null | indridinn/distilbert-base-uncased-finetuned-ner | 4 | null | transformers | 18,717 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9274720407485328
- name: Recall
type: recall
value: 0.9370175634858485
- name: F1
type: f1
value: 0.932220367278798
- name: Accuracy
type: accuracy
value: 0.9836370279759162
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0610
- Precision: 0.9275
- Recall: 0.9370
- F1: 0.9322
- Accuracy: 0.9836
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2507 | 1.0 | 878 | 0.0714 | 0.9181 | 0.9243 | 0.9212 | 0.9813 |
| 0.0516 | 2.0 | 1756 | 0.0617 | 0.9208 | 0.9325 | 0.9266 | 0.9828 |
| 0.0306 | 3.0 | 2634 | 0.0610 | 0.9275 | 0.9370 | 0.9322 | 0.9836 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
infinitejoy/wav2vec2-large-xls-r-300m-breton | 4b89b33f57438aee6d3e781426b458a7f8011752 | 2022-03-23T18:33:01.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"br",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | infinitejoy | null | infinitejoy/wav2vec2-large-xls-r-300m-breton | 4 | null | transformers | 18,718 | ---
language:
- br
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Breton
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: br
metrics:
- name: Test WER
type: wer
value: 107.955
- name: Test CER
type: cer
value: 379.33
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-breton
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - BR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6102
- Wer: 0.4455
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.9205 | 3.33 | 500 | 2.8659 | 1.0 |
| 1.6403 | 6.67 | 1000 | 0.9440 | 0.7593 |
| 1.3483 | 10.0 | 1500 | 0.7580 | 0.6215 |
| 1.2255 | 13.33 | 2000 | 0.6851 | 0.5722 |
| 1.1139 | 16.67 | 2500 | 0.6409 | 0.5220 |
| 1.0688 | 20.0 | 3000 | 0.6245 | 0.5055 |
| 0.99 | 23.33 | 3500 | 0.6142 | 0.4874 |
| 0.9345 | 26.67 | 4000 | 0.5946 | 0.4829 |
| 0.9058 | 30.0 | 4500 | 0.6229 | 0.4704 |
| 0.8683 | 33.33 | 5000 | 0.6153 | 0.4666 |
| 0.8367 | 36.67 | 5500 | 0.5952 | 0.4542 |
| 0.8162 | 40.0 | 6000 | 0.6030 | 0.4541 |
| 0.8042 | 43.33 | 6500 | 0.5972 | 0.4485 |
| 0.7836 | 46.67 | 7000 | 0.6070 | 0.4497 |
| 0.7556 | 50.0 | 7500 | 0.6102 | 0.4455 |
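The word error rate reported above can be computed for any reference/hypothesis pair with the `jiwer` package; this is one possible way to evaluate, not necessarily the script used for the card, and the example strings are arbitrary:

```python
from jiwer import wer

reference = "this is an example reference transcription"
hypothesis = "this is an example reference transcriptions"
print(wer(reference, hypothesis))  # word error rate as a fraction
```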
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
infinitejoy/wav2vec2-large-xls-r-300m-lithuanian | 00fd19637efbf149d18d762b64967bdf5eee76e6 | 2022-03-24T11:58:06.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"lt",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | infinitejoy | null | infinitejoy/wav2vec2-large-xls-r-300m-lithuanian | 4 | null | transformers | 18,719 | ---
language:
- lt
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
- lt
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Lithuanian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: lt
metrics:
- name: Test WER
type: wer
value: 24.859
- name: Test CER
type: cer
value: 4.764
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-lithuanian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - LT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1722
- Wer: 0.2486
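
For reference, a manual-inference sketch (not part of the original card) that uses the processor and CTC head directly; the `speech` array below is a placeholder standing in for real 16 kHz mono samples:

```python
# Manual greedy CTC decoding sketch; replace `speech` with real 16 kHz mono audio samples.
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "infinitejoy/wav2vec2-large-xls-r-300m-lithuanian"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech = [0.0] * 16000  # placeholder: one second of silence at 16 kHz
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values).logits

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```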
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.6837 | 8.0 | 2000 | 0.6649 | 0.7515 |
| 1.1105 | 16.0 | 4000 | 0.2386 | 0.3436 |
| 1.0069 | 24.0 | 6000 | 0.2008 | 0.2968 |
| 0.9417 | 32.0 | 8000 | 0.1915 | 0.2774 |
| 0.887 | 40.0 | 10000 | 0.1819 | 0.2616 |
| 0.8563 | 48.0 | 12000 | 0.1729 | 0.2475 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
irvingpop/dreambank | 70a11df1448e85c8bfcd7b833b5f81222872e82d | 2021-05-23T05:34:04.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | irvingpop | null | irvingpop/dreambank | 4 | null | transformers | 18,720 | Entry not found |
ismaelardo/BETO_3d | 191d2374b38e26ebf3104226ed756207b7d08c21 | 2021-10-11T18:50:46.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | ismaelardo | null | ismaelardo/BETO_3d | 4 | null | transformers | 18,721 | This is the first BETO_3D test model. |
it5/it5-large-formal-to-informal | 5403483eb81159ece9fd3e74c56a3b2553192975 | 2022-03-09T07:46:17.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"it",
"dataset:yahoo/xformal_it",
"arxiv:2203.03759",
"transformers",
"italian",
"sequence-to-sequence",
"style-transfer",
"formality-style-transfer",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | it5 | null | it5/it5-large-formal-to-informal | 4 | null | transformers | 18,722 | ---
language:
- it
license: apache-2.0
tags:
- italian
- sequence-to-sequence
- style-transfer
- formality-style-transfer
datasets:
- yahoo/xformal_it
widget:
- text: "Questa performance è a dir poco spiacevole."
- text: "In attesa di un Suo cortese riscontro, Le auguriamo un piacevole proseguimento di giornata."
- text: "Questa visione mi procura una goduria indescrivibile."
- text: "qualora ciò possa interessarti, ti pregherei di contattarmi."
metrics:
- rouge
- bertscore
model-index:
- name: it5-large-formal-to-informal
results:
- task:
type: formality-style-transfer
name: "Formal-to-informal Style Transfer"
dataset:
type: xformal_it
name: "XFORMAL (Italian Subset)"
metrics:
- type: rouge1
value: 0.611
name: "Avg. Test Rouge1"
- type: rouge2
value: 0.409
name: "Avg. Test Rouge2"
- type: rougeL
value: 0.586
name: "Avg. Test RougeL"
- type: bertscore
value: 0.613
name: "Avg. Test BERTScore"
args:
- model_type: "dbmdz/bert-base-italian-xxl-uncased"
- lang: "it"
- num_layers: 10
- rescale_with_baseline: True
- baseline_path: "bertscore_baseline_ita.tsv"
co2_eq_emissions:
emissions: "51g"
source: "Google Cloud Platform Carbon Footprint"
training_type: "fine-tuning"
geographical_location: "Eemshaven, Netherlands, Europe"
hardware_used: "1 TPU v3-8 VM"
---
# IT5 Large for Formal-to-informal Style Transfer 🤗
This repository contains the checkpoint for the [IT5 Large](https://huggingface.co/gsarti/it5-large) model fine-tuned on Formal-to-informal style transfer on the Italian subset of the XFORMAL dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
f2i = pipeline("text2text-generation", model='it5/it5-large-formal-to-informal')
f2i("Vi ringrazio infinitamente per vostra disponibilità")
>>> [{"generated_text": "e grazie per la vostra disponibilità!"}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/it5-large-formal-to-informal")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-large-formal-to-informal")
```
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
``` |
it5/mt5-small-formal-to-informal | 4f5f37750996656fa9f96f7dac169cbc10c5fe6b | 2022-03-09T07:44:42.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"mt5",
"text2text-generation",
"it",
"dataset:yahoo/xformal_it",
"arxiv:2203.03759",
"transformers",
"italian",
"sequence-to-sequence",
"style-transfer",
"formality-style-transfer",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | it5 | null | it5/mt5-small-formal-to-informal | 4 | null | transformers | 18,723 | ---
language:
- it
license: apache-2.0
tags:
- italian
- sequence-to-sequence
- style-transfer
- formality-style-transfer
datasets:
- yahoo/xformal_it
widget:
- text: "Questa performance è a dir poco spiacevole."
- text: "In attesa di un Suo cortese riscontro, Le auguriamo un piacevole proseguimento di giornata."
- text: "Questa visione mi procura una goduria indescrivibile."
- text: "qualora ciò possa interessarti, ti pregherei di contattarmi."
metrics:
- rouge
- bertscore
model-index:
- name: mt5-small-formal-to-informal
results:
- task:
type: formality-style-transfer
name: "Formal-to-informal Style Transfer"
dataset:
type: xformal_it
name: "XFORMAL (Italian Subset)"
metrics:
- type: rouge1
value: 0.651
name: "Avg. Test Rouge1"
- type: rouge2
value: 0.450
name: "Avg. Test Rouge2"
- type: rougeL
value: 0.631
name: "Avg. Test RougeL"
- type: bertscore
value: 0.666
name: "Avg. Test BERTScore"
args:
- model_type: "dbmdz/bert-base-italian-xxl-uncased"
- lang: "it"
- num_layers: 10
- rescale_with_baseline: True
- baseline_path: "bertscore_baseline_ita.tsv"
co2_eq_emissions:
emissions: "17g"
source: "Google Cloud Platform Carbon Footprint"
training_type: "fine-tuning"
geographical_location: "Eemshaven, Netherlands, Europe"
hardware_used: "1 TPU v3-8 VM"
---
# mT5 Small for Formal-to-informal Style Transfer 🤗
This repository contains the checkpoint for the [mT5 Small](https://huggingface.co/google/mt5-small) model fine-tuned on Formal-to-informal style transfer on the Italian subset of the XFORMAL dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
f2i = pipeline("text2text-generation", model='it5/mt5-small-formal-to-informal')
f2i("Vi ringrazio infinitamente per vostra disponibilità")
>>> [{"generated_text": "e grazie per la vostra disponibilità!"}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/mt5-small-formal-to-informal")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/mt5-small-formal-to-informal")
```
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
``` |
izumi-lab/electra-small-japanese-generator | bc628256863d34f1aa6df9ef9a405607f979152b | 2022-03-19T09:39:43.000Z | [
"pytorch",
"electra",
"fill-mask",
"ja",
"dataset:wikipedia",
"arxiv:2003.10555",
"transformers",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | izumi-lab | null | izumi-lab/electra-small-japanese-generator | 4 | null | transformers | 18,724 | ---
language: ja
license: cc-by-sa-4.0
datasets:
- wikipedia
widget:
- text: 東京大学で[MASK]の研究をしています。
---
# ELECTRA small Japanese generator
This is an [ELECTRA](https://github.com/google-research/electra) model pretrained on texts in the Japanese language.
The codes for the pretraining are available at [retarfi/language-pretraining](https://github.com/retarfi/language-pretraining/tree/v1.0).
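
A minimal fill-mask sketch (not part of the original card), reusing the widget sentence above; note that the MeCab-based tokenizer may additionally require the `fugashi` and `ipadic` packages:

```python
# Fill-mask sketch for the generator checkpoint; the example sentence comes from the card's widget.
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="izumi-lab/electra-small-japanese-generator",
    tokenizer="izumi-lab/electra-small-japanese-generator",
)
print(fill_mask("東京大学で[MASK]の研究をしています。"))
```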
## Model architecture
The model architecture is the same as ELECTRA small in the [original ELECTRA implementation](https://github.com/google-research/electra); 12 layers, 256 dimensions of hidden states, and 4 attention heads.
## Training Data
The models are trained on the Japanese version of Wikipedia.
The training corpus is generated from the Japanese Wikipedia dump file as of June 1, 2021.
The corpus file is 2.9GB, consisting of approximately 20M sentences.
## Tokenization
The texts are first tokenized by MeCab with the IPA dictionary and then split into subwords by the WordPiece algorithm.
The vocabulary size is 32768.
## Training
The models are trained with the same configuration as ELECTRA small in the [original ELECTRA paper](https://arxiv.org/abs/2003.10555) except for the model size: 128 tokens per instance, 128 instances per batch, and 1M training steps.
The size of the generator is the same as that of the discriminator.
## Citation
**There will be another paper for this pretrained model. Be sure to check here again when you cite.**
```
@inproceedings{suzuki2021fin-bert-electra,
title={金融文書を用いた事前学習言語モデルの構築と検証},
% title={Construction and Validation of a Pre-Trained Language Model Using Financial Documents},
author={鈴木 雅弘 and 坂地 泰紀 and 平野 正徳 and 和泉 潔},
% author={Masahiro Suzuki and Hiroki Sakaji and Masanori Hirano and Kiyoshi Izumi},
booktitle={人工知能学会第27回金融情報学研究会(SIG-FIN)},
 % booktitle={Proceedings of JSAI Special Interest Group on Financial Informatics (SIG-FIN) 27},
pages={5-10},
year={2021}
}
```
## Licenses
The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 4.0](https://creativecommons.org/licenses/by-sa/4.0/).
## Acknowledgments
This work was supported by JSPS KAKENHI Grant Number JP21K12010.
|
jambo/microsoftBio-renet | 25237a5a3713e6db8ebaf50416dd661e86eaeb3d | 2021-07-15T11:41:27.000Z | [
"pytorch",
"bert",
"text-classification",
"dataset:renet",
"transformers",
"generated_from_trainer",
"license:mit"
] | text-classification | false | jambo | null | jambo/microsoftBio-renet | 4 | null | transformers | 18,725 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- renet
metrics:
- precision
- recall
- f1
- accuracy
model_index:
- name: BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-renet
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: renet
type: renet
metric:
name: Accuracy
type: accuracy
value: 0.8640646029609691
---
# BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-renet
A model for detecting gene disease associations from abstracts. The model classifies as 0 for no association, or 1 for some association.
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the [RENET2](https://github.com/sujunhao/RENET2) dataset. Note that this considers only the abstract data, and not the full text information, from RENET2.
It achieves the following results on the evaluation set:
- Loss: 0.7226
- Precision: 0.7799
- Recall: 0.8211
- F1: 0.8
- Accuracy: 0.8641
- Auc: 0.9325
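
As a rough illustration (not from the original card), the checkpoint can be queried through the text-classification pipeline; the abstract-style sentence below is invented, and the returned label names follow the default `LABEL_0`/`LABEL_1` scheme unless they were renamed in the config:

```python
# Illustrative gene-disease association scoring; the input sentence is made up.
from transformers import pipeline

classifier = pipeline("text-classification", model="jambo/microsoftBio-renet")
text = "Mutations in BRCA1 are associated with an increased risk of breast cancer."
print(classifier(text))
```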
## Training procedure
The abstract dataset from RENET2 was split into 85% train and 15% evaluation, grouped by PMID and stratified by label. That is, no data from the same PMID appeared in both the training and the evaluation set.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.9.0.dev0
- Pytorch 1.10.0.dev20210630+cu113
- Datasets 1.8.0
- Tokenizers 0.10.3
|
jamescalam/bert-stsb-gold | cf7db3d2554fe7b6522db92a0a7c62cb06880bd6 | 2021-12-17T08:57:06.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | jamescalam | null | jamescalam/bert-stsb-gold | 4 | null | sentence-transformers | 18,726 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Gold-only BERT STSb
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
It is used as a demo model within the [NLP for Semantic Search course](https://www.pinecone.io/learn/nlp), for the chapter on [In-domain Data Augmentation with BERT](https://www.pinecone.io/learn/data-augmentation/).
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('bert-stsb-gold')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('bert-stsb-gold')
model = AutoModel.from_pretrained('bert-stsb-gold')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 360 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 36,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
|
jannesg/takalane_tso_roberta | b00754d531d85baf66238450b313b1c298dfd2b1 | 2021-09-22T08:52:13.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"ts",
"transformers",
"masked-lm",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | jannesg | null | jannesg/takalane_tso_roberta | 4 | null | transformers | 18,727 | ---
language:
- ts
thumbnail: https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg
tags:
- ts
- fill-mask
- pytorch
- roberta
- masked-lm
license: mit
---
# Takalani Sesame - Tsonga 🇿🇦
<img src="https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg" width="600"/>
## Model description
Takalani Sesame (named after the South African version of Sesame Street) is a project that aims to promote the use of South African languages in NLP and, in particular, looks at techniques for low-resource languages to equalise performance with larger languages around the world.
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("jannesg/takalane_tso_roberta")
model = AutoModelWithLMHead.from_pretrained("jannesg/takalane_tso_roberta")
```
#### Limitations and bias
Updates will be added continuously to improve performance.
## Training data
Data collected from [https://wortschatz.uni-leipzig.de/en](https://wortschatz.uni-leipzig.de/en) <br/>
**Sentences:** 20000
## Training procedure
No preprocessing. Standard Huggingface hyperparameters.
## Author
Jannes Germishuys [website](http://jannesgg.github.io)
|
jasonwu/ToD-BERT-jnt | 9a8f1d54228d49925598f0da4c3a4e0fe243ab67 | 2021-05-19T20:38:18.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | jasonwu | null | jasonwu/ToD-BERT-jnt | 4 | null | transformers | 18,728 | Entry not found |
jcblaise/distilbert-tagalog-base-cased | 5f6564b196e6869af9e9cb5bfcac09b63ae03219 | 2021-11-12T03:20:40.000Z | [
"pytorch",
"jax",
"distilbert",
"tl",
"transformers",
"bert",
"tagalog",
"filipino",
"license:gpl-3.0"
] | null | false | jcblaise | null | jcblaise/distilbert-tagalog-base-cased | 4 | null | transformers | 18,729 | ---
language: tl
tags:
- distilbert
- bert
- tagalog
- filipino
license: gpl-3.0
inference: false
---
**Deprecation Notice**
This model is deprecated. New Filipino Transformer models trained with a much larger corpora are available.
Use [`jcblaise/roberta-tagalog-base`](https://huggingface.co/jcblaise/roberta-tagalog-base) or [`jcblaise/roberta-tagalog-large`](https://huggingface.co/jcblaise/roberta-tagalog-large) instead for better performance.
---
# DistilBERT Tagalog Base Cased
Tagalog version of DistilBERT, distilled from [`bert-tagalog-base-cased`](https://huggingface.co/jcblaise/bert-tagalog-base-cased). This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community.
## Usage
The model can be loaded and used in both PyTorch and TensorFlow through the HuggingFace Transformers package.
```python
from transformers import TFAutoModel, AutoModel, AutoTokenizer
# TensorFlow
model = TFAutoModel.from_pretrained('jcblaise/distilbert-tagalog-base-cased', from_pt=True)
tokenizer = AutoTokenizer.from_pretrained('jcblaise/distilbert-tagalog-base-cased', do_lower_case=False)
# PyTorch
model = AutoModel.from_pretrained('jcblaise/distilbert-tagalog-base-cased')
tokenizer = AutoTokenizer.from_pretrained('jcblaise/distilbert-tagalog-base-cased', do_lower_case=False)
```
Finetuning scripts and other utilities we use for our projects can be found in our centralized repository at https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
## Citations
All model details and training setups can be found in our papers. If you use our model or find it useful in your projects, please cite our work:
```
@article{cruz2020establishing,
title={Establishing Baselines for Text Classification in Low-Resource Languages},
author={Cruz, Jan Christian Blaise and Cheng, Charibeth},
journal={arXiv preprint arXiv:2005.02068},
year={2020}
}
@article{cruz2019evaluating,
title={Evaluating Language Model Finetuning Techniques for Low-resource Languages},
author={Cruz, Jan Christian Blaise and Cheng, Charibeth},
journal={arXiv preprint arXiv:1907.00409},
year={2019}
}
```
## Data and Other Resources
Data used to train this model as well as other benchmark datasets in Filipino can be found in my website at https://blaisecruz.com
## Contact
If you have questions, concerns, or if you just want to chat about NLP and low-resource languages in general, you may reach me through my work email at [email protected]
|
jfarray/Model_dccuchile_bert-base-spanish-wwm-uncased_100_Epochs | d5bd386db77a099bcac7517416dff7f571c8446f | 2022-02-14T22:15:16.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity"
] | sentence-similarity | false | jfarray | null | jfarray/Model_dccuchile_bert-base-spanish-wwm-uncased_100_Epochs | 4 | null | sentence-transformers | 18,730 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 100,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 110,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jfarray/Model_dccuchile_bert-base-spanish-wwm-uncased_5_Epochs | b32d4cf9eaa9e8d58facd04244a68d66fcdd1ae3 | 2022-02-14T20:57:30.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity"
] | sentence-similarity | false | jfarray | null | jfarray/Model_dccuchile_bert-base-spanish-wwm-uncased_5_Epochs | 4 | null | sentence-transformers | 18,731 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 6,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jhonparra18/wav2vec2-xls-r-300m-spanish-large-noLM | 71931a15d9f1fcb937966d863b0fe1b06edf3bdb | 2022-02-08T13:27:14.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"es",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jhonparra18 | null | jhonparra18/wav2vec2-xls-r-300m-spanish-large-noLM | 4 | null | transformers | 18,732 | ---
license: apache-2.0
tags:
- generated_from_trainer
- "es"
- "robust-speech-event"
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-spanish-large
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-spanish-large
This model is a fine-tuned version of [tomascufaro/xls-r-es-test](https://huggingface.co/tomascufaro/xls-r-es-test) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1431
- Wer: 0.1197
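
A rough evaluation sketch (not part of the original card): transcribe a local clip and score it against a reference transcript using the external `jiwer` package; the file name and reference text are placeholders:

```python
# Transcription plus WER scoring sketch; audio path and reference transcript are placeholders.
from transformers import pipeline
from jiwer import wer

asr = pipeline(
    "automatic-speech-recognition",
    model="jhonparra18/wav2vec2-xls-r-300m-spanish-large-noLM",
)
hypothesis = asr("ejemplo_es.wav")["text"]
reference = "hola esto es una prueba"  # placeholder ground truth
print(hypothesis, wer(reference, hypothesis))
```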
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 10
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 20
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.1769 | 0.15 | 400 | 0.1795 | 0.1698 |
| 0.217 | 0.3 | 800 | 0.2000 | 0.1945 |
| 0.2372 | 0.45 | 1200 | 0.1985 | 0.1859 |
| 0.2351 | 0.6 | 1600 | 0.1901 | 0.1772 |
| 0.2269 | 0.75 | 2000 | 0.1968 | 0.1783 |
| 0.2284 | 0.9 | 2400 | 0.1873 | 0.1771 |
| 0.2014 | 1.06 | 2800 | 0.1840 | 0.1696 |
| 0.1988 | 1.21 | 3200 | 0.1904 | 0.1730 |
| 0.1919 | 1.36 | 3600 | 0.1827 | 0.1630 |
| 0.1919 | 1.51 | 4000 | 0.1788 | 0.1629 |
| 0.1817 | 1.66 | 4400 | 0.1755 | 0.1558 |
| 0.1812 | 1.81 | 4800 | 0.1795 | 0.1638 |
| 0.1808 | 1.96 | 5200 | 0.1762 | 0.1603 |
| 0.1625 | 2.11 | 5600 | 0.1721 | 0.1557 |
| 0.1477 | 2.26 | 6000 | 0.1735 | 0.1504 |
| 0.1508 | 2.41 | 6400 | 0.1708 | 0.1478 |
| 0.157 | 2.56 | 6800 | 0.1644 | 0.1466 |
| 0.1491 | 2.71 | 7200 | 0.1638 | 0.1445 |
| 0.1458 | 2.86 | 7600 | 0.1582 | 0.1426 |
| 0.1387 | 3.02 | 8000 | 0.1607 | 0.1376 |
| 0.1269 | 3.17 | 8400 | 0.1559 | 0.1364 |
| 0.1172 | 3.32 | 8800 | 0.1521 | 0.1335 |
| 0.1203 | 3.47 | 9200 | 0.1534 | 0.1330 |
| 0.1177 | 3.62 | 9600 | 0.1485 | 0.1304 |
| 0.1167 | 3.77 | 10000 | 0.1498 | 0.1302 |
| 0.1194 | 3.92 | 10400 | 0.1463 | 0.1287 |
| 0.1053 | 4.07 | 10800 | 0.1483 | 0.1282 |
| 0.098 | 4.22 | 11200 | 0.1498 | 0.1267 |
| 0.0958 | 4.37 | 11600 | 0.1461 | 0.1233 |
| 0.0946 | 4.52 | 12000 | 0.1444 | 0.1218 |
| 0.094 | 4.67 | 12400 | 0.1434 | 0.1206 |
| 0.0932 | 4.82 | 12800 | 0.1424 | 0.1206 |
| 0.0912 | 4.98 | 13200 | 0.1431 | 0.1197 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
ji-xin/bert_base-QNLI-two_stage | eede6aefdd70a390295497311425d29d025e5576 | 2020-07-08T14:53:19.000Z | [
"pytorch",
"transformers"
] | null | false | ji-xin | null | ji-xin/bert_base-QNLI-two_stage | 4 | null | transformers | 18,733 | Entry not found |
ji-xin/bert_base-SST2-two_stage | eb03428e6460e612afa52ccfc22cfc15056f527b | 2020-07-08T14:54:44.000Z | [
"pytorch",
"transformers"
] | null | false | ji-xin | null | ji-xin/bert_base-SST2-two_stage | 4 | null | transformers | 18,734 | Entry not found |
ji-xin/roberta_base-QNLI-two_stage | 035feaba4ee7f396738ca644077beb5a5a4694cc | 2020-07-08T15:06:38.000Z | [
"pytorch",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | ji-xin | null | ji-xin/roberta_base-QNLI-two_stage | 4 | null | transformers | 18,735 | Entry not found |
jinlmsft/t5-large-slots | fa6c747d8f0588811697ba5556baf57551878595 | 2022-02-08T04:01:53.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | jinlmsft | null | jinlmsft/t5-large-slots | 4 | null | transformers | 18,736 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-large-slots
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-large-slots
This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0889
- Acc: 0.76
- True Num: 11167
- Num: 14748
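
Since the card does not document the expected input or output format, the following is only a speculative sketch: the prompt is a guessed dialogue-style utterance, not the author's documented interface:

```python
# Speculative seq2seq sketch; the input format for slot extraction is an assumption.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("jinlmsft/t5-large-slots")
model = AutoModelForSeq2SeqLM.from_pretrained("jinlmsft/t5-large-slots")

inputs = tokenizer("book a table for two at 7 pm tomorrow", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```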
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc | True Num | Num |
|:-------------:|:-----:|:-----:|:---------------:|:----:|:--------:|:-----:|
| 0.3539 | 0.56 | 1000 | 0.2669 | 0.56 | 8264 | 14748 |
| 0.2523 | 1.13 | 2000 | 0.2031 | 0.56 | 8317 | 14748 |
| 0.2003 | 1.69 | 3000 | 0.1498 | 0.58 | 8496 | 14748 |
| 0.1609 | 2.25 | 4000 | 0.1284 | 0.58 | 8612 | 14748 |
| 0.1431 | 2.82 | 5000 | 0.1119 | 0.59 | 8675 | 14748 |
| 0.1236 | 3.38 | 6000 | 0.1054 | 0.59 | 8737 | 14748 |
| 0.1172 | 3.95 | 7000 | 0.0981 | 0.59 | 8773 | 14748 |
| 0.1027 | 4.51 | 8000 | 0.0955 | 0.6 | 8787 | 14748 |
| 0.0968 | 5.07 | 9000 | 0.0931 | 0.6 | 8807 | 14748 |
| 0.0911 | 5.64 | 10000 | 0.0895 | 0.6 | 8787 | 14748 |
| 0.0852 | 6.2 | 11000 | 0.0912 | 0.6 | 8840 | 14748 |
| 0.0823 | 6.76 | 12000 | 0.0880 | 0.6 | 8846 | 14748 |
| 0.0768 | 7.33 | 13000 | 0.0915 | 0.6 | 8879 | 14748 |
| 0.0758 | 7.89 | 14000 | 0.0892 | 0.6 | 8853 | 14748 |
| 0.0708 | 8.46 | 15000 | 0.0885 | 0.6 | 8884 | 14748 |
| 0.0701 | 9.02 | 16000 | 0.0884 | 0.6 | 8915 | 14748 |
| 0.0685 | 9.58 | 17000 | 0.0884 | 0.6 | 8921 | 14748 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
jinmang2/beit-large-patch16-224-dapt-facemask | f15fe89ed7ffb86fc464f59196b9da9361fbb149 | 2021-09-02T04:53:40.000Z | [
"pytorch",
"beit",
"transformers"
] | null | false | jinmang2 | null | jinmang2/beit-large-patch16-224-dapt-facemask | 4 | null | transformers | 18,737 | Entry not found |
jkgrad/longformer-base-stsb | eca0ccf504bfe616fab38c4b8eb85c48522bcc20 | 2021-02-04T07:57:06.000Z | [
"pytorch",
"longformer",
"text-classification",
"transformers"
] | text-classification | false | jkgrad | null | jkgrad/longformer-base-stsb | 4 | null | transformers | 18,738 | Entry not found |
jky594176/BART1_GRU | 8b7b8684d4ac3d26a86cc833526a081cb9ba7d0e | 2021-05-30T12:59:07.000Z | [
"pytorch",
"bart",
"text-generation",
"transformers"
] | text-generation | false | jky594176 | null | jky594176/BART1_GRU | 4 | null | transformers | 18,739 | Entry not found |
jky594176/recipe_BART1_NN | 97a2dc4bc5933fa48d31587f0c04fae972bce1bf | 2021-05-30T15:16:55.000Z | [
"pytorch",
"bart",
"text-generation",
"transformers"
] | text-generation | false | jky594176 | null | jky594176/recipe_BART1_NN | 4 | null | transformers | 18,740 | Entry not found |
joaomiguel26/xlm-roberta-10-final | 04602dc8525d44e10e8ac0654e9fb292b344218c | 2021-12-06T16:26:38.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | joaomiguel26 | null | joaomiguel26/xlm-roberta-10-final | 4 | null | transformers | 18,741 | Entry not found |
joelito/gbert-base-ler | 8a452000984a742a439839134877adebc83f24bc | 2021-05-19T20:51:41.000Z | [
"pytorch",
"tf",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | joelito | null | joelito/gbert-base-ler | 4 | null | transformers | 18,742 | # gbert-base-ler
Task: ler
Base Model: deepset/gbert-base
Trained for 3 epochs
Batch-size: 6
Seed: 42
Test F1-Score: 0.956 |
jpabbuehl/distilbert-base-uncased-finetuned-cola | c78c7d22e2bac73bf44ca9d39bb251c8ba98ed0d | 2021-11-25T08:49:51.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | jpabbuehl | null | jpabbuehl/distilbert-base-uncased-finetuned-cola | 4 | null | transformers | 18,743 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5229586822934302
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7588
- Matthews Correlation: 0.5230
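
A minimal sketch (not in the original card) of using the checkpoint as a grammatical-acceptability classifier; CoLA labels are binary and, unless renamed in the config, surface as `LABEL_0`/`LABEL_1`:

```python
# Acceptability-classification sketch; both example sentences are invented.
from transformers import pipeline

cola = pipeline(
    "text-classification",
    model="jpabbuehl/distilbert-base-uncased-finetuned-cola",
)
print(cola(["The book was read by the girl.", "Book girl the read."]))
```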
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5261 | 1.0 | 535 | 0.5125 | 0.4124 |
| 0.3502 | 2.0 | 1070 | 0.5439 | 0.5076 |
| 0.2378 | 3.0 | 1605 | 0.6629 | 0.4946 |
| 0.1809 | 4.0 | 2140 | 0.7588 | 0.5230 |
| 0.1309 | 5.0 | 2675 | 0.8901 | 0.5056 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
justin871030/bert-base-uncased-goemotions-ekman | 96e2b8198a8936856c52501aebe40fdbd98494d3 | 2022-01-08T09:52:51.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | justin871030 | null | justin871030/bert-base-uncased-goemotions-ekman | 4 | null | transformers | 18,744 | Entry not found |
kaedefuto/chat_bot | 25bda482eb9d5f0a2bf68e13e1f0332a564b8ef9 | 2021-09-07T14:25:01.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | kaedefuto | null | kaedefuto/chat_bot | 4 | null | transformers | 18,745 | Entry not found |
kapilchauhan/distilbert-base-uncased-finetuned-cola | 7ebbc5820d83445dd5e59ae8f6db08bf1d2cb24d | 2022-02-24T12:29:36.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | kapilchauhan | null | kapilchauhan/distilbert-base-uncased-finetuned-cola | 4 | null | transformers | 18,746 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5135743708561838
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7696
- Matthews Correlation: 0.5136
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5284 | 1.0 | 535 | 0.4948 | 0.4093 |
| 0.3529 | 2.0 | 1070 | 0.5135 | 0.4942 |
| 0.2417 | 3.0 | 1605 | 0.6303 | 0.5083 |
| 0.1818 | 4.0 | 2140 | 0.7696 | 0.5136 |
| 0.1302 | 5.0 | 2675 | 0.8774 | 0.5123 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
kloon99/KML_Software_License_v1 | 6cb4c613c557b5808aa92acd8339e8356bb4dc56 | 2021-09-26T10:44:14.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | kloon99 | null | kloon99/KML_Software_License_v1 | 4 | null | transformers | 18,747 | {'C0': 'audit_rights',
'C1': 'licensee_indemnity',
'C2': 'licensor_indemnity',
'C3': 'license_grant',
'C4': 'eula_others',
'C5': 'licensee_infringement_indemnity',
'C6': 'licensor_exemption_liability',
'C7': 'licensor_limit_liabilty',
'C8': 'software_warranty'} |
koala/bert-base-german-dbmdz-uncased-de | d80f7a22934822eb16547516b74a0615f63f2bdf | 2021-12-10T09:30:47.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | koala | null | koala/bert-base-german-dbmdz-uncased-de | 4 | null | transformers | 18,748 | Entry not found |
korca/bae-roberta-base-boolq | aa2f1e5cf59c3fda7f3881ba60e2f0d28cf5f307 | 2022-02-01T07:29:15.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | korca | null | korca/bae-roberta-base-boolq | 4 | null | transformers | 18,749 | Entry not found |
korca/bae-roberta-base-mrpc | 5b978bac6dd3945216badf0b4d74fb55ee6797bb | 2022-02-02T04:46:58.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | korca | null | korca/bae-roberta-base-mrpc | 4 | null | transformers | 18,750 | Entry not found |
korca/bae-roberta-base-rte-5 | b6df5e2d606c598b513a37fd3ac82bd0d8b9a1d5 | 2022-02-04T16:19:11.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | korca | null | korca/bae-roberta-base-rte-5 | 4 | null | transformers | 18,751 | Entry not found |
korca/bae-roberta-base-rte | 3d947ed4420de7e412aa59f0a463e4bba4ae481d | 2022-02-02T04:53:59.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | korca | null | korca/bae-roberta-base-rte | 4 | null | transformers | 18,752 | Entry not found |
korca/bert-base-mnli | 28eb1602820974680f57f59316368794b96db944 | 2021-12-06T07:12:40.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | korca | null | korca/bert-base-mnli | 4 | null | transformers | 18,753 | Entry not found |
korca/textfooler-roberta-base-sst2 | e9782ae19318c479a57bc66d80279f5d936ca476 | 2022-01-31T15:38:40.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | korca | null | korca/textfooler-roberta-base-sst2 | 4 | null | transformers | 18,754 | Entry not found |
krevas/finance-koelectra-small-generator | 780b58509cd7a1b674620793b8b4a5489581f098 | 2020-12-11T21:48:37.000Z | [
"pytorch",
"electra",
"fill-mask",
"ko",
"transformers",
"autotrain_compatible"
] | fill-mask | false | krevas | null | krevas/finance-koelectra-small-generator | 4 | null | transformers | 18,755 | ---
language: ko
---
# 📈 Financial Korean ELECTRA model
Pretrained ELECTRA Language Model for Korean (`finance-koelectra-small-generator`)
> ELECTRA is a new method for self-supervised language representation learning. It can be used to
> pre-train transformer networks using relatively little compute. ELECTRA models are trained to
> distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to
> the discriminator of a GAN.
More details about ELECTRA can be found in the [ICLR paper](https://openreview.net/forum?id=r1xMH1BtvB)
or in the [official ELECTRA repository](https://github.com/google-research/electra) on GitHub.
## Stats
The current version of the model is trained on financial news data from Naver News.
The final training corpus has a size of 25GB and 2.3B tokens.
This model was trained as a cased model on a TITAN RTX for 500k steps.
## Usage
```python
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="krevas/finance-koelectra-small-generator",
tokenizer="krevas/finance-koelectra-small-generator"
)
print(fill_mask(f"내일 해당 종목이 대폭 {fill_mask.tokenizer.mask_token}할 것이다."))
```
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/krevas).
|
LACAI/roberta-large-dialog-narrative | 5f8a7709a2dd59d52f9bc90a615ecce776fc13fa | 2021-11-08T22:20:03.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | fill-mask | false | LACAI | null | LACAI/roberta-large-dialog-narrative | 4 | 1 | transformers | 18,756 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: output_mlm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output_mlm
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2024
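
A hedged usage sketch (not from the original card) that queries the fine-tuned masked-LM head for the top predictions at a `<mask>` position; the example sentence is invented:

```python
# Top-5 masked-token predictions; the prompt sentence is a made-up example.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "LACAI/roberta-large-dialog-narrative"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

text = f"She opened the door and {tokenizer.mask_token} outside."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_pos].topk(5, dim=-1).indices[0]
print([tokenizer.decode(i).strip() for i in top_ids])
```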
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 1.5832 | 0.19 | 15000 | 1.4992 |
| 1.5325 | 0.39 | 30000 | 1.4653 |
| 1.4979 | 0.58 | 45000 | 1.4359 |
| 1.4715 | 0.77 | 60000 | 1.4039 |
| 1.4448 | 0.97 | 75000 | 1.3877 |
| 1.4191 | 1.16 | 90000 | 1.3603 |
| 1.3988 | 1.35 | 105000 | 1.3425 |
| 1.3699 | 1.54 | 120000 | 1.3230 |
| 1.3493 | 1.74 | 135000 | 1.3012 |
| 1.3201 | 1.93 | 150000 | 1.2773 |
| 1.2993 | 2.12 | 165000 | 1.2617 |
| 1.2745 | 2.32 | 180000 | 1.2490 |
| 1.2614 | 2.51 | 195000 | 1.2283 |
| 1.2424 | 2.7 | 210000 | 1.2152 |
| 1.2296 | 2.9 | 225000 | 1.2052 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
lagodw/plotly_gpt2_medium | 50408788c0e908d0b53951c2aa873967a973b8a4 | 2021-10-21T15:18:48.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | lagodw | null | lagodw/plotly_gpt2_medium | 4 | null | transformers | 18,757 | Entry not found |
lagodw/redditbot_gpt2_xl | ba5ce2258e334f7e2019980c4b02fc0a425e2c95 | 2021-10-04T18:21:06.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | lagodw | null | lagodw/redditbot_gpt2_xl | 4 | null | transformers | 18,758 | Entry not found |
laurauzcategui/xlm-roberta-base-finetuned-marc-en | 7a0a1355b1fefab378428fa7e0c42a12a66d145d | 2021-10-22T13:20:51.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"dataset:amazon_reviews_multi",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | laurauzcategui | null | laurauzcategui/xlm-roberta-base-finetuned-marc-en | 4 | null | transformers | 18,759 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8945
- Mae: 0.5
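
An illustrative sketch (not in the original card): the checkpoint predicts a rating class for an English Amazon-style review; the review text is invented, and how class ids map to stars depends on the training script:

```python
# Review-rating classification sketch; the review is a made-up example.
from transformers import pipeline

reviews = pipeline(
    "text-classification",
    model="laurauzcategui/xlm-roberta-base-finetuned-marc-en",
)
print(reviews("The product arrived quickly but stopped working after a week."))
```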
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 1.1411 | 1.0 | 235 | 0.9358 | 0.5 |
| 0.9653 | 2.0 | 470 | 0.8945 | 0.5 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
laurievb/distilbert-base-uncased-finetuned-ner | f77fd07ae72f71512a761300e54f604a01b2f076 | 2021-08-16T09:37:45.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | laurievb | null | laurievb/distilbert-base-uncased-finetuned-ner | 4 | null | transformers | 18,760 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model_index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metric:
name: Accuracy
type: accuracy
value: 0.9841453921553053
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0593
- Precision: 0.9257
- Recall: 0.9377
- F1: 0.9316
- Accuracy: 0.9841
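
A minimal sketch (not from the original card) that runs the checkpoint through the token-classification pipeline with simple aggregation; entity types follow the CoNLL-2003 tag set used for fine-tuning:

```python
# NER sketch with grouped entities; the input sentence is a made-up example.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="laurievb/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```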
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2402 | 1.0 | 878 | 0.0699 | 0.9129 | 0.9195 | 0.9162 | 0.9810 |
| 0.0524 | 2.0 | 1756 | 0.0589 | 0.9220 | 0.9385 | 0.9301 | 0.9836 |
| 0.0296 | 3.0 | 2634 | 0.0593 | 0.9257 | 0.9377 | 0.9316 | 0.9841 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
leeyujin/distilbert-base-uncased-finetuned-cola | fcd4bf290c37b516aa00ac7c42e2996726f74b0a | 2022-02-07T07:08:04.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | leeyujin | null | leeyujin/distilbert-base-uncased-finetuned-cola | 4 | null | transformers | 18,761 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5062132225102124
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5608
- Matthews Correlation: 0.5062
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 134 | 0.4851 | 0.4301 |
| No log | 2.0 | 268 | 0.4619 | 0.4891 |
| No log | 3.0 | 402 | 0.5447 | 0.4965 |
| 0.3828 | 4.0 | 536 | 0.5608 | 0.5062 |
| 0.3828 | 5.0 | 670 | 0.5702 | 0.5029 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.8.1+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
leonadase/distilbert-base-uncased-finetuned-ner | 356dfbc1bfd5c8a33e0de951dcede9508d35472a | 2022-02-14T13:51:21.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | leonadase | null | leonadase/distilbert-base-uncased-finetuned-ner | 4 | null | transformers | 18,762 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9210439378923027
- name: Recall
type: recall
value: 0.9356751314464705
- name: F1
type: f1
value: 0.9283018867924528
- name: Accuracy
type: accuracy
value: 0.983176322938345
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0611
- Precision: 0.9210
- Recall: 0.9357
- F1: 0.9283
- Accuracy: 0.9832
## Model description
More information needed
## Intended uses & limitations
More information needed
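As a hedged usage sketch while fuller documentation is pending (`aggregation_strategy="simple"` merges word pieces into entity spans; the example sentence is illustrative only):
```python
from transformers import pipeline

# Minimal sketch: CoNLL-2003-style named entity recognition with this fine-tuned checkpoint.
ner = pipeline(
    "token-classification",
    model="leonadase/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```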
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2341 | 1.0 | 878 | 0.0734 | 0.9118 | 0.9206 | 0.9162 | 0.9799 |
| 0.0546 | 2.0 | 1756 | 0.0591 | 0.9210 | 0.9350 | 0.9279 | 0.9829 |
| 0.0297 | 3.0 | 2634 | 0.0611 | 0.9210 | 0.9357 | 0.9283 | 0.9832 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
lewtun/MiniLM-L12-H384-uncased-finetuned-imdb | 95050a6c9141fd2d6ebdf76541ea3836b558ba6d | 2021-09-28T18:59:38.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | fill-mask | false | lewtun | null | lewtun/MiniLM-L12-H384-uncased-finetuned-imdb | 4 | null | transformers | 18,763 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: MiniLM-L12-H384-uncased-finetuned-imdb
results:
- task:
name: Masked Language Modeling
type: fill-mask
dataset:
name: imdb
type: imdb
args: plain_text
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MiniLM-L12-H384-uncased-finetuned-imdb
This model is a fine-tuned version of [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9328
## Model description
More information needed
## Intended uses & limitations
More information needed
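A minimal, hedged usage sketch for the domain-adapted masked language model (the example sentence is illustrative only):
```python
from transformers import pipeline

# Minimal sketch: movie-review-flavoured mask filling with this fine-tuned checkpoint.
fill_mask = pipeline(
    "fill-mask",
    model="lewtun/MiniLM-L12-H384-uncased-finetuned-imdb",
)
print(fill_mask("This movie was an absolute [MASK]."))
```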
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.2464 | 1.0 | 391 | 4.2951 |
| 4.2302 | 2.0 | 782 | 4.0023 |
| 4.0726 | 3.0 | 1173 | 3.9328 |
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.1+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
lewtun/xlm-roberta-base-finetuned-marc-500-samples | 2358f0bf0043c9424a95ec14b81ec12d652b88a4 | 2021-10-12T15:12:51.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | lewtun | null | lewtun/xlm-roberta-base-finetuned-marc-500-samples | 4 | null | transformers | 18,764 | ---
tags:
- text-classification
--- |
lewtun/xlm-roberta-base-finetuned-marc-en-dummy | b1f6c54f687cb1d3e241ca66509a8ba5448d59a4 | 2021-10-21T20:03:13.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"dataset:amazon_reviews_multi",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | lewtun | null | lewtun/xlm-roberta-base-finetuned-marc-en-dummy | 4 | null | transformers | 18,765 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc-en-dummy
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-en-dummy
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8931
- Mae: 0.4634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1258 | 1.0 | 235 | 0.9538 | 0.4390 |
| 0.9445 | 2.0 | 470 | 0.8931 | 0.4634 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
lgris/sew-tiny-portuguese-cv8 | 05d9a87f886f0eac586785a3ffd071e4d1cfe802 | 2022-03-23T18:29:00.000Z | [
"pytorch",
"tensorboard",
"sew",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | lgris | null | lgris/sew-tiny-portuguese-cv8 | 4 | null | transformers | 18,766 | ---
language:
- pt
license: apache-2.0
tags:
- generated_from_trainer
- hf-asr-leaderboard
- pt
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: sew-tiny-portuguese-cv8
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: pt
metrics:
- name: Test WER
type: wer
value: 33.71
- name: Test CER
type: cer
value: 10.69
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: sv
metrics:
- name: Test WER
type: wer
value: 52.79
- name: Test CER
type: cer
value: 20.98
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: pt
metrics:
- name: Test WER
type: wer
value: 53.18
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: pt
metrics:
- name: Test WER
type: wer
value: 55.23
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sew-tiny-portuguese-cv8
This model is a fine-tuned version of [lgris/sew-tiny-pt](https://huggingface.co/lgris/sew-tiny-pt) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4082
- Wer: 0.3053
## Model description
More information needed
## Intended uses & limitations
More information needed
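A minimal, hedged usage sketch, assuming 16 kHz mono Portuguese audio (`sample.wav` is a placeholder path; `ffmpeg` is required for on-the-fly decoding):
```python
from transformers import pipeline

# Minimal sketch: greedy CTC transcription with this fine-tuned SEW checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="lgris/sew-tiny-portuguese-cv8",
)
print(asr("sample.wav"))
```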
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 40000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| No log | 1.93 | 1000 | 2.9134 | 0.9767 |
| 2.9224 | 3.86 | 2000 | 2.8405 | 0.9789 |
| 2.9224 | 5.79 | 3000 | 2.8094 | 0.9800 |
| 2.8531 | 7.72 | 4000 | 2.7439 | 0.9891 |
| 2.8531 | 9.65 | 5000 | 2.7057 | 1.0159 |
| 2.7721 | 11.58 | 6000 | 2.7235 | 1.0709 |
| 2.7721 | 13.51 | 7000 | 2.5931 | 1.1035 |
| 2.6566 | 15.44 | 8000 | 2.2171 | 0.9884 |
| 2.6566 | 17.37 | 9000 | 1.2399 | 0.8081 |
| 1.9558 | 19.31 | 10000 | 0.9045 | 0.6353 |
| 1.9558 | 21.24 | 11000 | 0.7705 | 0.5533 |
| 1.4987 | 23.17 | 12000 | 0.7068 | 0.5165 |
| 1.4987 | 25.1 | 13000 | 0.6641 | 0.4718 |
| 1.3811 | 27.03 | 14000 | 0.6043 | 0.4470 |
| 1.3811 | 28.96 | 15000 | 0.5532 | 0.4268 |
| 1.2897 | 30.89 | 16000 | 0.5371 | 0.4101 |
| 1.2897 | 32.82 | 17000 | 0.5924 | 0.4150 |
| 1.225 | 34.75 | 18000 | 0.4949 | 0.3894 |
| 1.225 | 36.68 | 19000 | 0.5591 | 0.4045 |
| 1.193 | 38.61 | 20000 | 0.4927 | 0.3731 |
| 1.193 | 40.54 | 21000 | 0.4922 | 0.3712 |
| 1.1482 | 42.47 | 22000 | 0.4799 | 0.3662 |
| 1.1482 | 44.4 | 23000 | 0.4846 | 0.3648 |
| 1.1201 | 46.33 | 24000 | 0.4770 | 0.3623 |
| 1.1201 | 48.26 | 25000 | 0.4530 | 0.3426 |
| 1.0892 | 50.19 | 26000 | 0.4523 | 0.3527 |
| 1.0892 | 52.12 | 27000 | 0.4573 | 0.3443 |
| 1.0583 | 54.05 | 28000 | 0.4488 | 0.3353 |
| 1.0583 | 55.98 | 29000 | 0.4295 | 0.3285 |
| 1.0319 | 57.92 | 30000 | 0.4321 | 0.3220 |
| 1.0319 | 59.85 | 31000 | 0.4244 | 0.3236 |
| 1.0076 | 61.78 | 32000 | 0.4197 | 0.3201 |
| 1.0076 | 63.71 | 33000 | 0.4230 | 0.3208 |
| 0.9851 | 65.64 | 34000 | 0.4090 | 0.3127 |
| 0.9851 | 67.57 | 35000 | 0.4088 | 0.3133 |
| 0.9695 | 69.5 | 36000 | 0.4123 | 0.3088 |
| 0.9695 | 71.43 | 37000 | 0.4017 | 0.3090 |
| 0.9514 | 73.36 | 38000 | 0.4184 | 0.3086 |
| 0.9514 | 75.29 | 39000 | 0.4075 | 0.3043 |
| 0.944 | 77.22 | 40000 | 0.4082 | 0.3053 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
lgris/wav2vec2-xls-r-1b-cv8 | a4316d1d5945e66ca098f22159f22ec6a7f870ca | 2022-03-23T18:29:59.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | lgris | null | lgris/wav2vec2-xls-r-1b-cv8 | 4 | null | transformers | 18,767 | ---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- pt
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-xls-r-1b-cv8
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: pt
metrics:
- name: Test WER
type: wer
value: 17.7
- name: Test CER
type: cer
value: 5.21
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: sv
metrics:
- name: Test WER
type: wer
value: 45.68
- name: Test CER
type: cer
value: 18.67
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: pt
metrics:
- name: Test WER
type: wer
value: 45.29
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: pt
metrics:
- name: Test WER
type: wer
value: 48.03
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-1b-cv8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - PT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2007
- Wer: 0.1838
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 30.0
- mixed_precision_training: Native AMP
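For reference, a hedged sketch of how the settings above map onto `TrainingArguments` (the output directory is a placeholder; the original training script is not part of this card):
```python
from transformers import TrainingArguments

# Hedged mapping of the hyperparameters listed above; not the original training script.
training_args = TrainingArguments(
    output_dir="wav2vec2-xls-r-1b-cv8",  # placeholder
    learning_rate=7.5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,       # total train batch size of 16
    warmup_steps=2000,
    num_train_epochs=30.0,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                           # mixed_precision_training: Native AMP
)
```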
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 2.1172 | 0.32 | 500 | 1.2852 | 0.9783 |
| 1.4152 | 0.64 | 1000 | 0.6434 | 0.6105 |
| 1.4342 | 0.96 | 1500 | 0.4844 | 0.3989 |
| 1.4657 | 1.29 | 2000 | 0.5080 | 0.4490 |
| 1.4961 | 1.61 | 2500 | 0.4764 | 0.4264 |
| 1.4515 | 1.93 | 3000 | 0.4519 | 0.4068 |
| 1.3924 | 2.25 | 3500 | 0.4472 | 0.4132 |
| 1.4524 | 2.57 | 4000 | 0.4455 | 0.3939 |
| 1.4328 | 2.89 | 4500 | 0.4369 | 0.4069 |
| 1.3456 | 3.22 | 5000 | 0.4234 | 0.3774 |
| 1.3725 | 3.54 | 5500 | 0.4387 | 0.3789 |
| 1.3812 | 3.86 | 6000 | 0.4298 | 0.3825 |
| 1.3282 | 4.18 | 6500 | 0.4025 | 0.3703 |
| 1.3326 | 4.5 | 7000 | 0.3917 | 0.3502 |
| 1.3028 | 4.82 | 7500 | 0.3889 | 0.3582 |
| 1.293 | 5.14 | 8000 | 0.3859 | 0.3496 |
| 1.321 | 5.47 | 8500 | 0.3875 | 0.3576 |
| 1.3165 | 5.79 | 9000 | 0.3927 | 0.3589 |
| 1.2701 | 6.11 | 9500 | 0.4058 | 0.3621 |
| 1.2718 | 6.43 | 10000 | 0.4211 | 0.3916 |
| 1.2683 | 6.75 | 10500 | 0.3968 | 0.3620 |
| 1.2643 | 7.07 | 11000 | 0.4128 | 0.3848 |
| 1.2485 | 7.4 | 11500 | 0.3849 | 0.3727 |
| 1.2608 | 7.72 | 12000 | 0.3770 | 0.3474 |
| 1.2388 | 8.04 | 12500 | 0.3774 | 0.3574 |
| 1.2524 | 8.36 | 13000 | 0.3789 | 0.3550 |
| 1.2458 | 8.68 | 13500 | 0.3770 | 0.3410 |
| 1.2505 | 9.0 | 14000 | 0.3638 | 0.3403 |
| 1.2254 | 9.32 | 14500 | 0.3770 | 0.3509 |
| 1.2459 | 9.65 | 15000 | 0.3592 | 0.3349 |
| 1.2049 | 9.97 | 15500 | 0.3600 | 0.3428 |
| 1.2097 | 10.29 | 16000 | 0.3626 | 0.3347 |
| 1.1988 | 10.61 | 16500 | 0.3740 | 0.3269 |
| 1.1671 | 10.93 | 17000 | 0.3548 | 0.3245 |
| 1.1532 | 11.25 | 17500 | 0.3394 | 0.3140 |
| 1.1459 | 11.58 | 18000 | 0.3349 | 0.3156 |
| 1.1511 | 11.9 | 18500 | 0.3272 | 0.3110 |
| 1.1465 | 12.22 | 19000 | 0.3348 | 0.3084 |
| 1.1426 | 12.54 | 19500 | 0.3193 | 0.3027 |
| 1.1278 | 12.86 | 20000 | 0.3318 | 0.3021 |
| 1.149 | 13.18 | 20500 | 0.3169 | 0.2947 |
| 1.114 | 13.5 | 21000 | 0.3224 | 0.2986 |
| 1.1249 | 13.83 | 21500 | 0.3227 | 0.2921 |
| 1.0968 | 14.15 | 22000 | 0.3033 | 0.2878 |
| 1.0851 | 14.47 | 22500 | 0.2996 | 0.2863 |
| 1.0985 | 14.79 | 23000 | 0.3011 | 0.2843 |
| 1.0808 | 15.11 | 23500 | 0.2932 | 0.2759 |
| 1.069 | 15.43 | 24000 | 0.2919 | 0.2750 |
| 1.0602 | 15.76 | 24500 | 0.2959 | 0.2713 |
| 1.0369 | 16.08 | 25000 | 0.2931 | 0.2754 |
| 1.0573 | 16.4 | 25500 | 0.2920 | 0.2722 |
| 1.051 | 16.72 | 26000 | 0.2855 | 0.2632 |
| 1.0279 | 17.04 | 26500 | 0.2850 | 0.2649 |
| 1.0496 | 17.36 | 27000 | 0.2817 | 0.2585 |
| 1.0516 | 17.68 | 27500 | 0.2961 | 0.2635 |
| 1.0244 | 18.01 | 28000 | 0.2781 | 0.2589 |
| 1.0099 | 18.33 | 28500 | 0.2783 | 0.2565 |
| 1.0016 | 18.65 | 29000 | 0.2719 | 0.2537 |
| 1.0157 | 18.97 | 29500 | 0.2621 | 0.2449 |
| 0.9572 | 19.29 | 30000 | 0.2582 | 0.2427 |
| 0.9802 | 19.61 | 30500 | 0.2707 | 0.2468 |
| 0.9577 | 19.94 | 31000 | 0.2563 | 0.2389 |
| 0.9562 | 20.26 | 31500 | 0.2592 | 0.2382 |
| 0.962 | 20.58 | 32000 | 0.2539 | 0.2341 |
| 0.9541 | 20.9 | 32500 | 0.2505 | 0.2288 |
| 0.9587 | 21.22 | 33000 | 0.2486 | 0.2302 |
| 0.9146 | 21.54 | 33500 | 0.2461 | 0.2269 |
| 0.9215 | 21.86 | 34000 | 0.2387 | 0.2228 |
| 0.9105 | 22.19 | 34500 | 0.2405 | 0.2222 |
| 0.8949 | 22.51 | 35000 | 0.2316 | 0.2191 |
| 0.9153 | 22.83 | 35500 | 0.2358 | 0.2180 |
| 0.8907 | 23.15 | 36000 | 0.2369 | 0.2168 |
| 0.8973 | 23.47 | 36500 | 0.2323 | 0.2120 |
| 0.8878 | 23.79 | 37000 | 0.2293 | 0.2104 |
| 0.8818 | 24.12 | 37500 | 0.2302 | 0.2132 |
| 0.8919 | 24.44 | 38000 | 0.2262 | 0.2083 |
| 0.8473 | 24.76 | 38500 | 0.2257 | 0.2040 |
| 0.8516 | 25.08 | 39000 | 0.2246 | 0.2031 |
| 0.8451 | 25.4 | 39500 | 0.2198 | 0.2000 |
| 0.8288 | 25.72 | 40000 | 0.2199 | 0.1990 |
| 0.8465 | 26.05 | 40500 | 0.2165 | 0.1972 |
| 0.8305 | 26.37 | 41000 | 0.2128 | 0.1957 |
| 0.8202 | 26.69 | 41500 | 0.2127 | 0.1937 |
| 0.8223 | 27.01 | 42000 | 0.2100 | 0.1934 |
| 0.8322 | 27.33 | 42500 | 0.2076 | 0.1905 |
| 0.8139 | 27.65 | 43000 | 0.2054 | 0.1880 |
| 0.8299 | 27.97 | 43500 | 0.2026 | 0.1868 |
| 0.7937 | 28.3 | 44000 | 0.2045 | 0.1872 |
| 0.7972 | 28.62 | 44500 | 0.2025 | 0.1861 |
| 0.809 | 28.94 | 45000 | 0.2026 | 0.1858 |
| 0.813 | 29.26 | 45500 | 0.2013 | 0.1838 |
| 0.7718 | 29.58 | 46000 | 0.2010 | 0.1837 |
| 0.7929 | 29.9 | 46500 | 0.2008 | 0.1840 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3.dev0
- Tokenizers 0.11.0
|
liaad/srl-pt_xlmr-base | 07030ec73ebc895a16423d30f8b34355e27e0861 | 2021-09-22T08:56:34.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"multilingual",
"pt",
"dataset:PropBank.Br",
"arxiv:2101.01213",
"transformers",
"xlm-roberta-base",
"semantic role labeling",
"finetuned",
"license:apache-2.0"
] | feature-extraction | false | liaad | null | liaad/srl-pt_xlmr-base | 4 | null | transformers | 18,768 | ---
language:
- multilingual
- pt
tags:
- xlm-roberta-base
- semantic role labeling
- finetuned
license: apache-2.0
datasets:
- PropBank.Br
metrics:
- F1 Measure
---
# XLM-R base fine-tuned on Portuguese semantic role labeling
## Model description
This model is the [`xlm-roberta-base`](https://huggingface.co/xlm-roberta-base) fine-tuned on Portuguese semantic role labeling data. It is part of a project that resulted in the following models:
* [liaad/srl-pt_bertimbau-base](https://huggingface.co/liaad/srl-pt_bertimbau-base)
* [liaad/srl-pt_bertimbau-large](https://huggingface.co/liaad/srl-pt_bertimbau-large)
* [liaad/srl-pt_xlmr-base](https://huggingface.co/liaad/srl-pt_xlmr-base)
* [liaad/srl-pt_xlmr-large](https://huggingface.co/liaad/srl-pt_xlmr-large)
* [liaad/srl-pt_mbert-base](https://huggingface.co/liaad/srl-pt_mbert-base)
* [liaad/srl-en_xlmr-base](https://huggingface.co/liaad/srl-en_xlmr-base)
* [liaad/srl-en_xlmr-large](https://huggingface.co/liaad/srl-en_xlmr-large)
* [liaad/srl-en_mbert-base](https://huggingface.co/liaad/srl-en_mbert-base)
* [liaad/srl-enpt_xlmr-base](https://huggingface.co/liaad/srl-enpt_xlmr-base)
* [liaad/srl-enpt_xlmr-large](https://huggingface.co/liaad/srl-enpt_xlmr-large)
* [liaad/srl-enpt_mbert-base](https://huggingface.co/liaad/srl-enpt_mbert-base)
* [liaad/ud_srl-pt_bertimbau-large](https://huggingface.co/liaad/ud_srl-pt_bertimbau-large)
* [liaad/ud_srl-pt_xlmr-large](https://huggingface.co/liaad/ud_srl-pt_xlmr-large)
* [liaad/ud_srl-enpt_xlmr-large](https://huggingface.co/liaad/ud_srl-enpt_xlmr-large)
For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Intended uses & limitations
#### How to use
To use the transformers portion of this model:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("liaad/srl-pt_xlmr-base")
model = AutoModel.from_pretrained("liaad/srl-pt_xlmr-base")
```
To use the full SRL model (transformers portion + a decoding layer), refer to the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
#### Limitations and bias
- This model does not include a Tensorflow version. This is because the "type_vocab_size" in this model was changed (from 1 to 2) and, therefore, it cannot be easily converted to Tensorflow.
## Training procedure
The model was trained on the PropBank.Br datasets, using 10-fold Cross-Validation. The 10 resulting models were tested on the folds as well as on a smaller opinion dataset "Buscapé". For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Eval results
| Model Name | F<sub>1</sub> CV PropBank.Br (in domain) | F<sub>1</sub> Buscapé (out of domain) |
| --------------- | ------ | ----- |
| `srl-pt_bertimbau-base` | 76.30 | 73.33 |
| `srl-pt_bertimbau-large` | 77.42 | 74.85 |
| `srl-pt_xlmr-base` | 75.22 | 72.82 |
| `srl-pt_xlmr-large` | 77.59 | 73.84 |
| `srl-pt_mbert-base` | 72.76 | 66.89 |
| `srl-en_xlmr-base` | 66.59 | 65.24 |
| `srl-en_xlmr-large` | 67.60 | 64.94 |
| `srl-en_mbert-base` | 63.07 | 58.56 |
| `srl-enpt_xlmr-base` | 76.50 | 73.74 |
| `srl-enpt_xlmr-large` | **78.22** | 74.55 |
| `srl-enpt_mbert-base` | 74.88 | 69.19 |
| `ud_srl-pt_bertimbau-large` | 77.53 | 74.49 |
| `ud_srl-pt_xlmr-large` | 77.69 | 74.91 |
| `ud_srl-enpt_xlmr-large` | 77.97 | **75.05** |
### BibTeX entry and citation info
```bibtex
@misc{oliveira2021transformers,
title={Transformers and Transfer Learning for Improving Portuguese Semantic Role Labeling},
author={Sofia Oliveira and Daniel Loureiro and Alípio Jorge},
year={2021},
eprint={2101.01213},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
liangtaiwan/bart-base-correct-mask-embedding | 9b29ac89e717950efc64eb88ac60632ec96fa0da | 2021-09-17T08:45:28.000Z | [
"pytorch",
"bart",
"feature-extraction",
"transformers"
] | feature-extraction | false | liangtaiwan | null | liangtaiwan/bart-base-correct-mask-embedding | 4 | null | transformers | 18,769 | Entry not found |
lighteternal/SSE-TUC-mt-el-en-cased | 65b8555c775ddbe1edecca4f4cf5371a01eb4146 | 2021-03-31T17:26:16.000Z | [
"pytorch",
"fsmt",
"text2text-generation",
"en",
"el",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | lighteternal | null | lighteternal/SSE-TUC-mt-el-en-cased | 4 | null | transformers | 18,770 | ---
language:
- en
- el
tags:
- translation
widget:
- text: "Ο όρος τεχνητή νοημοσύνη αναφέρεται στον κλάδο της πληροφορικής ο οποίος ασχολείται με τη σχεδίαση και την υλοποίηση υπολογιστικών συστημάτων που μιμούνται στοιχεία της ανθρώπινης συμπεριφοράς. "
license: apache-2.0
metrics:
- bleu
---
## Greek to English NMT
## By the Hellenic Army Academy (SSE) and the Technical University of Crete (TUC)
* source languages: el
* target languages: en
* licence: apache-2.0
* dataset: Opus, CCmatrix
* model: transformer(fairseq)
* pre-processing: tokenization + BPE segmentation
* metrics: bleu, chrf
### Model description
Trained using the Fairseq framework, transformer_iwslt_de_en architecture.\\
BPE segmentation (20k codes).\\
Mixed-case model.
### How to use
```
from transformers import FSMTTokenizer, FSMTForConditionalGeneration
mname = "lighteternal/SSE-TUC-mt-el-en-cased"
tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)
text = "Ο όρος τεχνητή νοημοσύνη αναφέρεται στον κλάδο της πληροφορικής ο οποίος ασχολείται με τη σχεδίαση και την υλοποίηση υπολογιστικών συστημάτων που μιμούνται στοιχεία της ανθρώπινης συμπεριφοράς ."
encoded = tokenizer.encode(text, return_tensors='pt')
outputs = model.generate(encoded, num_beams=5, num_return_sequences=5, early_stopping=True)
for i, output in enumerate(outputs):
    i += 1
    print(f"{i}: {output.tolist()}")
    decoded = tokenizer.decode(output, skip_special_tokens=True)
    print(f"{i}: {decoded}")
```
## Training data
Consolidated corpus from Opus and CC-Matrix (~6.6GB in total)
## Eval results
Results on Tatoeba testset (EL-EN):
| BLEU | chrF |
| ------ | ------ |
| 79.3 | 0.795 |
Results on XNLI parallel (EL-EN):
| BLEU | chrF |
| ------ | ------ |
| 66.2 | 0.623 |
### BibTeX entry and citation info
Dimitris Papadopoulos, et al. "PENELOPIE: Enabling Open Information Extraction for the Greek Language through Machine Translation." (2021). Accepted at EACL 2021 SRW
### Acknowledgement
The research work was supported by the Hellenic Foundation for Research and Innovation (HFRI) under the HFRI PhD Fellowship grant (Fellowship Number:50, 2nd call)
|
luffycodes/bb_narataka_roberta_large_nli_bsz_16_bb_bsz_16_nli_lr_3e5_bb_lr_3e5_grad_adam | 13cc06bd18bfaec747f2c8a75976b962171a4c17 | 2021-10-30T02:19:29.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | luffycodes | null | luffycodes/bb_narataka_roberta_large_nli_bsz_16_bb_bsz_16_nli_lr_3e5_bb_lr_3e5_grad_adam | 4 | null | transformers | 18,771 | Entry not found |
luigisbrother/wav2vec2-common_voice-tr-demo-dist | 94410eb6a1cb37888d732aa6c749960571c5aa71 | 2021-10-18T10:12:29.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | luigisbrother | null | luigisbrother/wav2vec2-common_voice-tr-demo-dist | 4 | null | transformers | 18,772 | Entry not found |
lumalik/vent-roberta-emotion | f2f301758031d78f6e6a0796077e9ae033b2f819 | 2021-08-31T10:16:58.000Z | [
"pytorch",
"roberta",
"text-classification",
"arxiv:1901.04856",
"transformers"
] | text-classification | false | lumalik | null | lumalik/vent-roberta-emotion | 4 | 1 | transformers | 18,773 | # Vent-roBERTa-emotion
This is a RoBERTa model pretrained on Twitter data and then fine-tuned for self-labeled emotion classification on the Vent dataset (see https://arxiv.org/abs/1901.04856). The Vent dataset contains 33 million posts, each annotated with one emotion by the user who wrote it. <br/>
The model was trained to recognize 5 emotions ("Affection", "Anger", "Fear", "Happiness", "Sadness") on 7 million posts from the dataset. <br/>
Example of how to use the classifier on single texts: <br/>
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from scipy.special import softmax

tokenizer = AutoTokenizer.from_pretrained("lumalik/vent-roberta-emotion")
model = AutoModelForSequenceClassification.from_pretrained("lumalik/vent-roberta-emotion")
model.eval()

texts = ["You wont believe what happened to me today",
         "You wont believe what happened to me today!",
         "You wont believe what happened to me today...",
         "You wont believe what happened to me today <3",
         "You wont believe what happened to me today :)",
         "You wont believe what happened to me today :("]

for text in texts:
    # Tokenize, run the model and convert the logits to class probabilities
    encoded_text = tokenizer(text, return_tensors="pt")
    output = model(**encoded_text)
    output = softmax(output[0].detach().numpy(), axis=1)
    print("======================")
    print(text)
    print("Affection: {}".format(output[0][0]))
    print("Anger: {}".format(output[0][1]))
    print("Fear: {}".format(output[0][2]))
    print("Happiness: {}".format(output[0][3]))
    print("Sadness: {}".format(output[0][4]))
``` |
lvwerra/bert-base-uncased-issues-128-issues-128 | 78415dd4b82abce0f2ca1e561ce0061ec20d4023 | 2021-10-27T22:51:47.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | lvwerra | null | lvwerra/bert-base-uncased-issues-128-issues-128 | 4 | null | transformers | 18,774 | Entry not found |
lysandre/arxiv | 6449932bb66ad5a8a72e1bd9ade6c365cabc59ef | 2021-05-23T08:44:27.000Z | [
"pytorch",
"jax",
"gpt2",
"transformers"
] | null | false | lysandre | null | lysandre/arxiv | 4 | null | transformers | 18,775 | # ArXiv GPT-2 checkpoint
This is a GPT-2 small checkpoint for PyTorch. It is the official `gpt2-small` checkpoint fine-tuned on arXiv papers from physics fields.
## Training data
This model was trained on a subset of ArXiv papers that were parsed from PDF to txt. The resulting data is made of 130MB of text, mostly from quantum physics (quant-ph) and other physics sub-fields.
|
lysandre/new-dummy-model | ba384e28b28bfc5300885d784fa0d6e8912501f2 | 2021-06-12T07:49:19.000Z | [
"pytorch",
"tf",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | lysandre | null | lysandre/new-dummy-model | 4 | null | transformers | 18,776 | # Dummy model
This is a dummy model. |
lysandre/tests | 5f89bea6acdf9cf23df74bac902820d0a32bc6f4 | 2021-06-17T06:55:09.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"feature-extraction",
"transformers"
] | feature-extraction | false | lysandre | null | lysandre/tests | 4 | null | transformers | 18,777 | Entry not found |
lysandre/tiny-distil | 3d8f72f19066ac3e502ee3be04afe74b8611342f | 2021-06-17T07:47:21.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"transformers"
] | feature-extraction | false | lysandre | null | lysandre/tiny-distil | 4 | null | transformers | 18,778 | Entry not found |
m-lin20/satellite-instrument-roberta-NER | d78f556ddf702099cc95ad7451b222af91309192 | 2021-12-13T07:58:30.000Z | [
"pytorch",
"roberta",
"token-classification",
"pt",
"transformers",
"autotrain_compatible"
] | token-classification | false | m-lin20 | null | m-lin20/satellite-instrument-roberta-NER | 4 | 1 | transformers | 18,779 | ---
language: "pt"
widget:
- text: "Poised for launch in mid-2021, the joint NASA-USGS Landsat 9 mission will continue this important data record. In many respects Landsat 9 is a clone of Landsat-8. The Operational Land Imager-2 (OLI-2) is largely identical to Landsat 8 OLI, providing calibrated imagery covering the solar reflected wavelengths. The Thermal Infrared Sensor-2 (TIRS-2) improves upon Landsat 8 TIRS, addressing known issues including stray light incursion and a malfunction of the instrument scene select mirror. In addition, Landsat 9 adds redundancy to TIRS-2, thus upgrading the instrument to a 5-year design life commensurate with other elements of the mission. Initial performance testing of OLI-2 and TIRS-2 indicate that the instruments are of excellent quality and expected to match or improve on Landsat 8 data quality. "
example_title: "example 1"
- text: "Compared to its predecessor, Jason-3, the two AMR-C radiometer instruments have an external calibration system which enables higher radiometric stability accomplished by moving the secondary mirror between well-defined targets. Sentinel-6 allows continuing the study of the ocean circulation, climate change, and sea-level rise for at least another decade. Besides the external calibration for the AMR heritage radiometer (18.7, 23.8, and 34 GHz channels), the AMR-C contains a high-resolution microwave radiometer (HRMR) with radiometer channels at 90, 130, and 168 GHz. This subsystem allows for a factor of 5× higher spatial resolution at coastal transitions. This article presents a brief description of the instrument and the measured performance of the completed AMR-C-A and AMR-C-B instruments."
example_title: "example 2"
- text: "The Landsat 9 will continue the Landsat data record into its fifth decade with a near-copy build of Landsat 8 with launch scheduled for December 2020. The two instruments on Landsat 9 are Thermal Infrared Sensor-2 (TIRS-2) and Operational Land Imager-2 (OLI-2)."
example_title: "example 3"
inference:
parameters:
aggregation_strategy: "simple"
---
# satellite-instrument-roberta-NER
For details, please visit the [GitHub link](https://github.com/Tsinghua-mLin/satellite-instrument-NER). |
m3hrdadfi/albert-fa-base-v2-sentiment-deepsentipers-binary | f5bbe4f6c33215bc1fede622864cf60ebac92ef9 | 2020-12-26T08:42:08.000Z | [
"pytorch",
"tf",
"albert",
"text-classification",
"fa",
"transformers",
"license:apache-2.0"
] | text-classification | false | m3hrdadfi | null | m3hrdadfi/albert-fa-base-v2-sentiment-deepsentipers-binary | 4 | null | transformers | 18,780 | ---
language: fa
license: apache-2.0
---
# ALBERT Persian
A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language
> میتونی بهش بگی برت_کوچولو
[ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt at ALBERT for the Persian language. The model was trained following Google's ALBERT Base v2.0 configuration on texts covering a variety of writing styles and subjects (e.g., scientific, novels, news), comprising more than 3.9M documents, 73M sentences, and 1.3B words, in the same way as ParsBERT.
Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models.
## Persian Sentiment [Digikala, SnappFood, DeepSentiPers]
This task aims to classify text, such as user comments, according to its emotional polarity. We tested three well-known datasets for this task: `Digikala` user comments, `SnappFood` user comments, and `DeepSentiPers`, the latter in both binary and multi-class forms.
### DeepSentiPers
DeepSentiPers is a balanced and augmented version of SentiPers that contains 12,138 user opinions about digital products, labeled with five classes: two positive (happy and delighted), two negative (furious and angry), and one neutral. The dataset can therefore be used for both multi-class and binary classification. In the binary case, the neutral class and its corresponding sentences are removed from the dataset.
**Binary:**
1. Negative (Furious + Angry)
2. Positive (Happy + Delighted)
**Multi**
1. Furious
2. Angry
3. Neutral
4. Happy
5. Delighted
| Label | # |
|:---------:|:----:|
| Furious | 236 |
| Angry | 1357 |
| Neutral | 2874 |
| Happy | 2848 |
| Delighted | 2516 |
**Download**
You can download the dataset from:
- [SentiPers](https://github.com/phosseini/sentipers)
- [DeepSentiPers](https://github.com/JoyeBright/DeepSentiPers)
## Results
The following table summarizes the F1 score obtained as compared to other models and architectures.
| Dataset | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT | DeepSentiPers |
|:------------------------:|:-----------------:|:-----------:|:-----:|:-------------:|
| SentiPers (Multi Class) | 66.12 | 71.11 | - | 69.33 |
| SentiPers (Binary Class) | 91.09 | 92.13 | - | 91.98 |
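For quick experimentation, a minimal, hedged usage sketch of the binary DeepSentiPers checkpoint (the returned label names depend on the model's own `id2label` mapping, which is not documented here):
```python
from transformers import pipeline

# Minimal sketch: binary Persian sentiment classification.
sentiment = pipeline(
    "text-classification",
    model="m3hrdadfi/albert-fa-base-v2-sentiment-deepsentipers-binary",
)
# "این محصول واقعا عالی بود" roughly means "This product was really great".
print(sentiment("این محصول واقعا عالی بود"))
```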
### BibTeX entry and citation info
Please cite in publications as the following:
```bibtex
@misc{ALBERTPersian,
author = {Mehrdad Farahani},
title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}},
}
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Questions?
Post a Github issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo. |
m3tafl0ps/autonlp-NLPIsFun-251844 | d115c71d65d7c40f0cf7fc4b2c9b71c935184891 | 2021-06-05T17:15:23.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:m3tafl0ps/autonlp-data-NLPIsFun",
"transformers",
"autonlp"
] | text-classification | false | m3tafl0ps | null | m3tafl0ps/autonlp-NLPIsFun-251844 | 4 | null | transformers | 18,781 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- m3tafl0ps/autonlp-data-NLPIsFun
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 251844
## Validation Metrics
- Loss: 0.38616305589675903
- Accuracy: 0.8356545961002786
- Precision: 0.8253968253968254
- Recall: 0.8571428571428571
- AUC: 0.9222387781709815
- F1: 0.8409703504043127
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/m3tafl0ps/autonlp-NLPIsFun-251844
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("m3tafl0ps/autonlp-NLPIsFun-251844", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("m3tafl0ps/autonlp-NLPIsFun-251844", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
macedonizer/sr-roberta-base | 2ff2fb34bd6561e1a1f79b76599a72b49457d3e7 | 2021-09-22T08:59:00.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"sr",
"dataset:wiki-sr",
"transformers",
"masked-lm",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | macedonizer | null | macedonizer/sr-roberta-base | 4 | null | transformers | 18,782 | ---
language:
- sr
thumbnail: https://huggingface.co/macedonizer/sr-roberta-base/lets-talk-about-nlp-sr.jpg
tags:
- masked-lm
license: apache-2.0
datasets:
- wiki-sr
---
# SR-RoBERTa base model
Pretrained model on the Serbian language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is case-sensitive: it makes a difference between скопје and Скопје.
# Model description
RoBERTa is a transformers model pre-trained on a large corpus of Serbian data in a self-supervised fashion. This means it was pre-trained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pre-trained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the Serbian language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the model as inputs.
# Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions of a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification, or question answering. For tasks such as text generation, you should look at models like GPT2.
# How to use
You can use this model directly with a pipeline for masked language modeling:
```python
from transformers import pipeline

unmasker = pipeline('fill-mask', model='macedonizer/sr-roberta-base')
unmasker("Београд је <mask> град Србије.")
```
```text
[{'score': 0.7834128141403198,
  'sequence': 'Београд је главни град Србије',
  'token': 3087,
  'token_str': ' главни'},
 {'score': 0.15424974262714386,
  'sequence': 'Београд је највећи град Србије',
  'token': 3916,
  'token_str': ' највећи'},
 {'score': 0.0035441946238279343,
  'sequence': 'Београд је најважнији град Србије',
  'token': 18577,
  'token_str': ' најважнији'},
 {'score': 0.003132033161818981,
  'sequence': 'Београд је велики град Србије',
  'token': 2063,
  'token_str': ' велики'},
 {'score': 0.0030417360831052065,
  'sequence': 'Београд је важан град Србије',
  'token': 9463,
  'token_str': ' важан'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import RobertaTokenizer, RobertaModel

tokenizer = RobertaTokenizer.from_pretrained('macedonizer/sr-roberta-base')
model = RobertaModel.from_pretrained('macedonizer/sr-roberta-base')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
madlag/bert-base-uncased-squadv1-x1.96-f88.3-d27-hybrid-filled-opt-v1 | 4af4ba55aee2c86bdd66f45986cfa5a0cc39af4a | 2021-06-16T14:54:10.000Z | [
"pytorch",
"tf",
"bert",
"question-answering",
"en",
"dataset:squad",
"transformers",
"license:mit",
"autotrain_compatible"
] | question-answering | false | madlag | null | madlag/bert-base-uncased-squadv1-x1.96-f88.3-d27-hybrid-filled-opt-v1 | 4 | null | transformers | 18,783 | ---
language: en
thumbnail:
license: mit
tags:
- question-answering
datasets:
- squad
metrics:
- squad
widget:
- text: "Where is the Eiffel Tower located?"
context: "The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France. It is named after the engineer Gustave Eiffel, whose company designed and built the tower."
- text: "Who is Frederic Chopin?"
context: "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano."
---
## BERT-base uncased model fine-tuned on SQuAD v1
This model was created using the [nn_pruning](https://github.com/huggingface/nn_pruning) python library: the **linear layers contain 27.0%** of the original weights.
This model **CANNOT be used without using nn_pruning `optimize_model`** function, as it uses NoNorms instead of LayerNorms and this is not currently supported by the Transformers library.
It uses ReLUs instead of GeLUs as in the initial BERT network, to speed up inference.
This does not need special handling, as it is supported by the Transformers library, and flagged in the model config by the ```"hidden_act": "relu"``` entry.
The model contains **43.0%** of the original weights **overall** (the embeddings account for a significant part of the model, and they are not pruned by this method).
With a simple resizing of the linear matrices it ran **1.96x as fast as bert-base-uncased** on the evaluation.
This is possible because the pruning method leads to structured matrices: hover over the plot below to see the non-zero/zero parts of each matrix.
<div class="graph"><script src="/madlag/bert-base-uncased-squadv1-x1.96-f88.3-d27-hybrid-filled-opt-v1/raw/main/model_card/density_info.js" id="aa996a95-2c09-4974-ae46-778cf5b50833"></script></div>
In terms of accuracy, its **F1 is 88.33**, compared with 88.5 for bert-base-uncased, an **F1 drop of 0.17**.
## Fine-Pruning details
This model was fine-tuned from the HuggingFace [model](https://huggingface.co/bert-base-uncased) checkpoint on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer), and distilled from the model [bert-large-uncased-whole-word-masking-finetuned-squad](https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad)
This model is case-insensitive: it does not make a difference between english and English.
A side-effect of the block pruning is that some of the attention heads are completely removed: 55 heads were removed out of a total of 144 (38.2%).
Here is a detailed view on how the remaining heads are distributed in the network after pruning.
<div class="graph"><script src="/madlag/bert-base-uncased-squadv1-x1.96-f88.3-d27-hybrid-filled-opt-v1/raw/main/model_card/pruning_info.js" id="d74872e0-a89c-4ce0-b0fa-1c5709b67cd9"></script></div>
## Details of the SQuAD1.1 dataset
| Dataset | Split | # samples |
| -------- | ----- | --------- |
| SQuAD1.1 | train | 90.6K |
| SQuAD1.1 | eval | 11.1k |
### Fine-tuning
- Python: `3.8.5`
- Machine specs:
```
CPU: Intel(R) Core(TM) i7-6700K CPU
Memory: 64 GiB
GPUs: 1 GeForce GTX 3090, with 24GiB memory
GPU driver: 455.23.05, CUDA: 11.1
```
### Results
**Pytorch model file size**: `374MB` (original BERT: `420MB`)
| Metric | # Value | # Original ([Table 2](https://www.aclweb.org/anthology/N19-1423.pdf))| Variation |
| ------ | --------- | --------- | --------- |
| **EM** | **81.31** | **80.8** | **+0.51**|
| **F1** | **88.33** | **88.5** | **-0.17**|
## Example Usage
Install nn_pruning: it contains the optimization script, which just packs the linear layers into smaller ones by removing empty rows/columns.
`pip install nn_pruning`
Then you can use the `transformers` library almost as usual: you just have to call `optimize_model` when the pipeline has loaded.
```python
from transformers import pipeline
from nn_pruning.inference_model_patcher import optimize_model
qa_pipeline = pipeline(
"question-answering",
model="madlag/bert-base-uncased-squadv1-x1.96-f88.3-d27-hybrid-filled-opt-v1",
tokenizer="madlag/bert-base-uncased-squadv1-x1.96-f88.3-d27-hybrid-filled-opt-v1"
)
print("bert-base-uncased parameters: 191.0M")
print(f"Parameters count (includes only head pruning, not feed forward pruning)={int(qa_pipeline.model.num_parameters() / 1E6)}M")
qa_pipeline.model = optimize_model(qa_pipeline.model, "dense")
print(f"Parameters count after complete optimization={int(qa_pipeline.model.num_parameters() / 1E6)}M")
predictions = qa_pipeline({
'context': "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano.",
'question': "Who is Frederic Chopin?",
})
print("Predictions", predictions)
``` |
mahaamami/distilroberta-base-model-transcript | 12bf8faac43fc003207fabcae72b29c6e8e5c500 | 2022-01-13T13:28:24.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | mahaamami | null | mahaamami/distilroberta-base-model-transcript | 4 | null | transformers | 18,784 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-model-transcript
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-model-transcript
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8922
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.1193 | 1.0 | 5570 | 1.9873 |
| 2.0502 | 2.0 | 11140 | 1.9304 |
| 1.9718 | 3.0 | 16710 | 1.8922 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
malay-huggingface/albert-large-bahasa-cased | 7edeea22528544bb9bfc780b4e8707647eff2952 | 2021-09-26T12:40:49.000Z | [
"pytorch",
"albert",
"fill-mask",
"ms",
"transformers",
"autotrain_compatible"
] | fill-mask | false | malay-huggingface | null | malay-huggingface/albert-large-bahasa-cased | 4 | null | transformers | 18,785 | ---
language: ms
---
# albert-large-bahasa-cased
Pretrained ALBERT large language model for Malay.
## Pretraining Corpus
The `albert-large-bahasa-cased` model was pretrained on ~1.4 billion words. Below is the list of data we trained on:
1. [cleaned local texts](https://github.com/huseinzol05/malay-dataset/tree/master/dumping/clean).
2. [translated The Pile](https://github.com/huseinzol05/malay-dataset/tree/master/corpus/pile).
## Pretraining details
- All steps can be reproduced from [Malaya/pretrained-model/albert](https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/albert).
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, and then initializing it like this:
```python
from transformers import AlbertTokenizer, AlbertModel
model = AlbertModel.from_pretrained('malay-huggingface/albert-large-bahasa-cased')
tokenizer = AlbertTokenizer.from_pretrained(
'malay-huggingface/albert-large-bahasa-cased',
do_lower_case = False,
)
```
## Example using AlbertForMaskedLM
```python
from transformers import AlbertTokenizer, AlbertForMaskedLM, pipeline
model = AlbertForMaskedLM.from_pretrained('malay-huggingface/albert-large-bahasa-cased')
tokenizer = AlbertTokenizer.from_pretrained(
'malay-huggingface/albert-large-bahasa-cased',
do_lower_case = False,
)
fill_mask = pipeline('fill-mask', model=model, tokenizer=tokenizer)
fill_mask('Permohonan Najib, anak untuk dengar isu perlembagaan [MASK] .')
```
Output is,
```text
[{'sequence': 'Permohonan Najib, anak untuk dengar isu perlembagaan Malaysia.',
'score': 0.09178723394870758,
'token': 1957,
'token_str': 'M a l a y s i a'},
{'sequence': 'Permohonan Najib, anak untuk dengar isu perlembagaan negara.',
'score': 0.053524162620306015,
'token': 2134,
'token_str': 'n e g a r a'},
{'sequence': 'Permohonan Najib, anak untuk dengar isu perlembagaan dikemukakan.',
'score': 0.031137527897953987,
'token': 9383,
'token_str': 'd i k e m u k a k a n'},
{'sequence': 'Permohonan Najib, anak untuk dengar isu perlembagaan 1MDB.',
'score': 0.02826082520186901,
'token': 13838,
'token_str': '1 M D B'},
{'sequence': 'Permohonan Najib, anak untuk dengar isu perlembagaan ditolak.',
'score': 0.026568090543150902,
'token': 11465,
'token_str': 'd i t o l a k'}]
```
|
malay-huggingface/bert-large-bahasa-cased | 702684329e92a5c7863a498cd28e4f07b41f1537 | 2021-09-11T16:10:26.000Z | [
"pytorch",
"bert",
"fill-mask",
"ms",
"transformers",
"autotrain_compatible"
] | fill-mask | false | malay-huggingface | null | malay-huggingface/bert-large-bahasa-cased | 4 | null | transformers | 18,786 | ---
language: ms
---
# bert-large-bahasa-cased
Pretrained BERT large language model for Malay.
## Pretraining Corpus
The `bert-large-bahasa-cased` model was pretrained on ~1.4 billion words. Below is the list of data we trained on:
1. [cleaned local texts](https://github.com/huseinzol05/malay-dataset/tree/master/dumping/clean).
2. [translated The Pile](https://github.com/huseinzol05/malay-dataset/tree/master/corpus/pile).
## Pretraining details
- All steps can be reproduced from [Malaya/pretrained-model/bert](https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/bert).
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, and then initializing it like this:
```python
from transformers import BertTokenizer, BertModel
model = BertModel.from_pretrained('malay-huggingface/bert-large-bahasa-cased')
tokenizer = BertTokenizer.from_pretrained(
'malay-huggingface/bert-large-bahasa-cased',
do_lower_case = False,
)
```
## Example using BertForMaskedLM
```python
from transformers import BertTokenizer, BertForMaskedLM, pipeline
model = BertForMaskedLM.from_pretrained('malay-huggingface/bert-large-bahasa-cased')
tokenizer = BertTokenizer.from_pretrained(
'malay-huggingface/bert-large-bahasa-cased',
do_lower_case = False,
)
fill_mask = pipeline('fill-mask', model=model, tokenizer=tokenizer)
fill_mask('Permohonan Najib, anak untuk dengar isu perlembagaan [MASK] .')
```
Output is,
```text
[{'sequence': 'Permohonan Najib, anak untuk dengar isu perlembagaan Malaysia.',
'score': 0.09178723394870758,
'token': 1957,
'token_str': 'M a l a y s i a'},
{'sequence': 'Permohonan Najib, anak untuk dengar isu perlembagaan negara.',
'score': 0.053524162620306015,
'token': 2134,
'token_str': 'n e g a r a'},
{'sequence': 'Permohonan Najib, anak untuk dengar isu perlembagaan dikemukakan.',
'score': 0.031137527897953987,
'token': 9383,
'token_str': 'd i k e m u k a k a n'},
{'sequence': 'Permohonan Najib, anak untuk dengar isu perlembagaan 1MDB.',
'score': 0.02826082520186901,
'token': 13838,
'token_str': '1 M D B'},
{'sequence': 'Permohonan Najib, anak untuk dengar isu perlembagaan ditolak.',
'score': 0.026568090543150902,
'token': 11465,
'token_str': 'd i t o l a k'}]
```
|
mamlong34/t5_large_race_cosmos_qa | a1f6e5fd3ab689acb1da1e673996ee0571671b83 | 2021-10-22T15:58:00.000Z | [
"pytorch",
"t5",
"text2text-generation",
"dataset:race",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | mamlong34 | null | mamlong34/t5_large_race_cosmos_qa | 4 | null | transformers | 18,787 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- race
metrics:
- accuracy
model-index:
- name: t5_large_race_cosmos_qa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5_large_race_cosmos_qa
This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on the race dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4382
- Accuracy: 0.8023
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:-----:|:--------:|:---------------:|
| 0.3513 | 1.0 | 10983 | 0.7714 | 0.3165 |
| 0.2109 | 2.0 | 21966 | 0.7986 | 0.3329 |
| 0.0929 | 3.0 | 32949 | 0.4382 | 0.8023 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0
- Datasets 1.14.0
- Tokenizers 0.10.3
|
mamlong34/t5_small_race_mutlirc | f5cc3900b971694685e5ea42f6ffedca0ea60632 | 2021-10-10T12:12:47.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | mamlong34 | null | mamlong34/t5_small_race_mutlirc | 4 | null | transformers | 18,788 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: t5_small_race_mutlirc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5_small_race_mutlirc
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5760
- Accuracy: 0.5259
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:-----:|:--------:|:---------------:|
| 0.6043 | 1.0 | 14141 | 0.4832 | 0.5925 |
| 0.5647 | 2.0 | 28282 | 0.5152 | 0.5659 |
| 0.5237 | 3.0 | 42423 | 0.5760 | 0.5259 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
|
manishiitg/distilbart-xsum-12-6-recruit-qa | 0ddd63c59120fbe3624d2a10c17251ac2307bbe3 | 2020-11-02T11:30:29.000Z | [
"pytorch",
"bart",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | manishiitg | null | manishiitg/distilbart-xsum-12-6-recruit-qa | 4 | null | transformers | 18,789 | Entry not found |
manueldeprada/t5-cord19 | c8d0776b17f0569181b1c8f01134ae4a88559b75 | 2021-04-25T23:12:15.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | manueldeprada | null | manueldeprada/t5-cord19 | 4 | null | transformers | 18,790 | # T5-base pretrained on CORD-19 dataset
The model has been pretrained on text and abstracts from the CORD-19 dataset, using a manually implemented denoising objective similar to the original T5 denoising objective.
The model needs to be fine-tuned on downstream tasks.
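A minimal, hedged loading sketch (the checkpoint is only denoising-pretrained, so raw generations are not expected to be useful before fine-tuning):
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Minimal sketch: load the CORD-19 pretrained T5 as a starting point for fine-tuning.
tokenizer = AutoTokenizer.from_pretrained("manueldeprada/t5-cord19")
model = T5ForConditionalGeneration.from_pretrained("manueldeprada/t5-cord19")
```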
Code available on GitHub: [https://github.com/manueldeprada/Pretraining-T5-PyTorch-Lightning](https://github.com/manueldeprada/Pretraining-T5-PyTorch-Lightning). |
maple/xlm-roberta-large | f0e767b44ffae83f9774a9995ecd4f209c478d33 | 2022-01-03T11:22:56.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | maple | null | maple/xlm-roberta-large | 4 | null | transformers | 18,791 | Entry not found |
marciovbarbosa/t5-small-finetuned-de-to-en-lr3e-4 | 383565855d66320c8e16032cde2eb26e85836c6e | 2021-12-04T03:33:12.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wmt16",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | marciovbarbosa | null | marciovbarbosa/t5-small-finetuned-de-to-en-lr3e-4 | 4 | null | transformers | 18,792 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: t5-small-finetuned-de-to-en-lr3e-4
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: de-en
metrics:
- name: Bleu
type: bleu
value: 11.9094
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-de-to-en-lr3e-4
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9059
- Bleu: 11.9094
- Gen Len: 17.2257
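A minimal inference sketch, assuming the usual T5 "translate German to English:" task prefix (the exact prefix used during fine-tuning is not stated in this card):
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "marciovbarbosa/t5-small-finetuned-de-to-en-lr3e-4"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# The task prefix below follows the standard T5 convention and is an assumption here.
text = "translate German to English: Das Haus ist wunderbar."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```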
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 272 | 1.8814 | 10.3468 | 17.2244 |
| 2.2309 | 2.0 | 544 | 1.8320 | 10.9949 | 17.2768 |
| 2.2309 | 3.0 | 816 | 1.8273 | 11.4299 | 17.2147 |
| 1.7515 | 4.0 | 1088 | 1.8321 | 11.5576 | 17.3191 |
| 1.7515 | 5.0 | 1360 | 1.8377 | 11.8255 | 17.2244 |
| 1.488 | 6.0 | 1632 | 1.8562 | 11.6741 | 17.2427 |
| 1.488 | 7.0 | 1904 | 1.8653 | 11.7363 | 17.2331 |
| 1.3301 | 8.0 | 2176 | 1.8938 | 12.0458 | 17.2044 |
| 1.3301 | 9.0 | 2448 | 1.9005 | 11.8676 | 17.2437 |
| 1.2241 | 10.0 | 2720 | 1.9059 | 11.9094 | 17.2257 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
marcolatella/hate_trained | 5f57d6f9f8f2c0429a73d034f615481f59997cb6 | 2021-12-11T00:02:24.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | marcolatella | null | marcolatella/hate_trained | 4 | null | transformers | 18,793 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: hate_trained
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: hate
metrics:
- name: F1
type: f1
value: 0.7875737774565976
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hate_trained
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8182
- F1: 0.7876
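A minimal inference sketch using the `text-classification` pipeline; the label names returned depend on the checkpoint's config and may appear as generic `LABEL_0`/`LABEL_1` if they were not set explicitly:
```
from transformers import pipeline

# Load the fine-tuned checkpoint into a text-classification pipeline.
classifier = pipeline("text-classification", model="marcolatella/hate_trained")

# Returns a list of dicts such as [{'label': ..., 'score': ...}].
print(classifier("I really enjoyed talking with everyone at the meetup."))
```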
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.7272339744854407e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4635 | 1.0 | 563 | 0.4997 | 0.7530 |
| 0.3287 | 2.0 | 1126 | 0.5138 | 0.7880 |
| 0.216 | 3.0 | 1689 | 0.6598 | 0.7821 |
| 0.1309 | 4.0 | 2252 | 0.8182 | 0.7876 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
maroo93/squad1.1_1 | 161192a4cec4efb749356c5239815da2f365b523 | 2021-05-19T23:08:41.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | maroo93 | null | maroo93/squad1.1_1 | 4 | null | transformers | 18,794 | Entry not found |
masapasa/xls-r-300m-it-cv8-ds13 | d943e85baf67bdce02dea2b33a0a151da2338473 | 2022-03-23T18:35:02.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"it",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | masapasa | null | masapasa/xls-r-300m-it-cv8-ds13 | 4 | 1 | transformers | 18,795 | ---
language:
- it
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: ''
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8.0
type: mozilla-foundation/common_voice_8_0
args: it
metrics:
- name: Test WER
type: wer
value: 100.0
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: it
metrics:
- name: Test WER
type: wer
value: 100.0
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: it
metrics:
- name: Test WER
type: wer
value: 100.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SV-SE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3549
- Wer: 0.3827
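A minimal inference sketch, assuming a 16 kHz mono audio file; resampling and chunking of long recordings are left out:
```
from transformers import pipeline

# Load the fine-tuned checkpoint into an ASR pipeline.
asr = pipeline("automatic-speech-recognition", model="masapasa/xls-r-300m-it-cv8-ds13")

# "sample.wav" is a placeholder path to a 16 kHz mono recording.
print(asr("sample.wav")["text"])
```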
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
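As a rough sketch (not the original training script), these settings correspond to a `TrainingArguments` configuration along the following lines; the effective batch size of 128 comes from 32 per device times 4 gradient-accumulation steps, and "Native AMP" corresponds to `fp16=True`:
```
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xls-r-300m-finetuned",   # illustrative placeholder
    learning_rate=7.5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,       # 32 * 4 = 128 effective train batch size
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=2000,
    num_train_epochs=50.0,
    fp16=True,                           # "Native AMP" mixed precision
)
```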
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4129 | 5.49 | 500 | 3.3224 | 1.0 |
| 2.9323 | 10.98 | 1000 | 2.9128 | 1.0000 |
| 1.6839 | 16.48 | 1500 | 0.7740 | 0.6854 |
| 1.485 | 21.97 | 2000 | 0.5830 | 0.5976 |
| 1.362 | 27.47 | 2500 | 0.4866 | 0.4905 |
| 1.2752 | 32.96 | 3000 | 0.4240 | 0.4967 |
| 1.1957 | 38.46 | 3500 | 0.3899 | 0.4258 |
| 1.1646 | 43.95 | 4000 | 0.3597 | 0.4014 |
| 1.1265 | 49.45 | 4500 | 0.3559 | 0.3829 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
maximedb/mqa-cross-encoder | 3e002fbebe222059311aaa68050570004ad81fb0 | 2021-11-18T16:33:52.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | maximedb | null | maximedb/mqa-cross-encoder | 4 | null | transformers | 18,796 | hello
|
maximedb/polyfaq_cross | 8b6f3ff2f19b3f9158f1af7e1c43b3504de8fd8b | 2022-01-17T18:32:14.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | maximedb | null | maximedb/polyfaq_cross | 4 | null | transformers | 18,797 | Entry not found |
mbeukman/xlm-roberta-base-finetuned-naija-finetuned-ner-naija | 0ee35960db9c197d9f47c751594386076a878553 | 2021-11-25T09:04:20.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"pcm",
"dataset:masakhaner",
"arxiv:2103.11811",
"transformers",
"NER",
"autotrain_compatible"
] | token-classification | false | mbeukman | null | mbeukman/xlm-roberta-base-finetuned-naija-finetuned-ner-naija | 4 | null | transformers | 18,798 | ---
language:
- pcm
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "Mixed Martial Arts joinbodi , Ultimate Fighting Championship , UFC don decide say dem go enta back di octagon on Saturday , 9 May , for Jacksonville , Florida ."
---
# xlm-roberta-base-finetuned-naija-finetuned-ner-naija
This is a token classification (specifically NER) model that fine-tuned [xlm-roberta-base-finetuned-naija](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-naija) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Nigerian Pidgin part.
More information, and other similar models can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer-based and was fine-tuned on the MasakhaNER dataset, a named entity recognition dataset containing mostly news articles in 10 different African languages.
The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, a batch size of 32, and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and the uploaded model performed the best of those 5 seeds (aggregate F1 on the test set).
This model was fine-tuned by me, Michael Beukman, while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and of reporting the resources used, we list here how long the training process took, as well as the minimum resources needed to reproduce it. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
## Intended Use
This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as generalisability and overall performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words or not contained in the training data. This could bias the models towards not finding, for example, names of people that have many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon, and may not have been found in the training data (due to e.g. different languages), would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes from only publicly available news sources, the only available data should cover public figures and those that agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics are computed on the MasakhaNER test set, whose distribution is similar to that of the training set, so they do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
## Caveats and Recommendations
In general, this model performed worse on the 'date' category than on the others, so if dates are a critical factor, this might need to be addressed, for example by collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category.
This model can predict the following labels for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-naija-finetuned-ner-naija](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-naija-finetuned-ner-naija) (This model) | [pcm](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-naija) | pcm | 88.06 | 87.04 | 89.12 | 90.00 | 88.00 | 81.00 | 92.00 |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-naija](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-naija) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | pcm | 89.12 | 87.84 | 90.42 | 90.00 | 89.00 | 82.00 | 94.00 |
| [xlm-roberta-base-finetuned-ner-naija](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-naija) | [base](https://huggingface.co/xlm-roberta-base) | pcm | 88.89 | 88.13 | 89.66 | 92.00 | 87.00 | 82.00 | 94.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline

# Load the fine-tuned checkpoint and its tokenizer from the Hub.
model_name = 'mbeukman/xlm-roberta-base-finetuned-naija-finetuned-ner-naija'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

# Wrap them in a token-classification (NER) pipeline and run it on one sentence.
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Mixed Martial Arts joinbodi , Ultimate Fighting Championship , UFC don decide say dem go enta back di octagon on Saturday , 9 May , for Jacksonville , Florida ."
ner_results = nlp(example)
print(ner_results)
```
|
mbeukman/xlm-roberta-base-finetuned-ner-hausa | 112c9abe8884dfb53264282776d76fe652cd5fe8 | 2021-11-25T09:04:25.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"ha",
"dataset:masakhaner",
"arxiv:2103.11811",
"transformers",
"NER",
"autotrain_compatible"
] | token-classification | false | mbeukman | null | mbeukman/xlm-roberta-base-finetuned-ner-hausa | 4 | null | transformers | 18,799 | ---
language:
- ha
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "A saurari cikakken rahoton wakilin Muryar Amurka Ibrahim Abdul'aziz"
---
# xlm-roberta-base-finetuned-ner-hausa
This is a token classification (specifically NER) model that fine-tuned [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Hausa part.
More information, and other similar models can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer-based and was fine-tuned on the MasakhaNER dataset, a named entity recognition dataset containing mostly news articles in 10 different African languages.
The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, a batch size of 32, and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and the uploaded model performed the best of those 5 seeds (aggregate F1 on the test set).
This model was fine-tuned by me, Michael Beukman, while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and of reporting the resources used, we list here how long the training process took, as well as the minimum resources needed to reproduce it. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
## Intended Use
This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as generalisability and overall performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words or not contained in the training data. This could bias the models towards not finding, for example, names of people that have many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon, and may not have been found in the training data (due to e.g. different languages), would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes from only publicly available news sources, the only available data should cover public figures and those that agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics are computed on the MasakhaNER test set, whose distribution is similar to that of the training set, so they do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
## Caveats and Recommendations
In general, this model performed worse on the 'date' category than on the others, so if dates are a critical factor, this might need to be addressed, for example by collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category.
This model can predict the following labels for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-ner-hausa](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-hausa) (This model) | [base](https://huggingface.co/xlm-roberta-base) | hau | 89.94 | 87.74 | 92.25 | 84.00 | 94.00 | 74.00 | 93.00 |
| [xlm-roberta-base-finetuned-hausa-finetuned-ner-hausa](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-hausa-finetuned-ner-hausa) | [hau](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-hausa) | hau | 92.27 | 90.46 | 94.16 | 85.00 | 95.00 | 80.00 | 97.00 |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-hausa](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-hausa) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | hau | 89.14 | 87.18 | 91.20 | 82.00 | 93.00 | 76.00 | 93.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline

# Load the fine-tuned checkpoint and its tokenizer from the Hub.
model_name = 'mbeukman/xlm-roberta-base-finetuned-ner-hausa'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

# Wrap them in a token-classification (NER) pipeline and run it on one sentence.
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "A saurari cikakken rahoton wakilin Muryar Amurka Ibrahim Abdul'aziz"
ner_results = nlp(example)
print(ner_results)
```
|