modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-04 12:29:36) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 468 classes) | tags (sequence, 1 to 4.05k items) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-04 12:29:27) | card (string, 11 chars to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
ielab/unicoil-tilde128-msmarco-passage | ielab | 2021-10-31T13:57:26Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | uniCOIL trained with passages expanded with TILDE (m=128) |
ielab/TILDEv2-TILDE128-exp | ielab | 2021-10-31T13:51:09Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | TILDEv2 trained with passages expanded with TILDE (m=128) |
ielab/TILDEv2-TILDE200-exp | ielab | 2021-10-31T13:50:55Z | 26 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | TILDEv2 trained with passages expanded with TILDE (m=200) |
ielab/unicoil-tilde200-msmarco-passage | ielab | 2021-10-31T13:50:01Z | 20 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | uniCOIL trained with passages expanded with TILDE (m=200) |
huggingtweets/harbogomps | huggingtweets | 2021-10-30T21:14:54Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/harbogomps/1635628393154/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1064019238279495680/-EPf-JLO_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">🧛 Harbo Chomps 🧛</div>
<div style="text-align: center; font-size: 14px;">@harbogomps</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 🧛 Harbo Chomps 🧛.
| Data | 🧛 Harbo Chomps 🧛 |
| --- | --- |
| Tweets downloaded | 515 |
| Retweets | 189 |
| Short tweets | 92 |
| Tweets kept | 234 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ao36t1el/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @harbogomps's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3b5rtb6c) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3b5rtb6c/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/harbogomps')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingartists/linkin-park | huggingartists | 2021-10-30T14:56:26Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/linkin-park",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- huggingartists/linkin-park
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/a865aac7693c39977b9b402dc364908e.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Linkin Park</div>
<a href="https://genius.com/artists/linkin-park">
<div style="text-align: center; font-size: 14px;">@linkin-park</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Linkin Park.
The dataset is available [here](https://huggingface.co/datasets/huggingartists/linkin-park) and can be loaded with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/linkin-park")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/3mtr0u4z/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Linkin Park's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/fxn4brd6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/fxn4brd6/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingartists/linkin-park')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/linkin-park")
model = AutoModelWithLMHead.from_pretrained("huggingartists/linkin-park")
```
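As an illustrative follow-up (not part of the original card), here is a minimal, self-contained sketch of sampling text from this checkpoint; the generation parameters are assumptions:
```python
# Illustrative sketch: sample a short continuation from the Linkin Park checkpoint
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("huggingartists/linkin-park")
model = AutoModelWithLMHead.from_pretrained("huggingartists/linkin-park")

inputs = tokenizer("I am", return_tensors="pt")
outputs = model.generate(**inputs, max_length=50, do_sample=True, top_p=0.95,
                         pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```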
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingtweets/phaggotthefrog | huggingtweets | 2021-10-30T10:52:42Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/phaggotthefrog/1635591158850/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1444194494430081025/FVUA149U_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Anti-Soap Frög 🐀</div>
<div style="text-align: center; font-size: 14px;">@phaggotthefrog</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Anti-Soap Frög 🐀.
| Data | Anti-Soap Frög 🐀 |
| --- | --- |
| Tweets downloaded | 3226 |
| Retweets | 629 |
| Short tweets | 738 |
| Tweets kept | 1859 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3el8bjuf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @phaggotthefrog's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2qjb6app) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2qjb6app/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/phaggotthefrog')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/rufandom | huggingtweets | 2021-10-30T09:37:07Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/rufandom/1635586623585/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1375014984799944705/bcaZBnKn_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Грейс| Мультифандом✨</div>
<div style="text-align: center; font-size: 14px;">@rufandom</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Грейс| Мультифандом✨.
| Data | Грейс\| Мультифандом✨ |
| --- | --- |
| Tweets downloaded | 977 |
| Retweets | 549 |
| Short tweets | 15 |
| Tweets kept | 413 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1wthxx9x/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @rufandom's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/10tid4s1) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/10tid4s1/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/rufandom')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
celtics1863/env-bert-cls-chinese | celtics1863 | 2021-10-30T09:27:10Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"environment",
"multi-class",
"classification",
"zh",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
language:
- zh
tags:
- bert
- pytorch
- environment
- multi-class
- classification
---
A Chinese environmental-domain text classification model, fine-tuned from env-bert-chinese on a 1.6M dataset.
It covers 10 categories: environmental impact assessment and control, carbon emission control, water pollution control, air pollution control, soil pollution control, environmental ecology, solid waste, environmental toxicology and health, environmental microbiology, and environmental policy and economics.
The project is ongoing, and related content will be updated over time.
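A minimal usage sketch with the Transformers text-classification pipeline (assumed usage, not from the original card; the example sentence is illustrative):
```python
# Assumed usage sketch: classify a Chinese environmental-domain sentence
from transformers import pipeline

classifier = pipeline("text-classification", model="celtics1863/env-bert-cls-chinese")
# "Industrial wastewater must be treated to the required standard before discharge."
print(classifier("工业废水必须经过处理达标后才能排放。"))
```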
Research group, School of Environment, Tsinghua University.
For related needs or suggestions, contact [email protected] |
adam3242/test | adam3242 | 2021-10-30T08:31:53Z | 0 | 0 | null | [
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
title: Twitter Sentiments
emoji: 😍
colorFrom: yellow
colorTo: blue
sdk: streamlit
app_file: app.py
pinned: false
---
# Configuration
`title`: _string_
Display title for the Space
`emoji`: _string_
Space emoji (emoji-only character allowed)
`colorFrom`: _string_
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
`colorTo`: _string_
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
`sdk`: _string_
Can be either `gradio` or `streamlit`
`app_file`: _string_
Path to your main application file (which contains either `gradio` or `streamlit` Python code).
Path is relative to the root of the repository.
`pinned`: _boolean_
Whether the Space stays on top of your list.
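For illustration only, a hypothetical `app.py` matching the `streamlit` SDK declared above; the default sentiment model and the widgets are assumptions, not part of this Space:
```python
# Hypothetical app.py sketch for a Streamlit Space (illustrative only)
import streamlit as st
from transformers import pipeline

st.title("Twitter Sentiments")
classifier = pipeline("sentiment-analysis")  # default model; the Space's actual model is unknown

text = st.text_area("Enter a tweet")
if text:
    result = classifier(text)[0]
    st.write(f"{result['label']} ({result['score']:.2f})")
```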
|
huggingtweets/elonmusk-kanyewest | huggingtweets | 2021-10-29T17:29:10Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/elonmusk-kanyewest/1635528546431/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1442634650703237120/mXIcYtIs_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1276461929934942210/cqNhNk6v_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & ye</div>
<div style="text-align: center; font-size: 14px;">@elonmusk-kanyewest</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Elon Musk & ye.
| Data | Elon Musk | ye |
| --- | --- | --- |
| Tweets downloaded | 3249 | 1856 |
| Retweets | 185 | 186 |
| Short tweets | 853 | 573 |
| Tweets kept | 2211 | 1097 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3ceinvzc/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @elonmusk-kanyewest's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/16csk8qn) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/16csk8qn/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/elonmusk-kanyewest')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/incharmuese-sadsocrates-vvangone | huggingtweets | 2021-10-29T15:35:31Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/incharmuese-sadsocrates-vvangone/1635521727120/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/581592941124153346/5nfUJyU2_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/561419401145376768/7OIwxUCC_400x400.jpeg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1190256978007904257/TsXH7_nP_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Charmeuse & Sad Socrates & Vincent Van Gone</div>
<div style="text-align: center; font-size: 14px;">@incharmuese-sadsocrates-vvangone</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Charmeuse & Sad Socrates & Vincent Van Gone.
| Data | Charmeuse | Sad Socrates | Vincent Van Gone |
| --- | --- | --- | --- |
| Tweets downloaded | 3238 | 3197 | 3233 |
| Retweets | 1165 | 40 | 1054 |
| Short tweets | 248 | 161 | 266 |
| Tweets kept | 1825 | 2996 | 1913 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/13ochftk/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @incharmuese-sadsocrates-vvangone's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/173sb7ob) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/173sb7ob/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/incharmuese-sadsocrates-vvangone')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/cnn-elonmusk-kanyewest | huggingtweets | 2021-10-29T15:21:45Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1276461929934942210/cqNhNk6v_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1442634650703237120/mXIcYtIs_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1278259160644227073/MfCyF7CG_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ye & Elon Musk & CNN</div>
<div style="text-align: center; font-size: 14px;">@cnn-elonmusk-kanyewest</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ye & Elon Musk & CNN.
| Data | ye | Elon Musk | CNN |
| --- | --- | --- | --- |
| Tweets downloaded | 1856 | 3250 | 3250 |
| Retweets | 186 | 186 | 104 |
| Short tweets | 573 | 853 | 18 |
| Tweets kept | 1097 | 2211 | 3128 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3ehxjxud/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cnn-elonmusk-kanyewest's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1dcouz7e) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1dcouz7e/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/cnn-elonmusk-kanyewest')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/yierpaen | huggingtweets | 2021-10-29T14:00:32Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/yierpaen/1635516027908/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1428772517347479552/fT9QUaOy_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Erpan Pardon</div>
<div style="text-align: center; font-size: 14px;">@yierpaen</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Erpan Pardon.
| Data | Erpan Pardon |
| --- | --- |
| Tweets downloaded | 3025 |
| Retweets | 2613 |
| Short tweets | 106 |
| Tweets kept | 306 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2jk3rfqi/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @yierpaen's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/y2mm5kxj) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/y2mm5kxj/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/yierpaen')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
furyhawk/t5-small-finetuned-bbc | furyhawk | 2021-10-29T11:01:51Z | 7 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetuned-bbc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-bbc
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3238
- Rouge1: 21.2266
- Rouge2: 16.0927
- Rougel: 19.6785
- Rougelsum: 19.8849
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
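Since the card leaves intended usage unspecified, the following is a hedged sketch (an assumption inferred from the ROUGE metrics above, not the author's documented usage) of summarization with this checkpoint:
```python
# Assumed usage sketch: summarize a BBC-style article with the fine-tuned T5 checkpoint
from transformers import pipeline

summarizer = pipeline("summarization", model="furyhawk/t5-small-finetuned-bbc")
article = "Replace this placeholder with the full text of a BBC news article to be summarized."
print(summarizer(article, max_length=48, min_length=10)[0]["summary_text"])
```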
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.4882 | 1.0 | 1001 | 0.3238 | 21.2266 | 16.0927 | 19.6785 | 19.8849 | 19.0 |
### Framework versions
- Transformers 4.12.0
- Pytorch 1.10.0
- Datasets 1.14.0
- Tokenizers 0.10.3
|
shiqing/opus-mt-en-zh-finetuned-en-to-zh | shiqing | 2021-10-29T08:38:40Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: opus-mt-en-zh-finetuned-en-to-zh
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-zh-finetuned-en-to-zh
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-zh](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
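Since the card does not document usage, here is a hedged sketch of English-to-Chinese translation with this checkpoint (assumed usage, inferred from the base model):
```python
# Assumed usage sketch: translate English to Chinese with the fine-tuned Marian checkpoint
from transformers import pipeline

translator = pipeline("translation", model="shiqing/opus-mt-en-zh-finetuned-en-to-zh")
print(translator("Machine translation is fun.")[0]["translation_text"])
```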
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| No log | 1.0 | 10 | 4.0166 | 1.3628 | 416.6867 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cpu
- Datasets 1.14.0
- Tokenizers 0.10.3
|
classla/bcms-bertic | classla | 2021-10-29T08:20:06Z | 1,597 | 15 | transformers | [
"transformers",
"pytorch",
"electra",
"pretraining",
"hr",
"bs",
"sr",
"cnr",
"hbs",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language:
- hr
- bs
- sr
- cnr
- hbs
license: apache-2.0
---
# BERTić* [bert-ich] /bɜrtitʃ/ - A transformer language model for Bosnian, Croatian, Montenegrin and Serbian
* The name reflects the facts (1) that the model was trained in Zagreb, Croatia, where diminutives ending in -ić (as in fotić, smajlić, hengić, etc.) are very popular, and (2) that most surnames in the countries where these languages are spoken end in -ić (with diminutive etymology as well).
This Electra model was trained on more than 8 billion tokens of Bosnian, Croatian, Montenegrin and Serbian text.
***new*** We have published a version of this model fine-tuned on the named entity recognition task ([bcms-bertic-ner](https://huggingface.co/classla/bcms-bertic-ner)) and on the hate speech detection task ([bcms-bertic-frenk-hate](https://huggingface.co/classla/bcms-bertic-frenk-hate)).
If you use the model, please cite the following paper:
```
@inproceedings{ljubesic-lauc-2021-bertic,
title = "{BERT}i{\'c} - The Transformer Language Model for {B}osnian, {C}roatian, {M}ontenegrin and {S}erbian",
author = "Ljube{\v{s}}i{\'c}, Nikola and Lauc, Davor",
booktitle = "Proceedings of the 8th Workshop on Balto-Slavic Natural Language Processing",
month = apr,
year = "2021",
address = "Kiyv, Ukraine",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.bsnlp-1.5",
pages = "37--42",
}
```
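For quick experimentation, a minimal sketch of loading the model for feature extraction with the generic Auto classes (illustrative, not part of the original card; the example sentence is Croatian for "Zagreb is the capital of Croatia"):
```python
# Illustrative sketch: contextual embeddings with BERTić
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("classla/bcms-bertic")
model = AutoModel.from_pretrained("classla/bcms-bertic")

inputs = tokenizer("Zagreb je glavni grad Hrvatske.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, tokens, hidden_size)
```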
## Benchmarking
Comparing this model to [multilingual BERT](https://huggingface.co/bert-base-multilingual-cased) and [CroSloEngual BERT](https://huggingface.co/EMBEDDIA/crosloengual-bert) on the tasks of (1) part-of-speech tagging, (2) named entity recognition, (3) geolocation prediction, and (4) commonsense causal reasoning, shows the BERTić model to be superior to the other two.
### Part-of-speech tagging
Evaluation metric is (seqeval) microF1. Reported are means of five runs. Best results are presented in bold. Statistical significance is calculated between two best-performing systems via a two-tailed t-test (* p<=0.05, ** p<=0.01, *** p<=0.001, ***** p<=0.0001).
Dataset | Language | Variety | CLASSLA | mBERT | cseBERT | BERTić
---|---|---|---|---|---|---
hr500k | Croatian | standard | 93.87 | 94.60 | 95.74 | **95.81*****
reldi-hr | Croatian | internet non-standard | - | 88.87 | 91.63 | **92.28*****
SETimes.SR | Serbian | standard | 95.00 | 95.50 | **96.41** | 96.31
reldi-sr | Serbian | internet non-standard | - | 91.26 | 93.54 | **93.90*****
### Named entity recognition
Evaluation metric is (seqeval) microF1. Reported are means of five runs. Best results are presented in bold. Statistical significance is calculated between two best-performing systems via a two-tailed t-test (* p<=0.05, ** p<=0.01, *** p<=0.001, ***** p<=0.0001).
Dataset | Language | Variety | CLASSLA | mBERT | cseBERT | BERTić
---|---|---|---|---|---|---
hr500k | Croatian | standard | 80.13 | 85.67 | 88.98 | **89.21******
reldi-hr | Croatian | internet non-standard | - | 76.06 | 81.38 | **83.05******
SETimes.SR | Serbian | standard | 84.64 | **92.41** | 92.28 | 92.02
reldi-sr | Serbian | internet non-standard | - | 81.29 | 82.76 | **87.92******
### Geolocation prediction
The dataset comes from the VarDial 2020 evaluation campaign's shared task on [Social Media variety Geolocation prediction](https://sites.google.com/view/vardial2020/evaluation-campaign). The task is to predict the latitude and longitude of a tweet given its text.
Evaluation metrics are median and mean of distance between gold and predicted geolocations (lower is better). No statistical significance is computed due to large test set (39,723 instances). Centroid baseline predicts each text to be created in the centroid of the training dataset.
System | Median | Mean
---|---|---
centroid | 107.10 | 145.72
mBERT | 42.25 | 82.05
cseBERT | 40.76 | 81.88
BERTić | **37.96** | **79.30**
### Choice Of Plausible Alternatives
The dataset is a translation of the [COPA dataset](https://people.ict.usc.edu/~gordon/copa.html) into Croatian ([link to the dataset](http://hdl.handle.net/11356/1404)).
Evaluation metric is accuracy. Reported are means of five runs. Best results are presented in bold. Statistical significance is calculated between two best-performing systems via a two-tailed t-test (* p<=0.05, ** p<=0.01, *** p<=0.001, ***** p<=0.0001).
System | Accuracy
---|---
random | 50.00
mBERT | 54.12
cseBERT | 61.80
BERTić | **65.76****
|
vijayv500/DialoGPT-small-Big-Bang-Theory-Series-Transcripts | vijayv500 | 2021-10-29T07:39:27Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
tags:
- conversational
license: mit
---
## I fine-tuned the DialoGPT-small model on "The Big Bang Theory" TV series transcripts dataset from Kaggle (https://www.kaggle.com/mitramir5/the-big-bang-theory-series-transcript)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("vijayv500/DialoGPT-small-Big-Bang-Theory-Series-Transcripts")
model = AutoModelForCausalLM.from_pretrained("vijayv500/DialoGPT-small-Big-Bang-Theory-Series-Transcripts")
# Let's chat for 5 lines
# Let's chat for 5 lines
for step in range(5):
    # encode the new user input, add the eos_token and return a tensor in Pytorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
    # generate a response while limiting the total chat history to 200 tokens
    chat_history_ids = model.generate(
        bot_input_ids, max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=3,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8
    )
    # pretty print last output tokens from bot
    print("TBBT Bot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
``` |
aguilara42/openl3-labeler-w-timestamps | aguilara42 | 2021-10-29T01:38:54Z | 0 | 1 | null | [
"audacity",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
tags:
- audacity
inference: false
---
# Labeler With Timestamps
## Being used for the `Audio Labeler` effect in Audacity
This is an audio labeler model, used in Audacity's labeler effect.
metadata:
```
{
"sample_rate": 48000,
"domain_tags": ["Music"],
"tags": ["Audio Labeler"],
"effect_type": "waveform-to-labels",
"multichannel": false,
"labels": ["Acoustic Guitar", "Auxiliary Percussion", "Brass", "Clean Electric Guitar", "Distorted Electric Guitar", "Double Bass", "Drum Set", "Electric Bass", "Flute", "piano", "Reeds", "Saxophone", "Strings", "Trumpet", "Voice"],
"short_description": "Use me to label some instruments!",
"long_description": "An audio labeler, which outputs label predictions and time ranges for the labels. This model can label various instruments listed in the labels section."
}
``` |
bochaowei/t5-small-finetuned-cnn-wei1 | bochaowei | 2021-10-28T20:24:24Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: t5-small-finetuned-cnn-wei1
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 41.1796
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnn-wei1
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6819
- Rouge1: 41.1796
- Rouge2: 18.9426
- Rougel: 29.2338
- Rougelsum: 38.4087
- Gen Len: 72.7607
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.8582 | 1.0 | 23927 | 1.6819 | 41.1796 | 18.9426 | 29.2338 | 38.4087 | 72.7607 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
sparki/kinkyfurs-gpt2 | sparki | 2021-10-28T16:26:08Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"en",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
license: mit
---
Import it using a pipeline:
```python
from transformers import pipeline

text_generation = pipeline('text-generation', model='sparki/kinkyfurs-gpt2')
```
Then use it:
```python
prefix_text = input()
text_generation(prefix_text, max_length=50, num_beams=5, no_repeat_ngram_size=2, early_stopping=True)
```
|
patrickvonplaten/sew-d-small-100k-ft-timit-2 | patrickvonplaten | 2021-10-28T15:51:49Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"sew-d",
"automatic-speech-recognition",
"timit_asr",
"generated_from_trainer",
"dataset:timit_asr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- timit_asr
- generated_from_trainer
datasets:
- timit_asr
model-index:
- name: sew-d-small-100k-ft-timit-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sew-d-small-100k-ft-timit-2
This model is a fine-tuned version of [asapp/sew-d-small-100k](https://huggingface.co/asapp/sew-d-small-100k) on the TIMIT_ASR - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7357
- Wer: 0.7935
## Model description
More information needed
## Intended uses & limitations
More information needed
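The card does not document usage; the sketch below is a hedged assumption of CTC transcription with this checkpoint, and it assumes the repository ships a processor (feature extractor plus character tokenizer):
```python
# Assumed usage sketch: CTC transcription with the fine-tuned SEW-D checkpoint
import numpy as np
import torch
from transformers import AutoProcessor, SEWDForCTC

processor = AutoProcessor.from_pretrained("patrickvonplaten/sew-d-small-100k-ft-timit-2")
model = SEWDForCTC.from_pretrained("patrickvonplaten/sew-d-small-100k-ft-timit-2")

speech = np.random.randn(16000).astype("float32")  # placeholder: replace with a real 16 kHz waveform
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```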
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.1554 | 0.69 | 100 | 4.0531 | 1.0 |
| 2.9584 | 1.38 | 200 | 2.9775 | 1.0 |
| 2.9355 | 2.07 | 300 | 2.9412 | 1.0 |
| 2.9048 | 2.76 | 400 | 2.9143 | 1.0 |
| 2.8568 | 3.45 | 500 | 2.8786 | 1.0 |
| 2.7248 | 4.14 | 600 | 2.7553 | 0.9833 |
| 2.6124 | 4.83 | 700 | 2.5874 | 1.0511 |
| 2.5463 | 5.52 | 800 | 2.4630 | 1.0883 |
| 2.3302 | 6.21 | 900 | 2.3948 | 1.0651 |
| 2.0669 | 6.9 | 1000 | 2.2228 | 0.9920 |
| 2.1991 | 7.59 | 1100 | 2.0815 | 0.9185 |
| 2.293 | 8.28 | 1200 | 2.0229 | 0.8674 |
| 2.0366 | 8.97 | 1300 | 1.9590 | 0.9165 |
| 1.767 | 9.66 | 1400 | 1.9129 | 0.8125 |
| 1.6222 | 10.34 | 1500 | 1.8868 | 0.8259 |
| 2.173 | 11.03 | 1600 | 1.8691 | 0.8661 |
| 1.8614 | 11.72 | 1700 | 1.8388 | 0.8250 |
| 1.5928 | 12.41 | 1800 | 1.8528 | 0.7772 |
| 1.5978 | 13.1 | 1900 | 1.8002 | 0.7892 |
| 1.9886 | 13.79 | 2000 | 1.7848 | 0.8448 |
| 1.8042 | 14.48 | 2100 | 1.7819 | 0.8156 |
| 1.5488 | 15.17 | 2200 | 1.7615 | 0.8228 |
| 1.4468 | 15.86 | 2300 | 1.7565 | 0.7946 |
| 1.8153 | 16.55 | 2400 | 1.7537 | 0.8341 |
| 1.77 | 17.24 | 2500 | 1.7527 | 0.7958 |
| 1.4742 | 17.93 | 2600 | 1.7592 | 0.7850 |
| 1.4088 | 18.62 | 2700 | 1.7421 | 0.8149 |
| 1.7066 | 19.31 | 2800 | 1.7382 | 0.7977 |
| 1.7068 | 20.0 | 2900 | 1.7357 | 0.7935 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.1
- Datasets 1.14.1.dev0
- Tokenizers 0.10.3
|
furyhawk/t5-base-finetuned-bbc-headline | furyhawk | 2021-10-28T15:44:15Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-base-finetuned-bbc-headline
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-bbc-headline
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 167 | 2.2978 | 31.8313 | 10.3824 | 29.6182 | 29.4336 | 10.3153 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
|
patrickvonplaten/sew-d-small-100k-ft-timit | patrickvonplaten | 2021-10-28T15:26:02Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"sew-d",
"automatic-speech-recognition",
"timit_asr",
"generated_from_trainer",
"dataset:timit_asr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- timit_asr
- generated_from_trainer
datasets:
- timit_asr
model-index:
- name: sew-d-small-100k-ft-timit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sew-d-small-100k-ft-timit
This model is a fine-tuned version of [asapp/sew-d-small-100k](https://huggingface.co/asapp/sew-d-small-100k) on the TIMIT_ASR - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7482
- Wer: 0.7987
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.2068 | 0.69 | 100 | 4.0802 | 1.0 |
| 2.9805 | 1.38 | 200 | 2.9792 | 1.0 |
| 2.9781 | 2.07 | 300 | 2.9408 | 1.0 |
| 2.9655 | 2.76 | 400 | 2.9143 | 1.0 |
| 2.8953 | 3.45 | 500 | 2.8775 | 1.0 |
| 2.7719 | 4.14 | 600 | 2.7815 | 0.9999 |
| 2.6531 | 4.83 | 700 | 2.6375 | 1.0065 |
| 2.6425 | 5.52 | 800 | 2.5602 | 1.0210 |
| 2.3963 | 6.21 | 900 | 2.4665 | 1.0591 |
| 2.1447 | 6.9 | 1000 | 2.2792 | 0.9848 |
| 2.2719 | 7.59 | 1100 | 2.2237 | 0.9465 |
| 2.3629 | 8.28 | 1200 | 2.1058 | 0.8907 |
| 2.0913 | 8.97 | 1300 | 2.0113 | 0.9070 |
| 1.8334 | 9.66 | 1400 | 1.9466 | 0.8177 |
| 1.6608 | 10.34 | 1500 | 1.9217 | 0.8698 |
| 2.2194 | 11.03 | 1600 | 1.9091 | 0.8727 |
| 1.9002 | 11.72 | 1700 | 1.8746 | 0.8332 |
| 1.6268 | 12.41 | 1800 | 1.8782 | 0.7951 |
| 1.6455 | 13.1 | 1900 | 1.8230 | 0.8225 |
| 2.0308 | 13.79 | 2000 | 1.8067 | 0.8560 |
| 1.855 | 14.48 | 2100 | 1.8129 | 0.8177 |
| 1.5901 | 15.17 | 2200 | 1.7891 | 0.8367 |
| 1.4848 | 15.86 | 2300 | 1.7821 | 0.8201 |
| 1.8754 | 16.55 | 2400 | 1.7700 | 0.8137 |
| 1.7975 | 17.24 | 2500 | 1.7795 | 0.8171 |
| 1.5194 | 17.93 | 2600 | 1.7605 | 0.7977 |
| 1.4374 | 18.62 | 2700 | 1.7529 | 0.7978 |
| 1.7498 | 19.31 | 2800 | 1.7522 | 0.8023 |
| 1.7452 | 20.0 | 2900 | 1.7482 | 0.7987 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.1
- Datasets 1.14.1.dev0
- Tokenizers 0.10.3
|
asapp/sew-d-small-100k | asapp | 2021-10-28T14:05:24Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"sew-d",
"feature-extraction",
"speech",
"en",
"dataset:librispeech_asr",
"arxiv:2109.06870",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# SEW-D-small
[SEW-D by ASAPP Research](https://github.com/asappresearch/sew)
The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc...
Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi
**Abstract**
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
The original model can be found under https://github.com/asappresearch/sew#model-checkpoints .
# Usage
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWDForCTC`.
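As a hedged illustration of the feature-extraction use case (assumptions: the repository ships a feature-extractor config, and the random array stands in for a real 16 kHz waveform):
```python
# Illustrative sketch: frame-level speech representations from the pretrained SEW-D model
import numpy as np
import torch
from transformers import AutoFeatureExtractor, SEWDModel

extractor = AutoFeatureExtractor.from_pretrained("asapp/sew-d-small-100k")
model = SEWDModel.from_pretrained("asapp/sew-d-small-100k")

speech = np.random.randn(16000).astype("float32")  # placeholder: one second of 16 kHz audio
inputs = extractor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # frame-level representations
print(hidden_states.shape)
```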
|
asapp/sew-d-base-plus-100k | asapp | 2021-10-28T13:48:40Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"sew-d",
"feature-extraction",
"speech",
"en",
"dataset:librispeech_asr",
"arxiv:2109.06870",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# SEW-D-base+
[SEW-D by ASAPP Research](https://github.com/asappresearch/sew)
The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc...
Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi
**Abstract**
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
The original model can be found under https://github.com/asappresearch/sew#model-checkpoints .
# Usage
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWDForCTC`.
|
SajjadAyoubi/distil-bigbird-fa-zwnj | SajjadAyoubi | 2021-10-28T13:14:34Z | 83 | 0 | transformers | [
"transformers",
"pytorch",
"big_bird",
"fill-mask",
"arxiv:1810.04805",
"arxiv:2005.12515",
"arxiv:2007.14062",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:04Z | <span align="center">
<a href="https://huggingface.co/SajjadAyoubi/"><img src="https://img.shields.io/static/v1?label=%F0%9F%A4%97%20Hugging%20Face&message=SajjadAyoubi&color=yellow"></a>
<a href="https://colab.research.google.com/github/sajjjadayobi/PersianQA/blob/main/notebooks/Demo.ipynb"><img src="https://img.shields.io/static/v1?label=Colab&message=Fine-tuning Example&logo=Google%20Colab&color=f9ab00"></a>
</span>
# ParsBigBird: Persian Bert For **Long-Range** Sequences
The [Bert](https://arxiv.org/abs/1810.04805) and [ParsBert](https://arxiv.org/abs/2005.12515) models can handle texts of up to 512 tokens; however, many tasks such as summarization and question answering require longer texts. In this work, we trained the [BigBird](https://arxiv.org/abs/2007.14062) model for Persian (Farsi) to process texts of up to 4096 tokens using sparse attention.
## Evaluation: 🌡️
We have evaluated the model on three tasks with different sequence lengths
| Name | Params | SnappFood (F1) | Digikala Magazine(F1) | PersianQA (F1) |
| :--------------------------------------------------------------: | :----: | :-----------------: | :---------------: | :--------------: |
| [distil-bigbird-fa-zwnj](https://github.com/sajjjadayobi/ParsBigBird) | 78M | 85.43% | **94.05%** | **73.34%** |
| [bert-base-fa](https://github.com/hooshvare/parsbert) | 118M | **87.98%** | 93.65% | 70.06% |
- Despite being only as large as DistilBERT, the model performs on par with ParsBert and is much better on PersianQA, which requires much more context
- This evaluation was based on `max_length=2048` (it can be increased up to 4096)
## How to use❓
### As Contextualized Word Embedding
```python
from transformers import BigBirdModel, AutoTokenizer
MODEL_NAME = "SajjadAyoubi/distil-bigbird-fa-zwnj"
# by default the model uses `block_sparse` attention with block_size=32
model = BigBirdModel.from_pretrained(MODEL_NAME, block_size=32)
# alternatively, use full attention when the input is no longer than 512 tokens
model = BigBirdModel.from_pretrained(MODEL_NAME, attention_type="original_full")
text = "😃 امیدوارم مدل بدردبخوری باشه چون خیلی طول کشید تا ترین بشه"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokens = tokenizer(text, return_tensors='pt')
output = model(**tokens) # contextualized embedding
```
### As Fill Blank
```python
from transformers import pipeline
MODEL_NAME = 'SajjadAyoubi/distil-bigbird-fa-zwnj'
fill = pipeline('fill-mask', model=MODEL_NAME, tokenizer=MODEL_NAME)
results = fill('تهران پایتخت [MASK] است.')
print(results[0]['token_str'])
>>> 'ایران'
```
## Pretraining details: 🔭
This model was pretrained using a masked language model (MLM) objective on the Persian section of the Oscar dataset. Following the original BERT training, 15% of tokens were masked. This was first described in this [paper](https://arxiv.org/abs/2007.14062) and released in this [repository](https://github.com/google-research/bigbird). Documents longer than 4096 tokens were split into multiple documents, while documents much shorter than 4096 were merged using the [SEP] token. The model is warm-started from `distilbert-fa`’s [checkpoint](https://huggingface.co/HooshvareLab/distilbert-fa-zwnj-base).
- For more details, you can take a look at config.json at the model card in 🤗 Model Hub
## Fine Tuning Recommendations: 🐤
Due to the model's memory requirements, `gradient_checkpointing` and `gradient_accumulation` should be used to maintain a reasonable batch size. Since this model isn't very large, it's a good idea to first fine-tune it on your dataset with the masked LM objective (also called intermediate fine-tuning) before training on the main task. In `block_sparse` mode, the length of the input doesn't matter much: each token only attends to about 256 tokens. For inputs of up to 512 tokens, `original_full` attention should be used instead of block-sparse. A minimal setup is sketched below.
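The following is a rough sketch of such a setup with the 🤗 `Trainer`; the sequence-classification head, the dataset, and all hyperparameter values are illustrative placeholders rather than recommended settings:
```python
from transformers import (
    AutoTokenizer,
    BigBirdForSequenceClassification,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "SajjadAyoubi/distil-bigbird-fa-zwnj"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# pass attention_type="original_full" here instead if your inputs are at most 512 tokens
model = BigBirdForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.gradient_checkpointing_enable()  # trade extra compute for a smaller memory footprint

args = TrainingArguments(
    output_dir="bigbird-fa-finetuned",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,  # effective batch size of 16
    learning_rate=2e-5,
    num_train_epochs=3,
)
# trainer = Trainer(model=model, args=args, train_dataset=your_tokenized_dataset)
# trainer.train()
```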
### Fine Tuning Examples 👷♂️👷♀️
| Dataset | Fine Tuning Example |
| ------------------------------------- | ------------------------------------------------------------ |
| Digikala Magazine Text Classification | <a href="https://colab.research.google.com/github/sajjjadayobi/PersianQA/blob/main/notebooks/Demo.ipynb"><img src="https://img.shields.io/static/v1?label=Colab&message=Fine-tuning Example&logo=Google%20Colab&color=f9ab00"></a> |
## Contact us: 🤝
If you have a technical question regarding the model, pretraining, code or publication, please create an issue in the repository. This is the fastest way to reach us.
## Citation: ↩️
We haven't published a paper on this work yet. However, if you use it in your research, please cite us with an entry like the one below.
```bibtex
@misc{ParsBigBird,
author = {Ayoubi, Sajjad},
title = {ParsBigBird: Persian Bert For Long-Range Sequences},
year = 2021,
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/SajjjadAyobi/ParsBigBird}},
}
```
|
Narrativaai/fake-news-detection-spanish | Narrativaai | 2021-10-28T11:03:28Z | 26 | 11 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"fake",
"news",
"competition",
"es",
"dataset:fakedes",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
language: es
tags:
- generated_from_trainer
- fake
- news
- competition
datasets:
- fakedes
widget:
- text: 'La palabra "haiga", aceptada por la RAE [SEP] La palabra "haiga", aceptada por la RAE La Real Academia de la Lengua (RAE), ha aceptado el uso de "HAIGA", para su utilización en las tres personas del singular del presente del subjuntivo del verbo hacer, aunque asegura que la forma más recomendable en la lengua culta para este tiempo, sigue siendo "haya".
Así lo han confirmado fuentes de la RAE, que explican que este cambio ha sido propuesto y aprobado por el pleno de la Academia de la Lengua, tras la extendida utilización por todo el territorio nacional, sobre todo, empleado por personas carentes de estudios o con estudios básicos de graduado escolar. Ya no será objeto de burla ese compañero que a diario repite aquello de "Mientras que haiga faena, no podemos quejarnos" o esa abuela que repite aquello de "El que haiga sacao los juguetes, que los recoja".
Entre otras palabras novedosas que ha aceptado la RAE, contamos también con "Descambiar", significa deshacer un cambio, por ejemplo "devolver la compra". Visto lo visto, nadie apostaría que la palabra "follamigos" sea la siguiente de la lista.'
metrics:
- f1
- accuracy
model-index:
- name: roberta-large-fake-news-detection-spanish
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RoBERTa-large-fake-news-detection-spanish
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-large-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne) on an [Spanish Fake News Dataset](https://sites.google.com/view/iberlef2020/#h.p_w0c31bn0r-SW).
It achieves the following results on the evaluation set:
- Loss: 1.7474
- F1: **0.7717**
- Accuracy: 0.7797
> Based on the [leaderboard](https://sites.google.com/view/fakedes/results?authuser=0), our model **outperforms** the previous best model (which scores F1 = 0.7666).
## Model description
RoBERTa-large-bne is a transformer-based masked language model for the Spanish language. It is based on the RoBERTa large model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the National Library of Spain (Biblioteca Nacional de España) from 2009 to 2019.
## Intended uses & limitations
The objective of this task is to decide if a news item is fake or real by analyzing its textual representation.
## Training and evaluation data
**FakeDeS**: [Fake News Detection in Spanish Shared Task](https://sites.google.com/view/fakedes/home)
Fake news provides information that aims to manipulate people for different purposes: terrorism, political elections, advertisement, satire, among others. In social networks, misinformation spreads within seconds among thousands of people, so it is necessary to develop tools that help control the amount of false information on the web. Related tasks include detecting the popularity of content in social networks and detecting the subjectivity of messages in these media. A fake news detection system aims to help users detect and filter out potentially deceptive news. The prediction of intentionally misleading news is based on the analysis of previously reviewed truthful and fraudulent news, i.e., annotated corpora.
The Spanish Fake News Corpus is a collection of news items compiled from several web sources: established newspaper websites, media company websites, special websites dedicated to validating fake news, and websites designated by different journalists as sites that regularly publish fake news. The news items were collected from January to July of 2018 and all of them were written in Mexican Spanish.
The corpus contains 971 news items collected from January to July 2018 from different sources:
- Established newspapers websites,
- Media companies websites,
- Special websites dedicated to validating fake news,
- Websites designated by different journalists as sites that regularly publish fake news.
The corpus was tagged considering only two classes (true or fake), following a manual labeling process:
- A news item is considered true if there is evidence that it has been published on reliable sites.
- A news item is considered fake if it is contradicted by news from reliable sites or by specialized fake-news-detection websites, or if no evidence about it could be found beyond the original source.
- True/fake news pairs about the same event were collected, so related news items in the corpus are correlated.
In order to avoid topic bias, the corpus covers news from 9 different topics: Science, Sport, Economy, Education, Entertainment, Politics, Health, Security, and Society. The number of fake and true news items is quite balanced. Approximately 70% is used as the training corpus (676 news items) and the remaining 30% as the test corpus (295 news items).
The training corpus contains the following information:
- Category: Fake/ True
- Topic: Science/ Sport/ Economy/ Education/ Entertainment/ Politics/ Health/ Security/ Society
- Headline: The title of the news.
- Text: The complete text of the news.
- Link: The URL where the news was published.
## Training procedure
TBA
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| No log | 1.0 | 243 | 0.6282 | 0.7513 | 0.75 |
| No log | 2.0 | 486 | 0.9600 | 0.7346 | 0.7587 |
| 0.5099 | 3.0 | 729 | 1.2128 | 0.7656 | 0.7570 |
| 0.5099 | 4.0 | 972 | 1.4001 | 0.7606 | 0.7622 |
| 0.1949 | 5.0 | 1215 | 1.9748 | 0.6475 | 0.7220 |
| 0.1949 | 6.0 | 1458 | 1.7386 | 0.7706 | 0.7710 |
| 0.0263 | 7.0 | 1701 | 1.7474 | 0.7717 | 0.7797 |
| 0.0263 | 8.0 | 1944 | 1.8114 | 0.7695 | 0.7780 |
| 0.0046 | 9.0 | 2187 | 1.8444 | 0.7709 | 0.7797 |
| 0.0046 | 10.0 | 2430 | 1.8552 | 0.7709 | 0.7797 |
### Fast usage with HF `pipelines`
```python
from transformers import pipeline
ckpt = "Narrativaai/fake-news-detection-spanish"
classifier = pipeline("text-classification", model=ckpt)
headline = "Your headline"
text = "Your article text here..."
classifier(headline + " [SEP] " + text)
```
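The pipeline returns a list of dictionaries with `label` and `score` fields; a short sketch of reading the prediction (the exact label strings depend on the checkpoint's config and are not stated in this card):
```python
result = classifier(headline + " [SEP] " + text)[0]
print(result["label"], round(result["score"], 3))
```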
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
Created by: [Narrativa](https://www.narrativa.com/)
About Narrativa: Natural Language Generation (NLG) | Gabriele, our machine learning-based platform, builds and deploys natural language solutions. #NLG #AI |
anton-l/sew-mid-100k-ft-common-language | anton-l | 2021-10-28T10:52:41Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"sew",
"audio-classification",
"generated_from_trainer",
"dataset:common_language",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
datasets:
- common_language
metrics:
- accuracy
model-index:
- name: sew-mid-100k-ft-common-language
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sew-mid-100k-ft-common-language
This model is a fine-tuned version of [asapp/sew-mid-100k](https://huggingface.co/asapp/sew-mid-100k) on the common_language dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1189
- Accuracy: 0.3842
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 4
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.608 | 1.0 | 173 | 3.7266 | 0.0540 |
| 3.1298 | 2.0 | 346 | 3.2180 | 0.1654 |
| 2.8481 | 3.0 | 519 | 2.9270 | 0.2019 |
| 2.648 | 4.0 | 692 | 2.6991 | 0.2619 |
| 2.5 | 5.0 | 865 | 2.5236 | 0.3004 |
| 2.2578 | 6.0 | 1038 | 2.4019 | 0.3212 |
| 2.2782 | 7.0 | 1211 | 2.1698 | 0.3658 |
| 2.1665 | 8.0 | 1384 | 2.1976 | 0.3631 |
| 2.1626 | 9.0 | 1557 | 2.1473 | 0.3791 |
| 2.1514 | 10.0 | 1730 | 2.1189 | 0.3842 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
furyhawk/t5-small-finetuned-bbc-headline | furyhawk | 2021-10-28T08:35:00Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-finetuned-bbc-headline
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-bbc-headline
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 167 | 3.6454 | 22.4311 | 5.9878 | 20.118 | 20.482 | 18.9009 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
|
quangtran199hust/layoutlmv2_e | quangtran199hust | 2021-10-28T08:17:21Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"generated_from_trainer",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: layoutlmv2_e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2_e
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 300
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.0+cu101
- Tokenizers 0.10.3
|
quangtran199hust/layoutlmv2_roige | quangtran199hust | 2021-10-28T07:32:00Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"generated_from_trainer",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: layoutlmv2_roige
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2_roige
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.0+cu101
- Datasets 1.14.0
- Tokenizers 0.10.3
|
aditeyabaral/sentencetransformer-indic-bert | aditeyabaral | 2021-10-28T02:17:50Z | 8 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"albert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# aditeyabaral/sentencetransformer-indic-bert
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('aditeyabaral/sentencetransformer-indic-bert')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-indic-bert')
model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-indic-bert')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-indic-bert)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 9234 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
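To illustrate how these parameters map onto the `sentence-transformers` API, here is a rough sketch of the fit call; the training pairs are placeholders, since the actual training data is not stated in this card:
```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

model = SentenceTransformer("aditeyabaral/sentencetransformer-indic-bert")

# placeholder pairs with similarity labels in [0, 1]
train_examples = [
    InputExample(texts=["This is an example sentence", "Each sentence is converted"], label=0.8),
    InputExample(texts=["This is an example sentence", "An unrelated sentence"], label=0.1),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    warmup_steps=100,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```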
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: AlbertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
patrickvonplaten/sew-d-mid-400k-librispeech-clean-100h-ft | patrickvonplaten | 2021-10-27T23:44:33Z | 6 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"sew-d",
"automatic-speech-recognition",
"librispeech_asr",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- librispeech_asr
- generated_from_trainer
model-index:
- name: sew-d-mid-400k-librispeech-clean-100h-ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sew-d-mid-400k-librispeech-clean-100h-ft
This model is a fine-tuned version of [asapp/sew-d-mid-400k](https://huggingface.co/asapp/sew-d-mid-400k) on the LIBRISPEECH_ASR - CLEAN dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3540
- Wer: 1.0536
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 7.319 | 0.11 | 100 | 11.0572 | 1.0 |
| 3.6726 | 0.22 | 200 | 4.2003 | 1.0 |
| 2.981 | 0.34 | 300 | 3.5742 | 0.9919 |
| 2.9411 | 0.45 | 400 | 3.2599 | 1.0 |
| 2.903 | 0.56 | 500 | 2.9350 | 1.0 |
| 2.8597 | 0.67 | 600 | 2.9514 | 1.0 |
| 2.7771 | 0.78 | 700 | 2.8521 | 1.0 |
| 2.7926 | 0.9 | 800 | 2.7821 | 1.0120 |
| 2.6623 | 1.01 | 900 | 2.7027 | 0.9924 |
| 2.5893 | 1.12 | 1000 | 2.6667 | 1.0240 |
| 2.5733 | 1.23 | 1100 | 2.6341 | 1.0368 |
| 2.5455 | 1.35 | 1200 | 2.5928 | 1.0411 |
| 2.4919 | 1.46 | 1300 | 2.5695 | 1.0817 |
| 2.5182 | 1.57 | 1400 | 2.5559 | 1.1072 |
| 2.4766 | 1.68 | 1500 | 2.5229 | 1.1257 |
| 2.4267 | 1.79 | 1600 | 2.4991 | 1.1151 |
| 2.3919 | 1.91 | 1700 | 2.4768 | 1.1139 |
| 2.3883 | 2.02 | 1800 | 2.4452 | 1.0636 |
| 2.3737 | 2.13 | 1900 | 2.4304 | 1.0594 |
| 2.3569 | 2.24 | 2000 | 2.4095 | 1.0539 |
| 2.3641 | 2.35 | 2100 | 2.3997 | 1.0511 |
| 2.3281 | 2.47 | 2200 | 2.3856 | 1.0414 |
| 2.2912 | 2.58 | 2300 | 2.3750 | 1.0696 |
| 2.3028 | 2.69 | 2400 | 2.3684 | 1.0436 |
| 2.2906 | 2.8 | 2500 | 2.3613 | 1.0538 |
| 2.2822 | 2.91 | 2600 | 2.3558 | 1.0506 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.13.4.dev0
- Tokenizers 0.10.3
|
anton-l/hubert-base-ft-keyword-spotting | anton-l | 2021-10-27T22:34:38Z | 7 | 2 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:superb",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: hubert-base-ft-keyword-spotting
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert-base-ft-keyword-spotting
This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0774
- Accuracy: 0.9819
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0422 | 1.0 | 399 | 0.8999 | 0.6918 |
| 0.3296 | 2.0 | 798 | 0.1505 | 0.9778 |
| 0.2088 | 3.0 | 1197 | 0.0901 | 0.9816 |
| 0.202 | 4.0 | 1596 | 0.0848 | 0.9813 |
| 0.1535 | 5.0 | 1995 | 0.0774 | 0.9819 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
jwuthri/autonlp-shipping_status_2-27366103 | jwuthri | 2021-10-27T21:34:42Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autonlp",
"unk",
"dataset:jwuthri/autonlp-data-shipping_status_2",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- jwuthri/autonlp-data-shipping_status_2
co2_eq_emissions: 32.912881644048
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 27366103
- CO2 Emissions (in grams): 32.912881644048
## Validation Metrics
- Loss: 0.18175844848155975
- Accuracy: 0.9437683592110785
- Precision: 0.9416809605488851
- Recall: 0.8459167950693375
- AUC: 0.9815242330050846
- F1: 0.8912337662337663
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/jwuthri/autonlp-shipping_status_2-27366103
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("jwuthri/autonlp-shipping_status_2-27366103", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("jwuthri/autonlp-shipping_status_2-27366103", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
huggingtweets/void_vomicae | huggingtweets | 2021-10-27T21:01:11Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/void_vomicae/1635368467642/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1452295981517742087/v8HfhHLT_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">《 𝚟 o̶ 𝚒 𝚍 》</div>
<div style="text-align: center; font-size: 14px;">@void_vomicae</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 《 𝚟 o̶ 𝚒 𝚍 》.
| Data | 《 𝚟 o̶ 𝚒 𝚍 》 |
| --- | --- |
| Tweets downloaded | 2083 |
| Retweets | 417 |
| Short tweets | 422 |
| Tweets kept | 1244 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/fju0lp9t/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @void_vomicae's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1wos3ytc) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1wos3ytc/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/void_vomicae')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
prajjwal1/bert-medium | prajjwal1 | 2021-10-27T18:30:16Z | 37,177 | 3 | transformers | [
"transformers",
"pytorch",
"BERT",
"MNLI",
"NLI",
"transformer",
"pre-training",
"en",
"arxiv:1908.08962",
"arxiv:2110.01518",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language:
- en
license:
- mit
tags:
- BERT
- MNLI
- NLI
- transformer
- pre-training
---
The following model is a PyTorch pre-trained model obtained by converting the TensorFlow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert).
This is one of the smaller pre-trained BERT variants, together with [bert-tiny](https://huggingface.co/prajjwal1/bert-tiny), [bert-mini](https://huggingface.co/prajjwal1/bert-mini) and [bert-small](https://huggingface.co/prajjwal1/bert-small). They were introduced in the study `Well-Read Students Learn Better: On the Importance of Pre-training Compact Models` ([arxiv](https://arxiv.org/abs/1908.08962)), and ported to HF for the study `Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics` ([arXiv](https://arxiv.org/abs/2110.01518)). These models are meant to be fine-tuned on a downstream task.
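As a minimal sketch, the checkpoint can be loaded with a task head for fine-tuning (the two-label head and the input sentence are illustrative, and the hosted tokenizer files are assumed to be available):
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("prajjwal1/bert-medium")
model = AutoModelForSequenceClassification.from_pretrained("prajjwal1/bert-medium", num_labels=2)

inputs = tokenizer("This compact BERT still needs task-specific fine-tuning.", return_tensors="pt")
logits = model(**inputs).logits  # head is randomly initialized: fine-tune before trusting these
print(logits.shape)
```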
If you use the model, please consider citing both the papers:
```
@misc{bhargava2021generalization,
title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics},
author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers},
year={2021},
eprint={2110.01518},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@article{DBLP:journals/corr/abs-1908-08962,
author = {Iulia Turc and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {Well-Read Students Learn Better: The Impact of Student Initialization
on Knowledge Distillation},
journal = {CoRR},
volume = {abs/1908.08962},
year = {2019},
url = {http://arxiv.org/abs/1908.08962},
eprinttype = {arXiv},
eprint = {1908.08962},
timestamp = {Thu, 29 Aug 2019 16:32:34 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1908-08962.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Config of this model:
- `prajjwal1/bert-medium` (L=8, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-medium)
Other models to check out:
- `prajjwal1/bert-tiny` (L=2, H=128) [Model Link](https://huggingface.co/prajjwal1/bert-tiny)
- `prajjwal1/bert-mini` (L=4, H=256) [Model Link](https://huggingface.co/prajjwal1/bert-mini)
- `prajjwal1/bert-small` (L=4, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-small)
Original Implementation and more info can be found in [this Github repository](https://github.com/prajjwal1/generalize_lm_nli).
Twitter: [@prajjwal_1](https://twitter.com/prajjwal_1)
|
Michael711/feinschwarz | Michael711 | 2021-10-27T18:28:16Z | 11 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"de",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:04Z | ---
license: mit
tags:
- generated_from_trainer
- de
model-index:
- name: feinesblack
results: []
---
# feinschwarz
This model is a fine-tuned version of [dbmdz/german-gpt2](https://huggingface.co/dbmdz/german-gpt2). The dataset was compiled from all texts of https://www.feinschwarz.net (as of October 2021). The homepage gathers essayistic texts on theological topics.
The model will be used to explore the challenges of text-generating AI for theology with a hands-on approach. Can an AI generate theological knowledge? Is a text by Karl Rahner of more value than an AI-generated text? Will we even be able to distinguish a Rahner text from an AI-generated text in the future? And the crucial question: would it be bad if not?
The model is a first attempt and, in its current version, certainly not yet a threat to academic theology 🤓
# Using the model
You can create text with the model using this code:
```python
from transformers import pipeline
pipe = pipeline('text-generation', model="Michael711/feinschwarz",
tokenizer="Michael711/feinschwarz")
text = pipe("Der Sinn des Lebens ist es", max_length=100)[0]["generated_text"]
print(text)
```
Have fun theologizing! |
prajjwal1/bert-mini | prajjwal1 | 2021-10-27T18:27:38Z | 98,112 | 20 | transformers | [
"transformers",
"pytorch",
"BERT",
"MNLI",
"NLI",
"transformer",
"pre-training",
"en",
"arxiv:1908.08962",
"arxiv:2110.01518",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language:
- en
license:
- mit
tags:
- BERT
- MNLI
- NLI
- transformer
- pre-training
---
The following model is a PyTorch pre-trained model obtained by converting the TensorFlow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert).
This is one of the smaller pre-trained BERT variants, together with [bert-small](https://huggingface.co/prajjwal1/bert-small) and [bert-medium](https://huggingface.co/prajjwal1/bert-medium). They were introduced in the study `Well-Read Students Learn Better: On the Importance of Pre-training Compact Models` ([arxiv](https://arxiv.org/abs/1908.08962)), and ported to HF for the study `Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics` ([arXiv](https://arxiv.org/abs/2110.01518)). These models are meant to be fine-tuned on a downstream task.
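As a quick sketch, the checkpoint can also be loaded as a plain encoder to inspect its hidden size (tokenizer availability on the Hub is assumed):
```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("prajjwal1/bert-mini")
model = AutoModel.from_pretrained("prajjwal1/bert-mini")

inputs = tokenizer("A tiny encoder for quick experiments.", return_tensors="pt")
hidden = model(**inputs).last_hidden_state
print(hidden.shape[-1])  # 256, matching H=256 for bert-mini
```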
If you use the model, please consider citing both the papers:
```
@misc{bhargava2021generalization,
title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics},
author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers},
year={2021},
eprint={2110.01518},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@article{DBLP:journals/corr/abs-1908-08962,
author = {Iulia Turc and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {Well-Read Students Learn Better: The Impact of Student Initialization
on Knowledge Distillation},
journal = {CoRR},
volume = {abs/1908.08962},
year = {2019},
url = {http://arxiv.org/abs/1908.08962},
eprinttype = {arXiv},
eprint = {1908.08962},
timestamp = {Thu, 29 Aug 2019 16:32:34 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1908-08962.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Config of this model:
`prajjwal1/bert-mini` (L=4, H=256) [Model Link](https://huggingface.co/prajjwal1/bert-mini)
Other models to check out:
- `prajjwal1/bert-tiny` (L=2, H=128) [Model Link](https://huggingface.co/prajjwal1/bert-tiny)
- `prajjwal1/bert-small` (L=4, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-small)
- `prajjwal1/bert-medium` (L=8, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-medium)
Original Implementation and more info can be found in [this Github repository](https://github.com/prajjwal1/generalize_lm_nli).
Twitter: [@prajjwal_1](https://twitter.com/prajjwal_1)
|
patrickvonplaten/sew-d-small-100k-timit | patrickvonplaten | 2021-10-27T17:15:26Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"sew-d",
"automatic-speech-recognition",
"timit_asr",
"generated_from_trainer",
"dataset:timit_asr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- timit_asr
- generated_from_trainer
datasets:
- timit_asr
model-index:
- name: sew-d-small-100k-timit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sew-d-small-100k-timit
This model is a fine-tuned version of [asapp/sew-d-small-100k](https://huggingface.co/asapp/sew-d-small-100k) on the TIMIT_ASR - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7541
- Wer: 0.8061
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.2068 | 0.69 | 100 | 4.0802 | 1.0 |
| 2.9805 | 1.38 | 200 | 2.9792 | 1.0 |
| 2.9781 | 2.07 | 300 | 2.9408 | 1.0 |
| 2.9655 | 2.76 | 400 | 2.9143 | 1.0 |
| 2.8953 | 3.45 | 500 | 2.8775 | 1.0 |
| 2.7718 | 4.14 | 600 | 2.7787 | 1.0 |
| 2.6711 | 4.83 | 700 | 2.6401 | 0.9786 |
| 2.6403 | 5.52 | 800 | 2.5435 | 1.0392 |
| 2.4052 | 6.21 | 900 | 2.4580 | 1.0706 |
| 2.1708 | 6.9 | 1000 | 2.2800 | 1.0090 |
| 2.2555 | 7.59 | 1100 | 2.1493 | 0.9579 |
| 2.3673 | 8.28 | 1200 | 2.0709 | 0.9051 |
| 2.091 | 8.97 | 1300 | 2.0258 | 0.8926 |
| 1.8433 | 9.66 | 1400 | 1.9645 | 0.8243 |
| 1.6824 | 10.34 | 1500 | 1.9211 | 0.8707 |
| 2.2282 | 11.03 | 1600 | 1.8914 | 0.8695 |
| 1.9027 | 11.72 | 1700 | 1.8718 | 0.8343 |
| 1.6303 | 12.41 | 1800 | 1.8646 | 0.8232 |
| 1.648 | 13.1 | 1900 | 1.8297 | 0.8177 |
| 2.0429 | 13.79 | 2000 | 1.8127 | 0.8642 |
| 1.8833 | 14.48 | 2100 | 1.8005 | 0.8307 |
| 1.5996 | 15.17 | 2200 | 1.7926 | 0.8467 |
| 1.4876 | 15.86 | 2300 | 1.7795 | 0.8341 |
| 1.8925 | 16.55 | 2400 | 1.7716 | 0.8199 |
| 1.814 | 17.24 | 2500 | 1.7846 | 0.8086 |
| 1.536 | 17.93 | 2600 | 1.7655 | 0.8019 |
| 1.4476 | 18.62 | 2700 | 1.7599 | 0.8070 |
| 1.7629 | 19.31 | 2800 | 1.7589 | 0.8119 |
| 1.7646 | 20.0 | 2900 | 1.7541 | 0.8061 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.1
- Datasets 1.14.1.dev0
- Tokenizers 0.10.3
|
patrickvonplaten/wav2vec2-large-xlsr-129-turkish-colab | patrickvonplaten | 2021-10-27T17:08:13Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xlsr-129-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-129-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-129](https://huggingface.co/facebook/wav2vec2-large-xlsr-129) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3149
- Wer: 0.4748
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.4837 | 3.67 | 400 | 3.2526 | 1.0 |
| 3.0896 | 7.34 | 800 | 2.8037 | 1.0 |
| 1.5604 | 11.01 | 1200 | 0.5688 | 0.6613 |
| 0.6511 | 14.68 | 1600 | 0.3998 | 0.5580 |
| 0.4798 | 18.35 | 2000 | 0.3505 | 0.5118 |
| 0.4047 | 22.02 | 2400 | 0.3273 | 0.4858 |
| 0.3519 | 25.69 | 2800 | 0.3224 | 0.4796 |
| 0.343 | 29.36 | 3200 | 0.3149 | 0.4748 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu102
- Datasets 1.13.3
- Tokenizers 0.10.3
|
suwani/BERT_NER_Ep5_PAD_50-finetuned-ner | suwani | 2021-10-27T13:13:15Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: BERT_NER_Ep5_PAD_50-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_NER_Ep5_PAD_50-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3893
- Precision: 0.6540
- Recall: 0.7348
- F1: 0.6920
- Accuracy: 0.9006
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 288 | 0.3705 | 0.5852 | 0.6215 | 0.6028 | 0.8793 |
| 0.4885 | 2.0 | 576 | 0.3351 | 0.5925 | 0.7317 | 0.6548 | 0.8865 |
| 0.4885 | 3.0 | 864 | 0.3196 | 0.6471 | 0.7138 | 0.6788 | 0.8994 |
| 0.2172 | 4.0 | 1152 | 0.3368 | 0.6454 | 0.7323 | 0.6861 | 0.8992 |
| 0.2172 | 5.0 | 1440 | 0.3491 | 0.6507 | 0.7312 | 0.6886 | 0.9008 |
| 0.1459 | 6.0 | 1728 | 0.3833 | 0.6715 | 0.7018 | 0.6863 | 0.9013 |
| 0.1045 | 7.0 | 2016 | 0.3893 | 0.6540 | 0.7348 | 0.6920 | 0.9006 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
doc2query/yahoo_answers-t5-base-v1 | doc2query | 2021-10-27T12:56:48Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"arxiv:1904.08375",
"arxiv:2104.08663",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- datasets/sentence-transformers/embedding-training-data
widget:
- text: "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
license: apache-2.0
---
# doc2query/yahoo_answers-t5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on T5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for each of your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, document expansion re-weights terms, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/UKPLab/beir) we have an example of how to use docT5query with Pyserini. A short expansion sketch follows the usage example below.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. On [SBERT.net](https://www.sbert.net/examples/unsupervised_learning/query_generation/README.html) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
model_name = 'doc2query/yahoo_answers-t5-base-v1'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
text = "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
input_ids = tokenizer.encode(text, max_length=320, truncation=True, return_tensors='pt')
outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
num_return_sequences=5)
print("Text:")
print(text)
print("\nGenerated Queries:")
for i in range(len(outputs)):
query = tokenizer.decode(outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
```
**Note:** `model.generate()` is non-deterministic. It produces different queries each time you run it.
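Continuing the example above, a minimal sketch of the document-expansion use case: the generated queries are simply appended to the passage before handing it to a BM25 index (the indexing itself is left to your search engine of choice):
```python
# append the generated queries to the original passage before BM25 indexing
expanded_passage = text + " " + " ".join(
    tokenizer.decode(output, skip_special_tokens=True) for output in outputs
)
print(expanded_passage)
```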
## Training
This model was created by fine-tuning [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) for 111k training steps. For the training script, see `train_script.py` in this repository.
The input text was truncated to 320 word pieces. Output text was generated with up to 64 word pieces.
This model was trained on (title, answer) pairs from [Yahoo Answers](https://huggingface.co/datasets/sentence-transformers/embedding-training-data).
|
patrickvonplaten/unispeech-sat-base-timit-ft | patrickvonplaten | 2021-10-27T10:51:18Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"unispeech-sat",
"automatic-speech-recognition",
"timit_asr",
"generated_from_trainer",
"dataset:timit_asr",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
tags:
- automatic-speech-recognition
- timit_asr
- generated_from_trainer
datasets:
- timit_asr
model-index:
- name: unispeech-sat-base-timit-ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# unispeech-sat-base-timit-ft
This model is a fine-tuned version of [microsoft/unispeech-sat-base](https://huggingface.co/microsoft/unispeech-sat-base) on the TIMIT_ASR - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6712
- Wer: 0.4101
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.2582 | 0.69 | 100 | 3.1651 | 1.0 |
| 2.9542 | 1.38 | 200 | 2.9567 | 1.0 |
| 2.9656 | 2.07 | 300 | 2.9195 | 1.0 |
| 2.8946 | 2.76 | 400 | 2.8641 | 1.0 |
| 1.9305 | 3.45 | 500 | 1.7680 | 1.0029 |
| 1.0134 | 4.14 | 600 | 1.0184 | 0.6942 |
| 0.8355 | 4.83 | 700 | 0.7769 | 0.6080 |
| 0.8724 | 5.52 | 800 | 0.7182 | 0.6035 |
| 0.5619 | 6.21 | 900 | 0.6823 | 0.5406 |
| 0.4247 | 6.9 | 1000 | 0.6279 | 0.5237 |
| 0.4257 | 7.59 | 1100 | 0.6056 | 0.5000 |
| 0.5007 | 8.28 | 1200 | 0.5870 | 0.4918 |
| 0.3854 | 8.97 | 1300 | 0.6200 | 0.4804 |
| 0.264 | 9.66 | 1400 | 0.6030 | 0.4600 |
| 0.1989 | 10.34 | 1500 | 0.6049 | 0.4588 |
| 0.3196 | 11.03 | 1600 | 0.5946 | 0.4599 |
| 0.2622 | 11.72 | 1700 | 0.6282 | 0.4422 |
| 0.1697 | 12.41 | 1800 | 0.6559 | 0.4413 |
| 0.1464 | 13.1 | 1900 | 0.6349 | 0.4328 |
| 0.2277 | 13.79 | 2000 | 0.6133 | 0.4284 |
| 0.221 | 14.48 | 2100 | 0.6617 | 0.4219 |
| 0.1391 | 15.17 | 2200 | 0.6705 | 0.4235 |
| 0.112 | 15.86 | 2300 | 0.6207 | 0.4218 |
| 0.1717 | 16.55 | 2400 | 0.6749 | 0.4184 |
| 0.2081 | 17.24 | 2500 | 0.6756 | 0.4169 |
| 0.1244 | 17.93 | 2600 | 0.6750 | 0.4181 |
| 0.0978 | 18.62 | 2700 | 0.6500 | 0.4115 |
| 0.128 | 19.31 | 2800 | 0.6750 | 0.4106 |
| 0.1791 | 20.0 | 2900 | 0.6712 | 0.4101 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.1
- Datasets 1.14.1.dev0
- Tokenizers 0.10.3
|
patrickvonplaten/unispeech-large-1500h-cv-timit | patrickvonplaten | 2021-10-27T10:50:16Z | 5,699 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"unispeech",
"automatic-speech-recognition",
"timit_asr",
"generated_from_trainer",
"dataset:timit_asr",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
tags:
- automatic-speech-recognition
- timit_asr
- generated_from_trainer
datasets:
- timit_asr
model-index:
- name: unispeech-large-1500h-cv-timit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# unispeech-large-1500h-cv-timit
This model is a fine-tuned version of [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) on the TIMIT_ASR - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3099
- Wer: 0.2196
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.64 | 0.69 | 100 | 3.9717 | 0.9981 |
| 2.6793 | 1.38 | 200 | 2.6264 | 1.0 |
| 1.2221 | 2.07 | 300 | 0.9999 | 0.7167 |
| 0.9009 | 2.76 | 400 | 0.6509 | 0.5570 |
| 0.4352 | 3.45 | 500 | 0.4682 | 0.4332 |
| 0.227 | 4.14 | 600 | 0.3661 | 0.3565 |
| 0.2169 | 4.83 | 700 | 0.3244 | 0.3203 |
| 0.2687 | 5.52 | 800 | 0.3137 | 0.2981 |
| 0.127 | 6.21 | 900 | 0.3220 | 0.2828 |
| 0.0922 | 6.9 | 1000 | 0.3075 | 0.2708 |
| 0.0965 | 7.59 | 1100 | 0.2779 | 0.2576 |
| 0.1298 | 8.28 | 1200 | 0.3111 | 0.2480 |
| 0.0855 | 8.97 | 1300 | 0.3021 | 0.2421 |
| 0.0629 | 9.66 | 1400 | 0.3122 | 0.2511 |
| 0.0471 | 10.34 | 1500 | 0.2965 | 0.2368 |
| 0.0871 | 11.03 | 1600 | 0.3247 | 0.2387 |
| 0.0503 | 11.72 | 1700 | 0.3359 | 0.2363 |
| 0.0402 | 12.41 | 1800 | 0.2976 | 0.2332 |
| 0.0336 | 13.1 | 1900 | 0.3139 | 0.2321 |
| 0.0634 | 13.79 | 2000 | 0.3188 | 0.2309 |
| 0.0429 | 14.48 | 2100 | 0.3145 | 0.2335 |
| 0.028 | 15.17 | 2200 | 0.3244 | 0.2242 |
| 0.0255 | 15.86 | 2300 | 0.2914 | 0.2196 |
| 0.0406 | 16.55 | 2400 | 0.3249 | 0.2202 |
| 0.0512 | 17.24 | 2500 | 0.3037 | 0.2198 |
| 0.0269 | 17.93 | 2600 | 0.3218 | 0.2242 |
| 0.0287 | 18.62 | 2700 | 0.3106 | 0.2185 |
| 0.0319 | 19.31 | 2800 | 0.3124 | 0.2217 |
| 0.0494 | 20.0 | 2900 | 0.3099 | 0.2196 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.1
- Datasets 1.14.1.dev0
- Tokenizers 0.10.3
|
patrickvonplaten/wav2vec2-base-timit-fine-tuned | patrickvonplaten | 2021-10-27T10:49:08Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"timit_asr",
"generated_from_trainer",
"dataset:timit_asr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- timit_asr
- generated_from_trainer
datasets:
- timit_asr
model-index:
- name: wav2vec2-base-timit-fine-tuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-fine-tuned
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the TIMIT_ASR - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3457
- Wer: 0.2151
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.1621 | 0.69 | 100 | 3.1102 | 1.0 |
| 2.9592 | 1.38 | 200 | 2.9603 | 1.0 |
| 2.9116 | 2.07 | 300 | 2.8921 | 1.0 |
| 2.1332 | 2.76 | 400 | 1.9718 | 0.9958 |
| 0.8477 | 3.45 | 500 | 0.7813 | 0.5237 |
| 0.4251 | 4.14 | 600 | 0.5166 | 0.3982 |
| 0.3743 | 4.83 | 700 | 0.4400 | 0.3578 |
| 0.4194 | 5.52 | 800 | 0.4077 | 0.3370 |
| 0.23 | 6.21 | 900 | 0.4018 | 0.3142 |
| 0.1554 | 6.9 | 1000 | 0.3623 | 0.2995 |
| 0.1511 | 7.59 | 1100 | 0.3433 | 0.2697 |
| 0.1983 | 8.28 | 1200 | 0.3539 | 0.2715 |
| 0.1443 | 8.97 | 1300 | 0.3622 | 0.2551 |
| 0.0971 | 9.66 | 1400 | 0.3580 | 0.2519 |
| 0.0764 | 10.34 | 1500 | 0.3529 | 0.2437 |
| 0.1203 | 11.03 | 1600 | 0.3455 | 0.2431 |
| 0.0881 | 11.72 | 1700 | 0.3648 | 0.2415 |
| 0.0521 | 12.41 | 1800 | 0.3564 | 0.2320 |
| 0.0434 | 13.1 | 1900 | 0.3485 | 0.2270 |
| 0.0864 | 13.79 | 2000 | 0.3517 | 0.2228 |
| 0.0651 | 14.48 | 2100 | 0.3506 | 0.2285 |
| 0.0423 | 15.17 | 2200 | 0.3428 | 0.2247 |
| 0.0302 | 15.86 | 2300 | 0.3372 | 0.2198 |
| 0.0548 | 16.55 | 2400 | 0.3496 | 0.2196 |
| 0.0674 | 17.24 | 2500 | 0.3407 | 0.2166 |
| 0.0291 | 17.93 | 2600 | 0.3512 | 0.2171 |
| 0.0298 | 18.62 | 2700 | 0.3363 | 0.2158 |
| 0.0419 | 19.31 | 2800 | 0.3493 | 0.2145 |
| 0.046 | 20.0 | 2900 | 0.3457 | 0.2151 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.1
- Datasets 1.14.1.dev0
- Tokenizers 0.10.3
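## Example usage (sketch)
A minimal inference sketch, assuming the standard `transformers` automatic-speech-recognition pipeline and a 16 kHz mono recording; the file name `sample.wav` is a placeholder.
```python
from transformers import pipeline

# Load the fine-tuned CTC checkpoint through the ASR pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="patrickvonplaten/wav2vec2-base-timit-fine-tuned",
)

# "sample.wav" is a placeholder path to a 16 kHz mono speech recording.
print(asr("sample.wav")["text"])
```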
|
suwani/BERT_NER_Ep6_PAD_50-finetuned-ner | suwani | 2021-10-27T10:28:40Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: BERT_NER_Ep6_PAD_50-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_NER_Ep6_PAD_50-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3741
- Precision: 0.6510
- Recall: 0.7399
- F1: 0.6926
- Accuracy: 0.9020
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 288 | 0.3648 | 0.5949 | 0.5907 | 0.5928 | 0.8792 |
| 0.4815 | 2.0 | 576 | 0.3400 | 0.5860 | 0.7390 | 0.6536 | 0.8867 |
| 0.4815 | 3.0 | 864 | 0.3217 | 0.6404 | 0.7129 | 0.6747 | 0.8992 |
| 0.2206 | 4.0 | 1152 | 0.3430 | 0.6413 | 0.7321 | 0.6837 | 0.8995 |
| 0.2206 | 5.0 | 1440 | 0.3560 | 0.6464 | 0.7377 | 0.6890 | 0.9010 |
| 0.1487 | 6.0 | 1728 | 0.3741 | 0.6510 | 0.7399 | 0.6926 | 0.9020 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
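## Example usage (sketch)
A minimal sketch assuming the standard `transformers` token-classification pipeline; the sentence is arbitrary and the entity label set depends on the (unknown) training data.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="suwani/BERT_NER_Ep6_PAD_50-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

# Arbitrary example sentence; the predicted labels come from the unknown training set.
print(ner("Barack Obama visited Paris last week."))
```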
|
doc2query/S2ORC-t5-base-v1 | doc2query | 2021-10-27T10:04:09Z | 35 | 4 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:S2ORC",
"arxiv:1904.08375",
"arxiv:2104.08663",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- S2ORC
widget:
- text: "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
license: apache-2.0
---
# doc2query/S2ORC-t5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on T5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, the expansion re-weights words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/UKPLab/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data for learning an embedding model. On [SBERT.net](https://www.sbert.net/examples/unsupervised_learning/query_generation/README.html) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
model_name = 'doc2query/S2ORC-t5-base-v1'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
text = "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
input_ids = tokenizer.encode(text, max_length=320, truncation=True, return_tensors='pt')
outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
num_return_sequences=5)
print("Text:")
print(text)
print("\nGenerated Queries:")
for i in range(len(outputs)):
query = tokenizer.decode(outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
```
**Note:** `model.generate()` is non-deterministic. It produces different queries each time you run it.
## Training
This model was obtained by fine-tuning [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) for 156k training steps. For the training script, see `train_script.py` in this repository.
The input text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.
This model was trained on (title, abstract) pairs from [S2ORC](https://github.com/allenai/s2orc).
|
doc2query/reddit-t5-base-v1 | doc2query | 2021-10-27T09:56:25Z | 4 | 1 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"arxiv:1904.08375",
"arxiv:2104.08663",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- datasets/sentence-transformers/reddit-title-body
widget:
- text: "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
license: apache-2.0
---
# doc2query/reddit-t5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on T5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, the expansion re-weights words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/UKPLab/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data for learning an embedding model. On [SBERT.net](https://www.sbert.net/examples/unsupervised_learning/query_generation/README.html) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
model_name = 'doc2query/reddit-t5-base-v1'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
text = "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
input_ids = tokenizer.encode(text, max_length=320, truncation=True, return_tensors='pt')
outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
num_return_sequences=5)
print("Text:")
print(text)
print("\nGenerated Queries:")
for i in range(len(outputs)):
query = tokenizer.decode(outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
```
**Note:** `model.generate()` is non-deterministic. It produces different queries each time you run it.
## Training
This model was obtained by fine-tuning [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) for 533k training steps. For the training script, see `train_script.py` in this repository.
The input text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.
This model was trained on (title, body) pairs from Reddit.
|
VariableZee/DialoGPT-small-ivylia03 | VariableZee | 2021-10-27T08:50:29Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
tags:
- conversational
---
|
espnet/pengcheng_guo_wenetspeech_asr_train_asr_raw_zh_char | espnet | 2021-10-27T02:55:53Z | 3 | 11 | espnet | [
"espnet",
"audio",
"automatic-speech-recognition",
"zh",
"dataset:wenetspeech",
"license:cc-by-4.0",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
tags:
- espnet
- audio
- automatic-speech-recognition
language: zh
datasets:
- wenetspeech
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/pengcheng_guo_wenetspeech_asr_train_asr_raw_zh_char`
This model was trained by Pengcheng Guo using the wenetspeech recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 5c21f63e45e0961a5d817017c282b0cafd68a3aa
pip install -e .
cd egs2/wenetspeech/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/pengcheng_guo_wenetspeech_asr_train_asr_raw_zh_char
```
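Alternatively, a minimal Python inference sketch, assuming the `espnet_model_zoo` package is installed and a 16 kHz mono recording is available (`speech.wav` is a placeholder; the decoding options are illustrative):
```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Download and build the model directly from the Hub via espnet_model_zoo.
speech2text = Speech2Text.from_pretrained(
    "espnet/pengcheng_guo_wenetspeech_asr_train_asr_raw_zh_char",
    ctc_weight=0.3,  # illustrative decoding options
    beam_size=10,
)

speech, rate = soundfile.read("speech.wav")  # 16 kHz mono; placeholder file name
text, *_ = speech2text(speech)[0]            # best hypothesis
print(text)
```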
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Wed Oct 6 15:11:20 CST 2021`
- python version: `3.8.11 (default, Aug 3 2021, 15:09:35) [GCC 7.5.0]`
- espnet version: `espnet 0.10.2a1`
- pytorch version: `pytorch 1.9.0`
- Git hash: ``
- Commit date: ``
## asr_train_asr_conformer_raw_zh_char
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_rnn_asr_model_valid.acc.ave_10best/aishell_test|7176|7176|67.1|32.9|0.0|0.1|33.0|32.9|
|decode_asr_rnn_asr_model_valid.acc.ave_10best/dev|13825|16684|32.1|54.1|13.8|0.1|68.0|64.2|
|decode_asr_rnn_asr_model_valid.acc.ave_10best/test_meeting|8370|8599|13.4|84.6|2.0|0.1|86.7|86.8|
|decode_asr_rnn_asr_model_valid.acc.ave_10best/test_net|24774|25995|46.2|50.4|3.4|1.1|54.9|52.5|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_rnn_asr_model_valid.acc.ave_10best/aishell_test|7176|104765|96.3|3.6|0.1|0.2|3.9|32.9|
|decode_asr_rnn_asr_model_valid.acc.ave_10best/dev|13825|333357|90.7|3.4|5.9|0.4|9.7|64.2|
|decode_asr_rnn_asr_model_valid.acc.ave_10best/test_meeting|8370|220614|84.6|5.0|10.4|0.5|15.9|86.8|
|decode_asr_rnn_asr_model_valid.acc.ave_10best/test_net|24774|416968|91.8|5.3|2.9|0.6|8.8|52.5|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
## ASR config
<details><summary>expand</summary>
```
config: conf/train_asr_conformer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_conformer_raw_zh_char
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 8
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 44205
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 30
patience: null
val_scheduler_criterion:
- valid
- acc
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
grad_clip: 5
grad_clip_type: 2.0
grad_noise: false
accum_grad: 4
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 30000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_zh_char/train/speech_shape
- exp/asr_stats_raw_zh_char/train/text_shape.char
valid_shape_file:
- exp/asr_stats_raw_zh_char/valid/speech_shape
- exp/asr_stats_raw_zh_char/valid/text_shape.char
batch_type: numel
valid_batch_type: null
fold_length:
- 51200
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_l/wav.scp
- speech
- sound
- - dump/raw/train_l/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- sound
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0015
scheduler: warmuplr
scheduler_conf:
warmup_steps: 30000
token_list:
- <blank>
- <unk>
- 的
- 我
- 是
- 你
- 了
- 一
- 不
- 这
- 个
- 有
- 就
- 们
- 在
- 他
- 人
- 么
- 来
- 说
- 那
- 要
- 好
- 啊
- 大
- 到
- 上
- 也
- 没
- 都
- 去
- 能
- 子
- 会
- 为
- 得
- 时
- 还
- 可
- 以
- 什
- 家
- 后
- 看
- 呢
- 对
- 事
- 天
- 下
- 过
- 想
- 多
- 小
- 出
- 自
- 儿
- 生
- 给
- 里
- 现
- 着
- 然
- 吧
- 样
- 道
- 吗
- 心
- 跟
- 中
- 很
- 点
- 年
- 和
- 地
- 怎
- 知
- 十
- 老
- 当
- 把
- 话
- 别
- 所
- 之
- 情
- 实
- 开
- 面
- 回
- 行
- 国
- 做
- 己
- 经
- 如
- 真
- 起
- 候
- 些
- 让
- 发
- 她
- 觉
- 但
- 成
- 定
- 意
- 二
- 长
- 最
- 方
- 三
- 前
- 因
- 用
- 呀
- 种
- 只
- 走
- 其
- 问
- 再
- 果
- 而
- 分
- 两
- 打
- 学
- 间
- 您
- 本
- 于
- 明
- 手
- 公
- 听
- 比
- 作
- 女
- 太
- 今
- 从
- 关
- 妈
- 同
- 法
- 动
- 已
- 见
- 才
- 孩
- 感
- 吃
- 常
- 次
- 它
- 进
- 先
- 找
- 身
- 全
- 理
- 又
- 力
- 正
- 主
- 应
- 高
- 被
- 钱
- 快
- 等
- 头
- 重
- 车
- 谢
- 日
- 东
- 放
- 无
- 工
- 咱
- 哪
- 五
- 者
- 像
- 西
- 该
- 干
- 相
- 信
- 机
- 百
- 特
- 业
- 活
- 师
- 边
- 爱
- 友
- 新
- 外
- 位
- 更
- 直
- 几
- 第
- 非
- 四
- 题
- 接
- 少
- 哥
- 死
- 完
- 刚
- 电
- 气
- 安
- 爸
- 白
- 告
- 美
- 解
- 叫
- 月
- 带
- 欢
- 谁
- 体
- 喜
- 部
- 场
- 姐
- 军
- 万
- 结
- 合
- 难
- 八
- 每
- 目
- 亲
- 朋
- 认
- 总
- 加
- 通
- 办
- 马
- 件
- 受
- 任
- 请
- 住
- 王
- 思
- 门
- 名
- 平
- 系
- 文
- 帮
- 路
- 变
- 记
- 水
- 九
- 算
- 将
- 口
- 男
- 度
- 报
- 六
- 张
- 管
- 够
- 性
- 表
- 提
- 何
- 讲
- 期
- 拿
- 保
- 嘛
- 司
- 原
- 始
- 此
- 诉
- 处
- 清
- 内
- 产
- 金
- 晚
- 早
- 交
- 离
- 眼
- 队
- 七
- 入
- 山
- 代
- 市
- 海
- 物
- 零
- 望
- 世
- 婚
- 命
- 越
- 收
- 向
- 花
- 房
- 错
- 节
- 父
- 反
- 战
- 买
- 量
- 或
- 员
- 号
- 千
- 怕
- 底
- 且
- 品
- 民
- 化
- 爷
- 并
- 与
- 服
- 需
- 资
- 求
- 教
- 娘
- 医
- 数
- 院
- 书
- 利
- 往
- 确
- 各
- 单
- 风
- 送
- 必
- 条
- 包
- 准
- 光
- 整
- 病
- 弟
- 嗯
- 计
- 照
- 强
- 务
- 影
- 城
- 夫
- 俩
- 决
- 声
- 连
- 乐
- 息
- 远
- 北
- 至
- 饭
- 留
- 宝
- 神
- 近
- 考
- 备
- 案
- 界
- 容
- 况
- 母
- 较
- 持
- 证
- 选
- 制
- 程
- 喝
- 害
- 字
- 失
- 立
- 台
- 玩
- 查
- 块
- 便
- 挺
- 段
- 周
- 由
- 句
- 紧
- 李
- 据
- 杀
- 南
- 商
- 识
- 网
- 式
- 愿
- 传
- 流
- 消
- 伤
- 根
- 演
- 希
- 故
- 坐
- 建
- 注
- 许
- 调
- 共
- 空
- 半
- 却
- 酒
- 联
- 微
- 言
- 肯
- 赶
- 跑
- 笑
- 区
- 岁
- 红
- 达
- 官
- 轻
- 易
- 火
- 线
- 拉
- 首
- 导
- 团
- 慢
- 指
- 写
- 深
- 论
- 片
- 改
- 啥
- 满
- 步
- 音
- 功
- 聊
- 客
- 未
- 格
- 基
- 睡
- 观
- 份
- 视
- 色
- 价
- 政
- 转
- 终
- 复
- 啦
- 呃
- 阿
- 倒
- 义
- 警
- 林
- 使
- 科
- 运
- 苦
- 待
- 费
- 随
- 救
- 试
- 班
- 敢
- 精
- 及
- 术
- 造
- 续
- 养
- 展
- 答
- 绝
- 众
- 站
- 妹
- 差
- 谈
- 卖
- 播
- 创
- 领
- 象
- 志
- 投
- 习
- 兄
- 元
- 皇
- 专
- 态
- 急
- 局
- 兴
- 楚
- 飞
- 护
- 装
- 热
- 奶
- 取
- 设
- 游
- 读
- 福
- 药
- 担
- 历
- 忙
- 规
- 掉
- 刘
- 切
- 断
- 尽
- 社
- 久
- 支
- 板
- 星
- 姑
- 曾
- 突
- 除
- 华
- 责
- 排
- 京
- 值
- 士
- 统
- 换
- 德
- 衣
- 组
- 示
- 脸
- 刻
- 黑
- 遇
- 虽
- 顾
- 戏
- 怪
- 懂
- 叔
- 夜
- 陈
- 亮
- 江
- 兵
- 负
- 布
- 青
- 落
- 推
- 假
- 类
- 令
- 技
- 英
- 质
- 黄
- 治
- 形
- 助
- 球
- 歌
- 参
- 广
- 继
- 简
- 画
- 奇
- 陪
- 阳
- 险
- 须
- 念
- 迎
- 幸
- 抓
- 破
- 另
- 争
- 竟
- 户
- 律
- 择
- 究
- 龙
- 足
- 店
- 脑
- 斯
- 党
- 权
- 约
- 疑
- 议
- 严
- 密
- 克
- 存
- 穿
- 承
- 校
- 击
- 际
- 标
- 云
- 营
- 察
- 超
- 食
- 集
- 级
- 礼
- 静
- 背
- 武
- 初
- 拍
- 梦
- 验
- 响
- 角
- 石
- 股
- 追
- 怀
- 婆
- 适
- 独
- 忘
- 血
- 醒
- 具
- 罪
- 享
- 毛
- 香
- 状
- 配
- 靠
- 语
- 仅
- 低
- 细
- 米
- 既
- 钟
- 极
- 停
- 味
- 则
- 油
- 器
- 楼
- 菜
- 研
- 互
- 压
- 贵
- 村
- 属
- 派
- 乎
- 坏
- 控
- 显
- 图
- 双
- 职
- 永
- 哈
- 鬼
- 依
- 料
- 按
- 府
- 坚
- 某
- 甚
- 居
- 练
- 顺
- 模
- 即
- 州
- 引
- 乱
- 速
- 庭
- 朝
- 室
- 似
- 付
- 划
- 尔
- 境
- 犯
- 烦
- 环
- 伙
- 巴
- 春
- 古
- 妇
- 势
- 款
- 增
- 财
- 河
- 守
- 虑
- 汉
- 枪
- 妻
- 爹
- 弄
- 委
- 企
- 冲
- 置
- 麻
- 育
- 项
- 防
- 胡
- 杨
- 致
- 辈
- 括
- 毕
- 卫
- 修
- 史
- 型
- 牌
- 嘴
- 苏
- 群
- 举
- 痛
- 座
- 概
- 搞
- 围
- 土
- 毒
- 唱
- 冷
- 累
- 玉
- 获
- 误
- 跳
- 脚
- 雨
- 剧
- 休
- 皮
- 止
- 济
- 肉
- 丽
- 借
- 铁
- 牛
- 哭
- 招
- 闹
- 银
- 优
- 温
- 狗
- 退
- 洗
- 拜
- 否
- 票
- 偷
- 抱
- 博
- 般
- 效
- 套
- 维
- 普
- 康
- 富
- 宫
- 索
- 罗
- 堂
- 智
- 省
- 介
- 孙
- 灵
- 评
- 藏
- 称
- 课
- 货
- 姨
- 艺
- 骗
- 雪
- 赛
- 景
- 昨
- 健
- 鱼
- 激
- 危
- 熟
- 圈
- 闻
- 监
- 替
- 君
- 恋
- 良
- 掌
- 草
- 松
- 供
- 努
- 例
- 短
- 帝
- 姓
- 率
- 族
- 亿
- 赵
- 蛋
- 判
- 预
- 频
- 卡
- 架
- 纪
- 弃
- 秀
- 兰
- 层
- 检
- 伴
- 抗
- 讨
- 源
- 夏
- 咋
- 惊
- 录
- 善
- 补
- 刀
- 充
- 升
- 章
- 午
- 若
- 私
- 吴
- 素
- 旅
- 临
- 挑
- 唐
- 露
- 树
- 斗
- 舞
- 左
- 叶
- 副
- 晓
- 厂
- 弹
- 印
- 秘
- 屋
- 田
- 木
- 困
- 园
- 封
- 逃
- 批
- 馆
- 疼
- 败
- 陆
- 敌
- 散
- 采
- 翻
- 缺
- 胜
- 免
- 销
- 鸡
- 降
- 波
- 测
- 限
- 释
- 忍
- 归
- 床
- 餐
- 茶
- 码
- 宁
- 乡
- 辛
- 彩
- 亚
- 浪
- 漂
- 庆
- 训
- 范
- 烧
- 词
- 吵
- 媳
- 探
- 余
- 恐
- 积
- 农
- 遍
- 舒
- 顶
- 构
- 呼
- 丝
- 执
- 雅
- 惯
- 右
- 脱
- 恩
- 野
- 折
- 趣
- 笔
- 谓
- 盘
- 贝
- 宣
- 绍
- 嘉
- 宋
- 抢
- 嫌
- 尊
- 碰
- 绪
- 丢
- 厉
- 沙
- 轮
- 施
- 织
- 托
- 县
- 策
- 杯
- 逼
- 傻
- 束
- 街
- 疗
- 益
- 骨
- 迷
- 姻
- 恶
- 默
- 寻
- 搜
- 哦
- 材
- 吸
- 劳
- 勇
- 占
- 暴
- 船
- 徐
- 虎
- 融
- 异
- 审
- 攻
- 雷
- 稳
- 呗
- 输
- 睛
- 臣
- 端
- 威
- 秋
- 欧
- 冰
- 韩
- 减
- <space>
- 操
- 混
- 汽
- 暗
- 隐
- 嫂
- 沉
- 烟
- 顿
- 凭
- 洋
- 嫁
- 购
- 粉
- 遗
- 杂
- 协
- 尝
- 键
- 亡
- 秦
- 纸
- 拥
- 革
- 猫
- 伯
- 祝
- 签
- 傅
- 牙
- 湖
- 莫
- 杰
- 旁
- 港
- 劲
- 宗
- 偏
- 触
- 唯
- 吓
- 辆
- 沈
- 列
- 梅
- 祖
- 舍
- 尤
- 赚
- 疫
- 腾
- 拼
- 奖
- 刺
- 齐
- 诚
- 媒
- 戴
- 账
- 炸
- 骂
- 避
- 麦
- 爆
- 域
- 烈
- 暖
- 季
- 猜
- 佳
- 净
- 腿
- 磨
- 曲
- 虚
- 阵
- 荣
- 访
- 核
- 鲜
- 阶
- 镇
- 灯
- 估
- 剩
- 硬
- 租
- 敬
- 损
- 惜
- 挂
- 董
- 巨
- 忆
- 登
- 丈
- 帅
- 童
- 耳
- 央
- 软
- 移
- 略
- 额
- 厅
- 挥
- 透
- 络
- 弱
- 珍
- 恨
- 巧
- 丁
- 谋
- 孤
- 豆
- 诗
- 冒
- 狼
- 渐
- 峰
- 售
- 凡
- 聚
- 洞
- 抽
- 劝
- 闭
- 摆
- 冬
- 凶
- 魔
- 灭
- 雄
- 挣
- 搬
- 龄
- 朱
- 编
- 航
- 席
- 驾
- 授
- 鼓
- 握
- 隔
- 猪
- 仙
- 颜
- 镜
- 胖
- 赢
- 仇
- 晨
- 欺
- 刑
- 谷
- 旦
- 亏
- 盖
- 症
- 喊
- 蓝
- 讯
- 殿
- 梁
- 躲
- 旧
- 针
- 箱
- 丰
- 洲
- 鞋
- 征
- 蒙
- 伟
- 袋
- 庄
- 患
- 怨
- 佛
- 稍
- 朵
- 纳
- 吉
- 川
- 典
- 迹
- 瑞
- 废
- 搭
- 涨
- 汤
- 启
- 桌
- 摸
- 赔
- 宜
- 纯
- 贴
- 聪
- 熊
- 延
- 瓶
- 版
- 缘
- 距
- 甜
- 析
- 盛
- 孕
- 彻
- 桥
- 尚
- 染
- 撞
- 途
- 沟
- 疯
- 敏
- 瞧
- 漫
- 胆
- 诺
- 刷
- 饿
- 仍
- 喂
- 辞
- 迟
- 淡
- 郑
- 歉
- 扰
- 宾
- 圆
- 赞
- 肚
- 慧
- 泪
- 吹
- 拖
- 遭
- 穷
- 罚
- 悔
- 绿
- 忽
- 唉
- 毫
- 绩
- 暂
- 射
- 岛
- 拾
- 珠
- 欠
- 忠
- 陷
- 阴
- 尼
- 悲
- 糊
- 撤
- 徒
- 剑
- 币
- 娜
- 违
- 泡
- 仗
- 粮
- 培
- 趟
- 菲
- 拒
- 棒
- 脾
- 赏
- 窗
- 宇
- 闲
- 附
- 踏
- 彼
- 涉
- 锁
- 撒
- 魂
- 羊
- 述
- 屈
- 库
- 滚
- 凉
- 颗
- 寒
- 呐
- 墙
- 娃
- 序
- 迪
- 丹
- 扬
- 瞎
- 递
- 凤
- 碗
- 屁
- 锅
- 奔
- 幅
- 债
- 糖
- 奋
- 汇
- 圣
- 订
- 偶
- 残
- 宽
- 狂
- 鼠
- 狠
- 幕
- 固
- 竞
- 蜜
- 吐
- 摄
- 骑
- 篇
- 毁
- 尾
- 摇
- 奥
- 厚
- 妖
- 禁
- 逐
- 均
- 尸
- 冠
- 阅
- 辑
- 捕
- 载
- 郭
- 俺
- 诊
- 欲
- 扎
- 鸟
- 柔
- 迫
- 豪
- 踪
- 扔
- 碎
- 末
- 娶
- 扫
- 朕
- 励
- 乔
- 闺
- 档
- 厨
- 倍
- 湾
- 郎
- 幼
- 纷
- 奴
- 阻
- 饮
- 怒
- 妙
- 琴
- 曹
- 脏
- 牵
- 瓜
- 滴
- 炮
- 缓
- 含
- 献
- 柜
- 仔
- 艾
- 潜
- 赌
- 震
- 础
- 添
- 兔
- 焦
- 躺
- 森
- 肥
- 洪
- 孝
- 偿
- 悉
- 撑
- 甘
- 桃
- 苹
- 魏
- 鲁
- 池
- 狱
- 厌
- 纠
- 朗
- 贷
- 铺
- 殊
- 坦
- 爬
- 擦
- 酸
- 钢
- 咖
- 瞒
- 蛮
- 谅
- 耐
- 申
- 夸
- 欣
- 诶
- 驶
- 屏
- 烂
- 凌
- 甲
- 胎
- 仪
- 貌
- 番
- 涂
- 抬
- 舅
- 扯
- 鹿
- 摩
- 诸
- 秒
- 泽
- 埋
- 蒋
- 隆
- 赖
- 奸
- 咬
- 恢
- 宿
- 乖
- 邀
- 抵
- 臭
- 闪
- 莉
- 熬
- 链
- 盯
- 侦
- 灾
- 堆
- 灰
- 卷
- 盾
- 障
- 截
- 恰
- 佩
- 戒
- 莲
- 裁
- 芬
- 戚
- 匪
- 滑
- 趁
- 询
- 绑
- 辣
- 挖
- 俗
- 祸
- 符
- 扣
- 插
- 仁
- 壁
- 腰
- 斤
- 燕
- 筑
- 柱
- 夺
- 援
- 映
- 壮
- 杜
- 摔
- 润
- 恭
- 乌
- 慰
- 啡
- 著
- 井
- 跌
- 牢
- 荐
- 拔
- 惹
- 侯
- 玲
- 炎
- 胸
- 旗
- 牲
- 喽
- 涛
- 衡
- 矛
- 伍
- 贤
- 惨
- 糟
- 慌
- 伏
- 醉
- 仓
- 拆
- 乘
- 疾
- 鼻
- 潮
- 予
- 奉
- 伦
- 劫
- 伊
- 怜
- 孟
- 肺
- 忧
- 倾
- 矩
- 荒
- 奏
- 塔
- 塞
- 迅
- 轨
- 瞬
- 丫
- 狐
- 叛
- 繁
- 眠
- 孔
- 谱
- 悄
- 泰
- 姜
- 侵
- 妃
- 冯
- 柳
- 洛
- 岸
- 凯
- 陛
- 幺
- 仿
- 氏
- 窝
- 曼
- 挡
- 浩
- 盟
- 轩
- 牺
- 贫
- 绕
- 谎
- 措
- 扶
- 梯
- 炼
- 勤
- 霸
- 横
- 罢
- 呆
- 税
- 桂
- 哎
- 慕
- 植
- 允
- 荡
- 洁
- 肖
- 耗
- 贼
- 艰
- 贺
- 幻
- 饱
- 胃
- 袭
- 廷
- 泥
- 丧
- 缩
- 砸
- 姥
- 拦
- 扮
- 糕
- 肤
- 猴
- 脆
- 炒
- 耀
- 盗
- 邓
- 扩
- 纵
- 振
- 敲
- 鹏
- 姆
- 湿
- 丑
- 召
- 苗
- 伸
- 惑
- 碍
- 萨
- 瘦
- 闯
- 迁
- 坑
- 弯
- 卑
- 尖
- 遥
- 侠
- 犹
- 押
- 冤
- 钻
- 汗
- 闷
- 邻
- 淘
- 抛
- 妆
- 贾
- 侧
- 傲
- 描
- 耍
- 猛
- 薇
- 裤
- 憾
- 督
- 贸
- 墨
- 勒
- 薄
- 嘞
- 渡
- 紫
- 悟
- 锦
- 溜
- 逆
- 惠
- 辉
- 贪
- 圾
- 垃
- 券
- 燃
- 虫
- 悠
- 伪
- 尿
- 懒
- 俊
- 寄
- 歇
- 盒
- 潘
- 储
- 愈
- 脉
- 粗
- 返
- 昌
- 泉
- 蔡
- 愧
- 赤
- 岳
- 婷
- 猎
- 饼
- 肩
- 勾
- 巡
- 竹
- 催
- 陌
- 踩
- 促
- 扭
- 堵
- 酷
- 芳
- 逛
- 陵
- 耽
- 凑
- 寿
- 缝
- 剪
- 郁
- 宅
- 抚
- 筹
- 沿
- 烤
- 奈
- 挨
- 晋
- 崩
- 浮
- 阁
- 彭
- 裂
- 崇
- 眉
- 桑
- 辩
- 漏
- 稀
- 液
- 汪
- 袁
- 掩
- 浑
- 坡
- 晕
- 缠
- 仰
- 挤
- 睁
- 羽
- 岗
- 捡
- 墓
- 综
- 矿
- 妥
- 厕
- 辱
- 惧
- 逗
- 帽
- 寸
- 搁
- 跨
- 渴
- 饰
- 璃
- 琳
- 爽
- 愤
- 饶
- 卧
- 誓
- 滋
- 鉴
- 腐
- 鸭
- 蛇
- 妮
- 莱
- 哟
- 钥
- 甄
- 肠
- 畅
- 慎
- 悬
- 逻
- 胁
- 辰
- 呈
- 棋
- 寨
- 萌
- 覆
- 姚
- 津
- 笨
- 轰
- 乏
- 匙
- 摊
- 陶
- 恼
- 昏
- 抑
- 姿
- 愁
- 誉
- 椅
- 羞
- 澡
- 踢
- 晶
- 萧
- 箭
- 罩
- 宠
- 羡
- 亦
- 祥
- 串
- 昆
- 煮
- 疏
- 纹
- 泄
- 痕
- 喷
- 册
- 跃
- 卢
- 岩
- 跪
- 兽
- 桶
- 飘
- 漠
- 堪
- 哄
- 寂
- 崔
- 腹
- 癌
- 拳
- 驻
- 霍
- 拨
- 诞
- 捐
- 御
- 榜
- 唤
- 荷
- 径
- 署
- 锋
- 玛
- 匆
- 恒
- 吕
- 邮
- 圳
- 黎
- 掏
- 莎
- 寞
- 佐
- 诈
- 牧
- 盐
- 叹
- 尬
- 匹
- 狸
- 膀
- 谨
- 尘
- 驱
- 乳
- 晒
- 宴
- 辜
- 哲
- 铜
- 薪
- 盆
- 割
- 忌
- 旋
- 翼
- 哀
- 咨
- 遵
- 夹
- 侣
- 译
- 胞
- 浅
- 邦
- 俄
- 弗
- 豫
- 甭
- 乃
- 扛
- 杭
- 瓦
- 槽
- 污
- 尴
- 琢
- 枝
- 详
- 柴
- 佑
- 盼
- 抖
- 惩
- 捷
- 葬
- 贡
- 艳
- 塑
- 茫
- 叨
- 浓
- 拐
- 捉
- 憋
- 稿
- 苍
- 葛
- 扑
- 娱
- 赋
- 杆
- 绘
- 聆
- 肌
- 婴
- 摘
- 岂
- 呵
- 冻
- 泳
- 揭
- 坤
- 盈
- 毅
- 撕
- 娇
- 唠
- 宏
- 吊
- 籍
- 楠
- 肃
- 抹
- 玄
- 湘
- 迈
- 酱
- 骄
- 咐
- 扇
- 幽
- 疲
- 邪
- 吞
- 趋
- 尺
- 玻
- 溃
- 诱
- 翠
- 兼
- 辅
- 岭
- 栏
- 柏
- 址
- 寺
- 逢
- 琪
- 慈
- 愣
- 契
- 渠
- 齿
- 薛
- 拟
- 填
- 坛
- 抄
- 痴
- 绳
- 役
- 擅
- 晃
- 斌
- 愉
- 届
- 悦
- 旨
- 砍
- 弥
- 挽
- 肝
- 鸣
- 庙
- 烫
- 聘
- 皆
- 婶
- 舌
- 枉
- 赫
- 蓉
- 瞅
- 阔
- 俱
- 循
- 鸿
- 彪
- 伺
- 堡
- 谦
- 剂
- 洒
- 赴
- 妨
- 磊
- 嘱
- 蝶
- 兆
- 豹
- 绣
- 篮
- 锻
- 陕
- 霉
- 涵
- 疆
- 丸
- 蠢
- 铃
- 浙
- 庞
- 萝
- 泛
- 芝
- 煤
- 甩
- 氛
- 页
- 逸
- 袖
- 携
- 躁
- 夕
- 匠
- 蹈
- 坊
- 雾
- 蹲
- 颠
- 脂
- 塌
- 棵
- 鹰
- 澳
- 哇
- 筋
- 纽
- 脖
- 棉
- 渣
- 寡
- 践
- 侄
- 披
- 魅
- 虹
- 肿
- 胶
- 霞
- 罐
- 晴
- 拓
- 卿
- 耻
- 砖
- 宪
- 歪
- 兜
- 衰
- 捧
- 歹
- 雕
- 穆
- 栋
- 瑶
- 毙
- 衷
- 膜
- 囊
- 莹
- 垫
- 吻
- 嘟
- 舰
- 虾
- 壳
- 穴
- 勉
- 裙
- 旺
- 柯
- 磕
- 贩
- 腻
- 蹦
- 卜
- 茹
- 驴
- 臂
- 删
- 菌
- 妾
- 蜂
- 祭
- 菊
- 咸
- 淑
- 笼
- 涯
- 碧
- 宙
- 骚
- 皓
- 赐
- 晰
- 腔
- 龟
- 泼
- 鹅
- 啪
- 巾
- 炉
- 沾
- 醋
- 澜
- 朴
- 棍
- 伞
- 雀
- 赠
- 妞
- 淋
- 刮
- 汁
- 椒
- 埃
- 嚷
- 盲
- 窃
- 辽
- 贱
- 滩
- 昭
- 贯
- 珊
- 涌
- 辨
- 捞
- 仲
- 拘
- 碑
- 侍
- 剿
- 搅
- 狮
- 藤
- 旭
- 翅
- 滨
- 禀
- 遮
- 瑟
- 斩
- 攒
- 犬
- 挫
- 僧
- 吩
- 渊
- 蒂
- 萍
- 庸
- 蓄
- 鼎
- 咪
- 姬
- 溪
- 郡
- 镖
- 怡
- 杉
- 畏
- 瓷
- 枚
- 煎
- 劣
- 饺
- 妄
- 卓
- 蔽
- 蒸
- 垂
- 嘲
- 慨
- 谊
- 蹭
- 逮
- 锐
- 钉
- 舟
- 沃
- 凝
- 翔
- 颈
- 靖
- 灌
- 膊
- 崖
- 娟
- 胳
- 铭
- 灿
- 亭
- 粒
- 卸
- 咕
- 坎
- 攀
- 婿
- 奢
- 茂
- 趴
- 耿
- 捏
- 怖
- 浴
- 婉
- 煌
- 霖
- 揍
- 昂
- 驰
- 壶
- 械
- 卦
- 粥
- 尹
- 瘾
- 雇
- 翰
- 肆
- 寇
- 曦
- 厢
- 杠
- 屠
- 芒
- 谣
- 沫
- 掘
- 酬
- 讼
- 乾
- 玫
- 瑰
- 逊
- 惦
- 儒
- 肾
- 粹
- 愚
- 渔
- 暑
- 伐
- 潇
- 喘
- 敦
- 翁
- 斥
- 帖
- 纱
- 梳
- 缴
- 茅
- 谭
- 氧
- 遣
- 履
- 刹
- 枕
- 婢
- 徽
- 轿
- 寓
- 咽
- 叉
- 嗓
- 捣
- 裹
- 览
- 拯
- 疚
- 蜀
- 丛
- 框
- 斑
- 宵
- 郝
- 蛙
- 熙
- 祁
- 哑
- 葱
- 唇
- 韦
- 媛
- 魄
- 锤
- 绵
- 炫
- 吨
- 稻
- 碌
- 刊
- 漆
- 搏
- 讶
- 痒
- 枫
- 妒
- 冥
- 郊
- 爵
- 逝
- 栽
- 叠
- 蚁
- 裕
- 帕
- 剥
- 谐
- 巫
- 颇
- 娥
- 廊
- 蕾
- 丘
- 丞
- 葡
- 坠
- 鸦
- 糗
- 虐
- 唬
- 屎
- 顽
- 巷
- 硅
- 罕
- 殖
- 嘿
- 韵
- 歧
- 垮
- 淮
- 馈
- 昊
- 宰
- 钦
- 霜
- 兑
- 萄
- 塘
- 胀
- 樱
- 枯
- 咳
- 窑
- 募
- 缸
- 昧
- 仑
- 恕
- 氓
- 叮
- 吼
- 坟
- 轴
- 贞
- 赎
- 帆
- 嫩
- 蚂
- 僵
- 颖
- 噜
- 咒
- 琐
- 勃
- 芯
- 绸
- 哼
- 仨
- 挪
- 狡
- 禅
- 粘
- 雯
- 扒
- 恳
- 蔬
- 匈
- 钓
- 桐
- 菇
- 哒
- 稚
- 膏
- 纲
- 狄
- 硕
- 廉
- 衙
- 艘
- 廖
- 腊
- 蟹
- 邱
- 缉
- 曝
- 桩
- 啤
- 嫉
- 棚
- 矮
- 汰
- 衍
- 拽
- 削
- 彤
- 斜
- 揉
- 樊
- 馨
- 钩
- 浦
- 肢
- 敷
- 喻
- 鞭
- 瞪
- 耕
- 掐
- 屡
- 榴
- 勋
- 泊
- 竭
- 鹤
- 溢
- 淳
- 倩
- 驳
- 抠
- 捅
- 筒
- 窄
- 鄙
- 嗦
- 袍
- 劈
- 炖
- 裸
- 贬
- 敞
- 嘎
- 淹
- 耶
- 秩
- 舱
- 厦
- 叙
- 孽
- 筷
- 浇
- 饥
- 噩
- 蚊
- 兮
- 皱
- 侃
- 辟
- 弊
- 袜
- 吾
- 俘
- 芸
- 夷
- 芦
- 囚
- 倡
- 琦
- 哨
- 巢
- 烛
- 帐
- 燥
- 讽
- 俞
- 馅
- 柿
- 墅
- 妍
- 瘤
- 沦
- 衬
- 瑜
- 蒜
- 蛛
- 窟
- 勿
- 沛
- 磁
- 狭
- 栈
- 懵
- 酿
- 戈
- 邵
- 龚
- 衫
- 勺
- 哗
- 叽
- 畜
- 爪
- 惫
- 颁
- 浸
- 摧
- 勘
- 惕
- 蔓
- 馒
- 挠
- 陀
- 豁
- 帘
- 淀
- 藩
- 蜡
- 凳
- 蘑
- 琼
- 棺
- 蝴
- 骆
- 掰
- 枣
- 遂
- 飙
- 咧
- 掀
- 梨
- 杏
- 嗑
- 棠
- 绽
- 捆
- 舆
- 肇
- 葩
- 呦
- 膝
- 鹊
- 揣
- 瓣
- 靓
- 卵
- 鲍
- 炭
- 戳
- 颤
- 禄
- 菩
- 崛
- 驸
- 佣
- 眨
- 聂
- 乙
- 嘻
- 拧
- 喵
- 佟
- 靳
- 阎
- 拢
- 厘
- 凰
- 疤
- 螺
- 淇
- 涩
- 拎
- 嗨
- 魁
- 薯
- 歼
- 沪
- 筛
- 谍
- 揪
- 刁
- 秃
- 谜
- 撇
- 肪
- 绊
- 逞
- 滥
- 寝
- 麟
- 奕
- 侮
- 喉
- 柄
- 荆
- 撼
- 窦
- 姗
- 乞
- 艇
- 竖
- 剖
- 嗽
- 捂
- 腕
- 鸽
- 刃
- 弓
- 辙
- 粤
- 泣
- 梗
- 茄
- 茜
- 驼
- 冈
- 倔
- 啃
- 蹄
- 唧
- 祈
- 腺
- 焰
- 睿
- 崽
- A
- 苛
- 窍
- 凿
- 倭
- 骤
- 槛
- 碳
- 诏
- 芽
- 浆
- 隶
- 搂
- 睦
- 彬
- 岔
- 诀
- 嚼
- 掺
- 殷
- 吁
- 啰
- 侈
- 亩
- 纤
- 倦
- 揽
- 媚
- 潭
- 莽
- 赃
- 睹
- 脊
- 逍
- 淼
- 沸
- 峡
- 仆
- 眷
- 屯
- 璐
- 雁
- 澄
- 渗
- 咔
- 啸
- 怂
- 娄
- 惶
- 恍
- 锡
- 秉
- 猾
- 挟
- 舔
- 弦
- 阱
- 俭
- 嚣
- 搓
- 懈
- 诡
- 隙
- 苟
- 倘
- 瘫
- 扁
- 鑫
- 撩
- 蓬
- 铲
- 峥
- 巅
- 葫
- 膳
- 狙
- 晏
- 祠
- 峻
- 尉
- 毯
- 沧
- 熏
- 咯
- 株
- 沐
- 奎
- 锣
- 霄
- 彦
- 叭
- 臻
- 昔
- 灶
- 傍
- 腥
- 屑
- 禾
- 彰
- 冉
- 矫
- 滞
- 瘩
- 匀
- 椎
- 槐
- 岚
- 跷
- 剔
- 倪
- 盏
- 泌
- 灸
- 隧
- 函
- 壤
- 剃
- 蹊
- 葵
- 拌
- 琅
- 炳
- 跋
- 瑾
- 哩
- 蔷
- 鳌
- 莺
- 诵
- 疙
- 吱
- 蓓
- 绎
- 匿
- 铮
- 怼
- 踹
- 嗅
- 焚
- 躯
- 蝇
- 橘
- 祟
- 辖
- 砂
- 韧
- 粪
- 诬
- 擒
- 黏
- 衔
- 溺
- 蜘
- 篷
- 贿
- 闫
- 焕
- 邢
- 兹
- 窖
- 旬
- 铸
- 咚
- 惭
- 佬
- 裴
- 裳
- 犀
- 弘
- 莓
- 钏
- 鄂
- 陋
- 伽
- 鞠
- 氪
- 垒
- 窜
- 橙
- 讳
- 甥
- 淫
- 拱
- 袱
- 坨
- 暧
- 渺
- 蕉
- 晗
- 茬
- 盔
- 妓
- 蚕
- 僻
- 朽
- 呛
- 挚
- 擎
- 绅
- 喇
- 鳄
- 巩
- 蜗
- 遛
- 俯
- 汹
- 猩
- 奠
- 钙
- 悍
- 躬
- 菱
- 翘
- 琉
- 虏
- 凄
- 稼
- 炕
- 皂
- 漱
- 斋
- 撂
- 敛
- 阮
- 芭
- 阀
- 缚
- 懦
- 亨
- 螃
- 侥
- 膨
- 筝
- 惟
- 黛
- 眯
- 茨
- 怠
- 辐
- 捎
- 殴
- 桓
- 瞄
- 冀
- 雍
- 霾
- 酵
- 檬
- 哺
- 裔
- 兢
- 麒
- 烹
- 绒
- 丐
- 娅
- 钞
- 垄
- 笛
- 赣
- 蕊
- 暮
- 噪
- 沮
- 肋
- 庇
- 橡
- 摁
- 痘
- 棘
- 拂
- 绷
- 刨
- 晾
- 蹬
- 鸥
- 璇
- 掠
- 瘟
- 俐
- 糙
- 骏
- 牡
- 撵
- 嘘
- 沥
- 庶
- 赁
- 喧
- 涡
- 瞳
- 迭
- 肘
- 颂
- 珑
- 觅
- 埔
- G
- 跤
- 朔
- 詹
- 梭
- 暇
- 惺
- 甸
- 怯
- 聋
- 赦
- 屉
- 闸
- 坝
- 吟
- 凸
- 拴
- 堤
- 矣
- 斧
- 呸
- 啼
- 韬
- 钧
- 坞
- 纺
- 氢
- 嵩
- 镯
- 髓
- 檐
- 涕
- 剁
- 稽
- 烨
- 钮
- 闽
- 仕
- 驯
- 吭
- 漓
- 眸
- 鞅
- 枢
- 煞
- 昕
- 畔
- 疹
- 矶
- 呱
- 熄
- 吏
- 泻
- 拙
- 蛤
- 禽
- 甫
- 厮
- 乍
- 蝉
- 撬
- 嘀
- 衅
- 鲨
- 萱
- 霹
- 旷
- 辫
- 坷
- 眶
- 蟆
- 呜
- 猬
- 嬷
- 萎
- 靶
- 雳
- 煲
- 溯
- 蚀
- 狈
- 滤
- 恙
- 瑛
- 栓
- 嫣
- 碟
- 祷
- 驿
- 犊
- 灼
- 哆
- 宛
- 榨
- 寥
- 翟
- 栗
- 滔
- 馋
- 杖
- 茉
- 饲
- 庐
- 隋
- 旱
- 崎
- 颅
- 焉
- 墩
- 篱
- 晟
- 扳
- 咎
- 竿
- 僚
- 溶
- 俏
- 霆
- 堕
- 冕
- 叩
- 绰
- 洽
- 襄
- 蛊
- 缅
- 侨
- 伶
- 蕴
- 酥
- 坂
- 拇
- 庚
- 卒
- 诛
- 禧
- 瓢
- 锯
- 扉
- 饷
- 诅
- 烘
- 浏
- 痰
- 榆
- 窥
- 鲸
- 捋
- 戎
- 笋
- 璋
- 诫
- 珈
- 癫
- 囤
- 厥
- 癖
- 翩
- 芹
- 匣
- 噬
- 栖
- 蝎
- 锄
- 玺
- 疮
- 缕
- 猥
- 槿
- 蔑
- 汝
- 珂
- 撮
- 坪
- 蒲
- 倚
- 嗷
- 撰
- 荧
- 芙
- 豚
- 筱
- 敖
- 孵
- 猝
- D
- 弈
- 徊
- 辗
- 赘
- 徘
- 烙
- 娲
- 嚎
- 迢
- 绥
- 羁
- 屌
- 铅
- 澎
- S
- 嬛
- 晦
- 煽
- 逾
- 饵
- 虞
- 筐
- 哧
- 抒
- 醇
- 祀
- 瑕
- 岐
- 潼
- 惚
- C
- 苑
- 靡
- 菠
- 赡
- 惰
- 梓
- 铛
- 澈
- 莞
- 呕
- 驭
- 邝
- 砰
- 轼
- 窒
- 慷
- 绞
- 絮
- 虔
- 惮
- 柬
- 嗡
- 拣
- 羲
- 蹋
- 隘
- 帜
- 卤
- 雌
- 唾
- 邹
- 俑
- 碾
- 婪
- 咏
- 粟
- 崭
- 钝
- 彝
- 陡
- 谛
- 秤
- 磅
- 淌
- 炊
- 鲤
- 羹
- 殉
- 曰
- 萤
- 阐
- 鬟
- 拭
- T
- 沁
- 滇
- 梧
- 烁
- 瞻
- 淤
- 凹
- 撸
- 棕
- 腌
- 缪
- 祺
- 痊
- 忑
- 柠
- 矜
- 忐
- 讹
- 瀚
- 尧
- 昼
- 芊
- 憨
- 鳞
- 匮
- 鸳
- 鸯
- 湃
- 屿
- 馍
- 沽
- 栾
- 蝠
- 窘
- 绛
- 巍
- 悯
- 焊
- 谴
- 浊
- 娴
- 畴
- 湛
- 螂
- 韭
- 哮
- 拷
- 攥
- 凛
- 颓
- 恺
- 蝙
- 襟
- 粑
- 洼
- 笃
- 渝
- 骁
- 殃
- 酌
- 乒
- 臊
- 疵
- 诧
- 谬
- 锈
- 袄
- 膛
- 瘸
- 嫖
- 梢
- 沼
- 棱
- 嚓
- 耸
- 喳
- 舵
- 橱
- 涮
- 檀
- 瞩
- 腑
- 岑
- 痪
- 墟
- 蔚
- 捍
- 徙
- 棣
- 猖
- 掷
- 恬
- 嫦
- 噔
- 饪
- 掂
- 恤
- 叱
- 芷
- 弩
- 楷
- 镶
- 茧
- 诠
- 咙
- 匡
- 擂
- 亵
- 杞
- 乓
- 渤
- 藉
- 憔
- 渭
- 禹
- 睐
- 趾
- 抉
- 悴
- 忒
- 茸
- 纬
- 懊
- 浚
- 溅
- 遏
- 琛
- 靴
- 戮
- 翎
- 谕
- 濒
- 锵
- 嬉
- 籽
- 殆
- 叼
- 苔
- 灏
- 嗖
- 俪
- 亢
- 冶
- 嗜
- 磋
- 汀
- 讪
- 萃
- 菁
- 镑
- 紊
- 脯
- 缆
- 哉
- 赂
- 婊
- B
- 蕃
- 迄
- 蜓
- 舜
- 嚏
- 昱
- 黔
- 犟
- 汐
- 昵
- 嗣
- 唆
- 蛾
- 黯
- 绯
- 瀑
- 憬
- 狩
- 掖
- 崴
- 褪
- 髦
- 酝
- 弧
- 咄
- 吝
- 馄
- 娩
- 窿
- 蜻
- 袒
- 玮
- 阙
- 篡
- 邯
- 朦
- 邑
- 喃
- 粽
- 捶
- 嫔
- 钗
- 穗
- 骼
- 胭
- 寐
- 噎
- M
- 碱
- 荤
- 笙
- 矢
- 芥
- 廓
- 扼
- 厄
- 毋
- 糯
- 惋
- 纶
- 碜
- 胧
- 懿
- 偃
- 沏
- 痹
- 慑
- 鹦
- 娠
- 铐
- 绢
- 傀
- 孜
- 饨
- 儡
- 孰
- 焱
- 峭
- 伎
- 幌
- 椰
- 譬
- 藕
- 坍
- 铝
- 鞍
- 蘸
- 貂
- 猿
- 炙
- 琊
- 峙
- 硝
- 幂
- 钰
- 眩
- 亥
- 簇
- 鹉
- 睫
- 斟
- 簧
- 颐
- 薰
- 癞
- 祛
- 燎
- 缎
- 簸
- 咣
- 绚
- 簿
- 邋
- 嵌
- 肮
- 稷
- 辍
- 闵
- 枸
- 撅
- 曙
- 苇
- K
- 悼
- 汶
- 匕
- 皖
- 腮
- 琶
- 汲
- 鼹
- 礁
- 颊
- 怔
- 汕
- 喀
- 砌
- 釜
- 畸
- 鹃
- 峨
- 奄
- 骡
- 斐
- 芈
- 莘
- 蟑
- 荔
- 缇
- 犒
- 宓
- 汾
- 沌
- 宦
- 憧
- 咤
- 吆
- 攘
- 漩
- 梵
- 阂
- 吒
- 芜
- 缔
- 秧
- 翊
- 晌
- 剐
- 蜕
- 芋
- 彷
- 牟
- 诲
- 臀
- 徨
- Q
- 杵
- 荫
- 榄
- 蹿
- 豌
- 迂
- 琵
- 拗
- 帷
- 楞
- 嘶
- 橄
- 胺
- 圭
- 砚
- 藻
- 凋
- 啄
- 褒
- 嗝
- 殡
- 嫡
- 恃
- 濡
- 缜
- 孺
- 泸
- 妊
- 衩
- 驹
- 榻
- 腆
- 鹂
- 箍
- 璧
- 熔
- 悚
- 遢
- 弛
- 诋
- 羚
- 鹭
- 嘚
- 骸
- 瘪
- 铠
- 瞿
- 屹
- 邸
- 痨
- 辘
- 浒
- 忏
- 钊
- 潦
- 怅
- 肴
- 蚯
- 胚
- 茵
- 蚓
- 戬
- 瘀
- 翡
- 恪
- 卉
- 蝌
- 雏
- 祯
- 谏
- 蚪
- 钵
- 馊
- 嗒
- 犁
- 寅
- V
- 锥
- 娼
- 晖
- 啬
- 纣
- 淆
- 丙
- 夯
- 竣
- 褚
- 褥
- 轧
- 氨
- 褂
- 钳
- 轲
- 竺
- 疡
- 淞
- 胤
- 摹
- 鳅
- 珀
- 偕
- 匾
- 觑
- 扈
- 傣
- 绫
- 枷
- 阑
- 柚
- 烊
- 怦
- 腼
- 珺
- 缀
- 裘
- 碉
- 峪
- 俸
- 羯
- 姊
- 疟
- 砺
- 盎
- 嘣
- 釉
- 溥
- 熠
- 垢
- 摞
- 哽
- 槟
- 囧
- 胰
- 遁
- 痞
- 熹
- 忡
- 稠
- 顷
- 瑚
- 卯
- 渎
- 炅
- 褶
- 烽
- 瞑
- 嘈
- 硫
- 壹
- 悖
- 酪
- 跺
- 阜
- 帛
- 漪
- 蝗
- 迦
- 蟒
- 咀
- 谤
- 睬
- 辕
- 绮
- 搀
- 裆
- 鳖
- 囡
- 羔
- 痣
- 滕
- 佘
- 樟
- 韶
- 霓
- 劾
- 赈
- 唏
- 闰
- 脐
- 沓
- 瓮
- 篓
- 笠
- 暄
- 涅
- 诽
- 洱
- 栅
- 蚱
- 囔
- 攸
- 酣
- 阪
- 榕
- 骇
- 婧
- 陨
- 憎
- 沂
- 磷
- 壕
- 醺
- 惬
- 璀
- 璨
- 喋
- P
- 炽
- 瘁
- 羿
- 褐
- 簪
- 冽
- 驮
- 芮
- 辄
- 咆
- 渍
- 觐
- 炷
- 蛰
- 驷
- 帚
- 蜷
- O
- X
- 邂
- 逅
- 缭
- 秽
- 琰
- 龌
- 龊
- 俨
- 涟
- 噼
- 掇
- 哔
- 炬
- 佯
- 粱
- 霁
- 鱿
- 夭
- 擀
- 陇
- 瞥
- 壑
- 盹
- 馁
- 蚌
- 焖
- 蛟
- 囱
- 蚝
- 抿
- 脓
- 蒿
- 飓
- 渲
- 宸
- 酗
- 荻
- 缥
- 弑
- 偎
- 宕
- 耘
- 瞌
- 瘴
- 溉
- 涝
- 咿
- 垛
- 垦
- 缈
- 苞
- 惆
- 汛
- 鹑
- 町
- 抡
- 慵
- 浣
- 耙
- 砥
- 噱
- 孬
- 札
- 弼
- 酋
- 镳
- 萦
- 泾
- 挞
- 钾
- 讷
- 圃
- 舶
- 穹
- 戾
- 汴
- 锂
- 昀
- 镀
- 眺
- 捺
- 猕
- 阚
- 骋
- 悸
- 蜚
- 咩
- 讥
- 篆
- 鸠
- 哐
- 锚
- 幢
- 翱
- 螳
- 徇
- 踞
- 蔗
- 蔼
- 漉
- 衲
- N
- 漳
- 枭
- 漾
- 歆
- 烬
- 曳
- 岌
- 孚
- 戛
- 呲
- 箫
- 娓
- 桨
- 涓
- 獭
- 芃
- 摒
- 戍
- 踝
- 轱
- 沱
- 锢
- 堰
- 抨
- 昙
- 鹌
- 蔻
- 迸
- 泯
- 龈
- 痔
- 骛
- 淄
- 泵
- 烯
- 蔫
- F
- 胥
- 忱
- 纫
- 搪
- 茎
- 暨
- 泞
- 踵
- 璞
- 佗
- 荃
- 鬓
- 蚣
- 罔
- 臆
- 贻
- 橇
- 麓
- 槌
- 琥
- I
- 纥
- 薅
- 樵
- 苓
- 熨
- 钨
- 骞
- 诣
- 涤
- 踊
- 醛
- 碴
- 蹴
- 缤
- 赊
- 岖
- 戊
- 禺
- 坯
- 戟
- 楂
- 隅
- 酶
- 邃
- 蛀
- 皎
- 炯
- 垣
- 锹
- 镰
- 夙
- 甬
- 叵
- 茁
- 珞
- 妲
- 涸
- 兀
- 嘤
- 谙
- 噗
- 榔
- 稣
- 剽
- 奚
- 啕
- 袅
- 讧
- 钠
- 怄
- 晤
- 肛
- 氰
- 迥
- 唰
- 诩
- 籁
- 砒
- 谩
- 诟
- 斓
- 泷
- 幡
- 爻
- 痫
- 眈
- 漕
- 惘
- 挎
- 噶
- 喱
- 氯
- U
- 跆
- 嗤
- 锏
- 睽
- 缮
- 蟋
- 蠕
- 扪
- 狞
- 飒
- 吮
- 弋
- 奘
- 蟠
- 梆
- 拈
- 帧
- 蟀
- 胯
- 掳
- 蝈
- 帼
- 瞰
- 嵇
- 阉
- 篝
- 笆
- 亘
- L
- 喔
- 愕
- 谚
- 轶
- 岱
- 丕
- 婕
- 羌
- 毡
- 呻
- 鼾
- 蜥
- 偌
- 庵
- 敝
- 蛐
- 麝
- 鞘
- 拮
- 涣
- 葆
- 雹
- 踌
- 蜈
- 馥
- 跻
- 狰
- 桀
- 毗
- 皿
- 缨
- 磐
- 啾
- 牒
- 缰
- 躇
- 踮
- 糠
- 嗲
- 刽
- 咫
- 殇
- 瀛
- 胱
- 炀
- 虱
- 砾
- 獒
- 涎
- 袤
- 鄱
- 瓯
- 锭
- 塾
- 蹉
- 珏
- 豺
- 锌
- 蜿
- 牦
- 瓒
- 莆
- 蜴
- 氮
- 跎
- 咛
- 骜
- 郸
- 搐
- 堑
- 涞
- 寰
- 跛
- 鸵
- 毂
- 妩
- 铤
- 薏
- 烩
- 遐
- 煦
- 仃
- 髅
- 酮
- 榷
- 腋
- 珩
- 臃
- 愫
- 蜒
- 荼
- 侬
- 淬
- 婵
- 偻
- 焯
- 骊
- 恻
- 濮
- 泱
- 庖
- 惴
- 鲫
- 硌
- 肓
- 芪
- 礴
- 磺
- 腱
- 冢
- 谪
- 骷
- 哏
- 腩
- 蓦
- 焙
- 桢
- 阖
- 睾
- 疱
- 郴
- 铿
- 铡
- 祉
- 跄
- 桦
- 椭
- 拄
- 皙
- 膈
- 裱
- 髋
- 伢
- 罹
- 鳍
- 赝
- 嬴
- 痤
- 藿
- 镐
- 铎
- 瘠
- 簌
- 杳
- 铢
- 阡
- 忤
- 舀
- 悻
- 媲
- 茗
- 湍
- 舫
- 瘙
- 瞟
- 擞
- 荀
- 刍
- J
- 潍
- 莴
- 斛
- 郦
- 栩
- 绾
- 蕙
- 黜
- 湄
- 藓
- 躏
- 锱
- 捻
- 佼
- 砝
- E
- 罡
- 忻
- 鹜
- 滟
- 傥
- 蛳
- W
- 铀
- 魇
- 觎
- 蹂
- 佞
- 诃
- 灞
- 镣
- 痱
- 侏
- 峦
- 榛
- 饽
- 龋
- 嗔
- 芍
- 椿
- 璎
- 渥
- 蟾
- 骰
- 吠
- 挛
- 倜
- 鳝
- 糜
- 噢
- 黝
- 藐
- 绡
- 掣
- 鳗
- 璜
- 犷
- 痉
- 膺
- 罄
- 阄
- 纨
- 纭
- 彗
- 嵘
- 埠
- 潢
- 桔
- 耷
- 逵
- 诓
- 怵
- 蚤
- 苯
- 邈
- 谑
- 颌
- 珐
- 踱
- 髻
- 倏
- 啷
- 篑
- 冗
- 蹶
- 荥
- 涧
- 镂
- 踉
- 呷
- 衢
- 荟
- 箴
- 桧
- 恿
- 坳
- 瑙
- 珅
- 莅
- 膘
- 宥
- 氟
- 秆
- 诙
- 蹑
- 茴
- 翳
- 渚
- H
- 唁
- 诿
- 窈
- 窕
- 膻
- 荨
- 蛔
- 筵
- 钛
- 獾
- 琏
- 箩
- 栀
- 隼
- 煸
- 罂
- 蛎
- 咂
- 谗
- 颦
- 佝
- 苣
- 搡
- 仄
- 垠
- 濂
- 泗
- 亟
- 蔺
- 蛆
- 霏
- 榈
- 裟
- 瑁
- 酚
- 蝼
- 怆
- 犄
- 沣
- 揖
- 斡
- 刎
- 鲟
- 峒
- 瞭
- 晁
- 袈
- 蓟
- 镁
- 骥
- 掸
- 玳
- 娑
- 馀
- 跚
- 槃
- 缄
- 猢
- 粕
- 隍
- 佃
- 獗
- 唢
- 菏
- 酰
- 腚
- 笈
- 哙
- 孢
- 飕
- 嘹
- 茱
- 蹒
- 殓
- 柩
- 谀
- 姣
- 戌
- 柑
- 粼
- 淅
- 啧
- 盅
- 鼬
- 啜
- 绉
- 咻
- 锲
- 铆
- Y
- 螨
- 茯
- 憩
- 臼
- 谄
- 讴
- 濠
- 雎
- 噻
- 淦
- 懋
- 尕
- 氦
- 褛
- 颉
- 喆
- 铬
- 褴
- 燮
- 銮
- 侗
- 蹙
- 煜
- 邺
- 锃
- 麋
- 矗
- 娆
- 匐
- 噌
- 潸
- 碘
- 浔
- 檄
- 皈
- 铂
- 遨
- 炜
- 曜
- 饴
- 舷
- 胫
- 叟
- 祎
- 沅
- 潺
- 楣
- 埂
- 瞠
- 幔
- 稞
- 抻
- 匝
- 幄
- 殒
- 瑭
- 袂
- 囫
- 瓴
- 攫
- 鲈
- 箔
- 哝
- 馗
- 蜍
- 痧
- 脘
- 姘
- 苒
- 缢
- 觞
- 蛹
- 饬
- 胄
- 筏
- 鸾
- 儆
- 痿
- 矬
- 酊
- 纾
- 铖
- 荏
- 掬
- 膑
- 贮
- 觊
- 囵
- 泓
- 搔
- 汞
- 蚩
- 婀
- 谧
- 恣
- 霎
- 饕
- 赅
- 鲶
- 梏
- 獠
- 俶
- 龛
- 桅
- 鹄
- 旌
- 鲲
- 姒
- 蠡
- 繇
- 祜
- 诨
- 汩
- 觥
- 孀
- R
- 谥
- 蕨
- 祐
- 榭
- 皑
- 纂
- 獐
- 覃
- 痂
- 孑
- 砧
- 圩
- 桎
- 啵
- 葚
- 嗫
- 浃
- 荠
- 阈
- 遴
- 枇
- 狒
- 秸
- 筠
- 硒
- 卞
- 玷
- 杈
- 狲
- 忿
- 俎
- 拚
- 颍
- 睢
- 颧
- 滦
- 霭
- 雉
- 毽
- 蓑
- 歙
- 鳃
- 鹬
- 墉
- 楔
- 舐
- 绔
- 弭
- 馏
- 挝
- 奂
- 嘭
- 忪
- 箕
- 诌
- 谒
- 颚
- 滂
- 醍
- 洵
- 鹫
- 虢
- 苋
- 玥
- 臾
- 蹩
- Z
- 杷
- 痍
- 酉
- 疸
- 鄢
- 垩
- 烷
- 湮
- 钎
- 樽
- 旮
- 葭
- 邬
- 缱
- 糍
- 亳
- 咦
- 苷
- 伉
- 隽
- 伫
- 聒
- 匍
- 飚
- 桠
- 睑
- 脍
- 焘
- 谶
- 赳
- 萸
- 讣
- 疽
- 臧
- 巽
- 毓
- 鸢
- 纰
- 啐
- 噙
- 舛
- 敕
- 醐
- 痢
- 嚯
- 婺
- 勖
- 岷
- 溧
- 骅
- 犸
- 麾
- 嗟
- 诘
- 懑
- 貔
- 貅
- 啉
- 崂
- 鸩
- 镭
- 绻
- 逑
- 煨
- 褓
- 姝
- 藜
- 溟
- 儋
- 谡
- 欸
- 郢
- 荚
- 疝
- 遽
- 陂
- 饯
- 孪
- 巳
- 荞
- 泔
- 岿
- 谆
- 镍
- 洙
- 佻
- 盂
- 睨
- 铄
- 餮
- 酯
- 癣
- 浜
- 酩
- 焗
- 挲
- 鬃
- 鲠
- 仞
- 诰
- 谔
- 胛
- 萼
- 涿
- 莠
- 珲
- 旯
- 蜢
- 黍
- 肽
- 涪
- 髡
- 氙
- 陉
- 鬶
- 侩
- 糅
- 氤
- 芾
- 砷
- 鳕
- 钣
- 锒
- 闱
- 铵
- 镊
- 玑
- 砀
- 癜
- 颔
- 楹
- 螈
- 醚
- 琮
- 铩
- 笄
- 瓤
- 裨
- 潋
- 悌
- 聿
- 祢
- 郜
- 汨
- 棂
- 氲
- 嶙
- 聩
- 菅
- 腧
- 妯
- 龇
- 谲
- 耄
- 耋
- 囿
- 黢
- 揄
- 鲇
- 仝
- 個
- 忖
- 峋
- 揶
- 迩
- 诳
- 踽
- 骐
- 趸
- 颞
- 撺
- 辇
- 猷
- 铉
- 羸
- 徜
- 徉
- 襁
- 镌
- 孱
- 钒
- 铣
- 呤
- 遑
- 俾
- 皋
- 笕
- 笺
- 趔
- 趄
- 辋
- 鄞
- 殚
- 岫
- 跬
- 嘌
- 苻
- 绶
- 郅
- 瑄
- 萋
- 蘼
- 湎
- 砣
- 钜
- 捭
- 喹
- 恹
- 娌
- 螯
- 锰
- 祚
- 阆
- 矾
- 厩
- 龅
- 炝
- 黠
- 妁
- 濑
- 鞑
- 柒
- 滁
- 淖
- 鸬
- 鬣
- 晔
- 恸
- 赓
- 侉
- 溏
- 還
- 珮
- 鸨
- 嚅
- 笤
- 靥
- 啮
- 滓
- 俚
- 唳
- 苜
- 蓿
- 鹚
- 耦
- 莜
- 麸
- 粳
- 綦
- 盱
- 噤
- 遒
- 玟
- 魍
- 魉
- 旖
- 栉
- 锷
- 醴
- 泮
- 恁
- 甾
- 琬
- 丶
- 擤
- 桉
- 踟
- 誊
- 谟
- 澧
- 玖
- 畿
- 顼
- 兖
- 贰
- 茏
- 愎
- 豇
- 旎
- 蹰
- 蜃
- 屐
- 芡
- 鎏
- 癸
- 卅
- 枥
- 陟
- 琨
- 粝
- 掮
- 妪
- 姹
- 鏖
- 捯
- 钴
- 竽
- 恽
- 佰
- 胗
- 崧
- 磴
- 绺
- 鳏
- 槁
- 啖
- 矍
- 徕
- 忾
- 烃
- 喏
- 囹
- 圄
- 砭
- 邕
- 犍
- 鸮
- 剜
- 琚
- 瘢
- 魑
- 眦
- 锉
- 柘
- 痦
- 苕
- 牯
- 湟
- 厝
- 濛
- 赭
- 馐
- 蜇
- 嶂
- 贲
- 靼
- 臬
- 陲
- 潞
- 芩
- 腓
- 锨
- 寮
- 於
- 洇
- 愠
- 疖
- 鹧
- 鸪
- 茕
- 戕
- 壬
- 庾
- 莒
- 鹈
- 鹕
- 蠹
- 勐
- 疥
- 辎
- 耒
- 嗬
- 沔
- 睥
- 邙
- 篾
- 揩
- 肱
- 胍
- 磬
- 菟
- 豢
- 垓
- 唑
- 剌
- 阗
- 汜
- 佤
- 璟
- 麽
- 鬻
- 怏
- 蕤
- 茭
- 睚
- 淙
- 牍
- 榫
- 濯
- 稹
- 媾
- 悱
- 骶
- 蛭
- 鞣
- 椁
- 槊
- 擢
- 滢
- 佚
- 菡
- 沭
- 扦
- 镆
- 闾
- 缛
- 窠
- 疣
- 骠
- 俅
- 喙
- 蹼
- 硼
- 黩
- 腴
- 醮
- 邛
- 漯
- 豉
- 昶
- 刿
- 凇
- 鲅
- 舸
- 邳
- 俟
- 铰
- 翌
- 鳟
- 葳
- 寤
- 碣
- 秭
- 揠
- 熵
- 燧
- 靛
- 嵊
- 窨
- 鹗
- 芎
- 颢
- 佶
- 骢
- 圜
- 岘
- 燊
- 壅
- 畲
- 萘
- 煊
- 粲
- 倌
- 嗳
- 橹
- 椽
- 夔
- 鲑
- 赧
- 殄
- 沆
- 瀣
- 廪
- 舢
- 狍
- 挈
- 鹳
- 蚜
- 彧
- 羟
- 盥
- 镛
- 痈
- 蜊
- 皲
- 篦
- 喑
- 鲢
- 邡
- 蕲
- 僳
- 秣
- 蛉
- 讫
- 祗
- 鹩
- 撷
- 狎
- 郓
- 镕
- 榉
- 鲷
- 娣
- 淝
- 桷
- 镉
- 郫
- 髌
- 醪
- 僭
- 伧
- 嵬
- 苁
- 鹘
- 徭
- 歃
- 阕
- 鸱
- 貉
- 闳
- 坻
- 缙
- 媪
- 莨
- 菪
- 绦
- 恫
- 崆
- 喟
- 葺
- 逶
- 迤
- 骈
- 馔
- 苎
- 溘
- 垭
- 樯
- 诤
- 魃
- 搽
- 绀
- 蚴
- 澶
- 蒺
- 罘
- 眙
- 怍
- 來
- 荪
- 贶
- 亓
- 唻
- 畈
- 谌
- 芨
- 鲀
- 窸
- 窣
- 荜
- 楫
- 衮
- 趵
- 勰
- 髯
- 椴
- 缶
- 荸
- 秫
- 菖
- 甙
- 翦
- 椟
- 峤
- 掼
- 謇
- 洄
- 鄯
- 妗
- 浐
- 颀
- 箸
- 畦
- 痼
- 橛
- 鲛
- 蝾
- 愍
- 蒹
- 嘁
- 韪
- 劭
- 垅
- 暹
- 僮
- 稗
- 筚
- 煅
- 嬅
- 蜉
- 骝
- 碚
- 冼
- 吶
- 洹
- 郧
- 炴
- 绌
- 泠
- 呓
- 簋
- 溴
- 篁
- 仟
- 锟
- 羧
- 鹞
- 嘬
- 渌
- 笸
- 霰
- 稔
- 钡
- 齁
- 胪
- 衾
- 尻
- 洮
- 蘅
- 鲳
- 殂
- 腭
- 涔
- 蝣
- 孳
- 澍
- 钼
- 蒡
- 枳
- 渑
- 茼
- 馕
- 埙
- 珣
- 菘
- 邰
- 樾
- 铱
- 鳐
- 唔
- 篙
- 箜
- 篌
- 耆
- 啫
- 枞
- 杼
- 嵋
- 舂
- 娉
- 铨
- 崃
- 笳
- 邗
- 逡
- 僖
- 泫
- 疴
- 捱
- 醅
- 堇
- 肄
- 荇
- 虬
- 谯
- 酞
- 桡
- 艮
- 膦
- 艹
- 啻
- 滏
- 茆
- 圪
- 磡
- 麼
- 闼
- 郯
- 仡
- 氐
- 贽
- 俦
- 蓖
- 跹
- 帏
- 氅
- 趿
- 暝
- 缟
- 棹
- 滹
- 毖
- 蝰
- 虻
- 缫
- 诮
- 闩
- ○
- 潴
- 樨
- 瘘
- 襦
- 妤
- 郾
- 衿
- 鸷
- 旰
- 镢
- 傈
- 倨
- 笏
- 蒽
- 醌
- 驽
- 浠
- 涠
- 蓁
- 柞
- 钺
- 蜮
- 诂
- 徵
- 锆
- 椋
- 叻
- 廿
- 藁
- 乜
- 摈
- 這
- 茌
- 辊
- 岬
- 郇
- 杓
- 轳
- 酎
- 蟥
- 時
- 镒
- 蚬
- 澹
- 赟
- 後
- 怿
- 箐
- 囍
- 揆
- 蹁
- 鬄
- 苫
- 蕖
- 卺
- 辔
- 偈
- 俳
- 吲
- 哚
- 瘆
- 蕞
- 笞
- 氩
- 嫘
- 墁
- 帔
- 褡
- 裢
- 乩
- 褊
- 颏
- 喒
- 錾
- 皌
- 戗
- 唪
- 啭
- 伥
- 茔
- 斫
- 齉
- 仵
- 赉
- 吡
- 啶
- 蹇
- 螅
- 汊
- 湓
- 凫
- 珙
- 腈
- 洌
- Ω
- 憷
- 跶
- 抔
- 濞
- 崤
- 殍
- 浥
- 铳
- 酽
- 馑
- 髂
- 隗
- 韫
- 晷
- 诒
- 埭
- 鹪
- 蕻
- 昃
- 瓠
- 萁
- 癔
- 怩
- 疳
- 跖
- 疔
- 簟
- 汆
- 疠
- 卟
- 墒
- 穰
- 铍
- 珥
- 钤
- 隻
- 樓
- 墎
- 鳜
- 沒
- 岀
- 杪
- 単
- 鲧
- 呋
- 彀
- 祇
- 豸
- 胴
- 唷
- 丨
- 燚
- 麴
- 觇
- 缑
- 橐
- 蚡
- 朊
- 俣
- 垡
- <sos/eos>
init: null
input_size: null
ctc_conf:
ignore_nan_grad: true
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
use_preprocessor: true
use_preprocessor_valid: false
token_type: char
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_utt_prefix: null
rir_apply_prob: 1.0
noise_scp: null
noise_utt_prefix: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_zh_char/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 512
attention_heads: 8
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.0
input_layer: conv2d
normalize_before: true
rel_pos_type: latest
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
macaron_style: true
use_cnn_module: true
cnn_module_kernel: 15
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 8
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.0
src_attention_dropout_rate: 0.0
required:
- output_dir
- token_list
version: 0.10.2a1
distributed: true
```
</details>
## LM config
<details><summary>expand</summary>
```
NONE
```
</details>
|
Tuana/eigenfaces-sklearn-lfw | Tuana | 2021-10-27T01:53:23Z | 0 | 1 | null | [
"joblib",
"region:us"
] | null | 2022-03-02T23:29:05Z | # Model to Recognize Faces using eigenfaces and scikit-learn
A simple model trained on a preprocessed excerpt of the “Labeled Faces in the Wild” dataset, aka [LFW](http://vis-www.cs.umass.edu/lfw/)
This demo was taken from [Scikit-learn](https://scikit-learn.org/stable/auto_examples/applications/plot_face_recognition.html)
The dataset includes 7 classes (individuals):
 |
chandank/bart-base-finetuned-kagglenews-entityfiltering | chandank | 2021-10-27T01:06:10Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-base-finetuned-kagglenews-entityfiltering
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-kagglenews-entityfiltering
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5703
- Rouge1: 28.2719
- Rouge2: 15.6883
- Rougel: 24.0674
- Rougelsum: 25.616
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.9187 | 1.0 | 863 | 1.5703 | 28.2719 | 15.6883 | 24.0674 | 25.616 | 20.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu102
- Datasets 1.14.0
- Tokenizers 0.10.3
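## Example usage (sketch)
A minimal sketch assuming the standard `transformers` summarization pipeline; the article string and generation lengths are placeholders.
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="chandank/bart-base-finetuned-kagglenews-entityfiltering",
)

# Placeholder input; replace with the news article you want to summarize.
article = "Replace this string with the news article you want to summarize."
summary = summarizer(article, max_length=60, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```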
|
pritoms/gpt2-finetuned-python2 | pritoms | 2021-10-26T23:15:08Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-finetuned-python2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-finetuned-python2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9454
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 25 | 2.0135 |
| No log | 2.0 | 50 | 1.9618 |
| No log | 3.0 | 75 | 1.9454 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
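## Example usage (sketch)
A minimal sketch assuming the standard `transformers` text-generation pipeline; the prompt and generation settings are arbitrary.
```python
from transformers import pipeline

generator = pipeline("text-generation", model="pritoms/gpt2-finetuned-python2")

# Arbitrary prompt; the model continues the text it was fine-tuned on.
outputs = generator("def fibonacci(n):", max_length=64, num_return_sequences=1)
print(outputs[0]["generated_text"])
```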
|
chaitanya97/german_pretrained | chaitanya97 | 2021-10-26T13:35:37Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: german_pretrained
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# german_pretrained
This model is a fine-tuned version of [flozi00/wav2vec-xlsr-german](https://huggingface.co/flozi00/wav2vec-xlsr-german) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9812
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 12.5229 | 5.0 | 5 | 12.9520 | 1.0 |
| 4.3782 | 10.0 | 10 | 5.5689 | 1.0 |
| 2.56 | 15.0 | 15 | 4.8410 | 1.0 |
| 2.2895 | 20.0 | 20 | 4.0380 | 1.0 |
| 1.872 | 25.0 | 25 | 3.9558 | 1.0 |
| 1.6992 | 30.0 | 30 | 3.9812 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu102
- Datasets 1.13.3
- Tokenizers 0.10.3
|
chaitanya97/german_trained | chaitanya97 | 2021-10-26T12:37:19Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: german_trained
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# german_trained
This model is a fine-tuned version of [flozi00/wav2vec-xlsr-german](https://huggingface.co/flozi00/wav2vec-xlsr-german) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9367
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 12.0352 | 5.0 | 5 | 12.6165 | 1.0 |
| 4.0249 | 10.0 | 10 | 6.6453 | 1.0 |
| 2.6661 | 15.0 | 15 | 5.7873 | 1.0 |
| 2.4123 | 20.0 | 20 | 4.3250 | 1.0 |
| 1.9481 | 25.0 | 25 | 3.9899 | 1.0 |
| 1.7533 | 30.0 | 30 | 3.9367 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu102
- Datasets 1.13.3
- Tokenizers 0.10.3
|
Jihyun22/bert-base-finetuned-nli | Jihyun22 | 2021-10-26T11:07:39Z | 17 | 3 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:klue",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- accuracy
model_index:
- name: bert-base-finetuned-nli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: klue
type: klue
args: nli
metric:
name: Accuracy
type: accuracy
value: 0.756
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-finetuned-nli
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1357
- Accuracy: 0.756
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 196 | 0.7357 | 0.156 |
| No log | 2.0 | 392 | 0.5952 | 0.0993 |
| 0.543 | 3.0 | 588 | 0.5630 | 0.099 |
| 0.543 | 4.0 | 784 | 0.5670 | 0.079 |
| 0.543 | 5.0 | 980 | 0.5795 | 0.078 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
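## Example usage (sketch)
A minimal sketch assuming the checkpoint is loaded with `AutoModelForSequenceClassification`; the Korean premise/hypothesis pair is arbitrary and the label names come from the model config.
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "Jihyun22/bert-base-finetuned-nli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

premise = "오늘은 날씨가 맑다."    # "The weather is clear today." (arbitrary example)
hypothesis = "오늘은 비가 온다."   # "It is raining today."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(dim=-1))])
```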
|
AyushPJ/ai-club-inductions-21-nlp-ELECTRA-base-squad | AyushPJ | 2021-10-26T10:41:20Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"electra",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:04Z | ---
tags:
- generated_from_trainer
model-index:
- name: ai-club-inductions-21-nlp-ELECTRA-base-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-club-inductions-21-nlp-ELECTRA-base-squad
This model is the deepset/electra-base-squad2 pre-trained model, further trained on data from the AI Inductions 21 NLP competition (https://www.kaggle.com/c/ai-inductions-21-nlp) for extractive QA.
## Model description
More information needed
## Intended uses & limitations
AI Inductions 21 NLP competition
## Training and evaluation data
AI Inductions 21 NLP competition data
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- max_length = 512
- doc_stride = 384
- learning_rate: 2e-05
- weight_decay=0.01
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.11.3
- Pytorch 1.7.1+cpu
- Datasets 1.14.0
- Tokenizers 0.10.3
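## Example usage (sketch)
A minimal sketch assuming the standard `transformers` question-answering pipeline; the question and context are placeholders.
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="AyushPJ/ai-club-inductions-21-nlp-ELECTRA-base-squad",
)

# Placeholder question/context pair; replace with competition-style inputs.
result = qa(
    question="What was the model fine-tuned for?",
    context="The model was fine-tuned for extractive question answering "
            "on the AI Inductions 21 NLP competition data.",
)
print(result["answer"], result["score"])
```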
|
mujerry/bert-base-uncased-finetuned-QnA-v1 | mujerry | 2021-10-26T09:19:02Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-QnA-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-QnA-v1
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7610
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 39 | 3.3668 |
| No log | 2.0 | 78 | 3.2134 |
| No log | 3.0 | 117 | 3.1685 |
| No log | 4.0 | 156 | 3.1042 |
| No log | 5.0 | 195 | 3.1136 |
| No log | 6.0 | 234 | 2.9051 |
| No log | 7.0 | 273 | 2.9077 |
| No log | 8.0 | 312 | 2.9774 |
| No log | 9.0 | 351 | 2.9321 |
| No log | 10.0 | 390 | 2.9501 |
| No log | 11.0 | 429 | 2.8544 |
| No log | 12.0 | 468 | 2.8761 |
| 3.0255 | 13.0 | 507 | 2.8152 |
| 3.0255 | 14.0 | 546 | 2.8046 |
| 3.0255 | 15.0 | 585 | 2.6979 |
| 3.0255 | 16.0 | 624 | 2.6379 |
| 3.0255 | 17.0 | 663 | 2.7091 |
| 3.0255 | 18.0 | 702 | 2.6914 |
| 3.0255 | 19.0 | 741 | 2.7403 |
| 3.0255 | 20.0 | 780 | 2.7479 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
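## Example usage (sketch)
A minimal sketch assuming the standard `transformers` fill-mask pipeline; the masked sentence is arbitrary.
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="mujerry/bert-base-uncased-finetuned-QnA-v1")

# Arbitrary masked sentence; [MASK] is the BERT mask token for this tokenizer.
for prediction in fill("The quick brown fox [MASK] over the lazy dog."):
    print(prediction["token_str"], round(prediction["score"], 3))
```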
|
owen99630/catexp2 | owen99630 | 2021-10-26T04:58:10Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | {0: 'Anorexia',
1: 'Anxiety',
2: 'Bullying',
3: 'Care',
4: 'Creativity',
5: 'Culture',
6: 'Depression',
7: 'Friends',
8: 'Getting help',
9: 'Happiness',
10: 'Helping others',
11: 'Helping yourself',
12: 'Hope',
13: 'Learning',
14: 'Life Issues',
15: 'Mental Health',
16: 'Mental Health Matters',
17: 'Mental health awareness',
18: 'PTSD',
19: 'Positivity',
20: 'Resilience',
21: 'Self-care',
22: 'Sharing',
23: 'Support',
24: 'University'} |
huggingtweets/theonion | huggingtweets | 2021-10-26T04:42:42Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/theonion/1635223358201/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/875392068125769732/yrN-1k0Y_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">The Onion</div>
<div style="text-align: center; font-size: 14px;">@theonion</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from The Onion.
| Data | The Onion |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 2 |
| Short tweets | 10 |
| Tweets kept | 3238 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/tl5cqc3f/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @theonion's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1y8p1w9v) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1y8p1w9v/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/theonion')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
AndreLiu1225/t5-news | AndreLiu1225 | 2021-10-26T02:49:39Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:04Z | This is a pretrained model initialized from t5-base and adapted by adjusting the max_length and summary_length settings. |
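A minimal summarization sketch, assuming standard T5 usage with a `summarize:` prefix (the prefix, generation lengths, and placeholder article text are assumptions, not documented by the author):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Sketch only: generic T5-style summarization with this checkpoint.
tokenizer = AutoTokenizer.from_pretrained("AndreLiu1225/t5-news")
model = AutoModelForSeq2SeqLM.from_pretrained("AndreLiu1225/t5-news")

article = "summarize: " + "Your news article text goes here."
inputs = tokenizer(article, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```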
kornesh/xlm-roberta-base | kornesh | 2021-10-26T01:25:22Z | 146 | 1 | transformers | [
"transformers",
"tf",
"xlm-roberta",
"feature-extraction",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-02T23:29:05Z | Converted for TensorFlow
```
!pip install transformers sentencepiece
from transformers import TFAutoModel, AutoTokenizer
name = "xlm-roberta-base"
model = TFAutoModel.from_pretrained(name, from_pt=True)
tokenizer = AutoTokenizer.from_pretrained(name)
model.save_pretrained("local-xlm-roberta-base")
tokenizer.save_pretrained("local-xlm-roberta-base")
``` |
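Once converted and pushed, the checkpoint can presumably be loaded straight from this repository; a small sketch for extracting contextual embeddings:
```python
from transformers import TFAutoModel, AutoTokenizer

# Sketch only: load the converted TensorFlow checkpoint from this repository.
tokenizer = AutoTokenizer.from_pretrained("kornesh/xlm-roberta-base")
model = TFAutoModel.from_pretrained("kornesh/xlm-roberta-base")

inputs = tokenizer("Hello world!", return_tensors="tf")
outputs = model(inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 768)
```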
espnet/siddhana_fsc_asr_train_asr_hubert_transformer_adam_specaug_raw_en_word_valid.acc.ave_5best | espnet | 2021-10-25T23:21:36Z | 0 | 0 | espnet | [
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:fsc",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- fsc
license: cc-by-4.0
---
## ESPnet2 SLU pretrained model
### `siddhana/fsc_asr_train_asr_hubert_transformer_adam_specaug_raw_en_word_valid.acc.ave_5best`
♻️ Imported from https://zenodo.org/record/5590204
This model was trained by siddhana using fsc/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
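Until the official demo is filled in, here is a rough, unverified sketch assuming the generic ESPnet2 `Speech2Text` inference interface works with this checkpoint (the audio path is a placeholder):
```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Assumption: this checkpoint loads through the generic ESPnet2 ASR interface.
speech2text = Speech2Text.from_pretrained(
    "espnet/siddhana_fsc_asr_train_asr_hubert_transformer_adam_specaug_raw_en_word_valid.acc.ave_5best"
)

speech, rate = soundfile.read("utterance.wav")  # 16 kHz mono audio
nbests = speech2text(speech)
text, *_ = nbests[0]
print(text)  # word-level FSC transcript / intent string
```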
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
danielvasic/en_acnl_electra_pipeline | danielvasic | 2021-10-25T18:45:15Z | 4 | 0 | spacy | [
"spacy",
"token-classification",
"en",
"model-index",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
tags:
- spacy
- token-classification
language:
- en
model-index:
- name: en_acnl_electra_pipeline
results:
- task:
name: POS
type: token-classification
metrics:
- name: POS Accuracy
type: accuracy
value: 0.9769257272
- task:
name: SENTER
type: token-classification
metrics:
- name: SENTER Precision
type: precision
value: 0.9508884151
- name: SENTER Recall
type: recall
value: 0.94805839
- name: SENTER F Score
type: f_score
value: 0.9494712937
- task:
name: UNLABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Unlabeled Dependencies Accuracy
type: accuracy
value: 0.9577103137
- task:
name: LABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Labeled Dependencies Accuracy
type: accuracy
value: 0.9577103137
---
| Feature | Description |
| --- | --- |
| **Name** | `en_acnl_electra_pipeline` |
| **Version** | `0.0.1` |
| **spaCy** | `>=3.1.3,<3.2.0` |
| **Default Pipeline** | `transformer`, `tagger`, `parser` |
| **Components** | `transformer`, `tagger`, `parser` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | GPL |
| **Author** | Daniel Vasić |
### Label Scheme
<details>
<summary>View label scheme (87 labels for 2 components)</summary>
| Component | Labels |
| --- | --- |
| **`tagger`** | `$`, `''`, `,`, `-LRB-`, `-RRB-`, `.`, `:`, `ADD`, `AFX`, `CC`, `CD`, `DT`, `EX`, `FW`, `HYPH`, `IN`, `JJ`, `JJR`, `JJS`, `LS`, `MD`, `NFP`, `NN`, `NNP`, `NNPS`, `NNS`, `PDT`, `POS`, `PRP`, `PRP$`, `RB`, `RBR`, `RBS`, `RP`, `SYM`, `TO`, `UH`, `VB`, `VBD`, `VBG`, `VBN`, `VBP`, `VBZ`, `VERB`, `WDT`, `WP`, `WP$`, `WRB`, `XX`, ```` |
| **`parser`** | `ROOT`, `acl`, `acomp`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `auxpass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `dative`, `dep`, `det`, `dobj`, `intj`, `mark`, `meta`, `neg`, `nmod`, `npadvmod`, `nummod`, `parataxis`, `pcomp`, `pobj`, `poss`, `preconj`, `predet`, `prep`, `prt`, `punct`, `quantmod`, `relcl`, `xcomp` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TAG_ACC` | 97.69 |
| `DEP_UAS` | 95.77 |
| `DEP_LAS` | 94.52 |
| `SENTS_P` | 95.09 |
| `SENTS_R` | 94.81 |
| `SENTS_F` | 94.95 |
| `TRANSFORMER_LOSS` | 6123357.72 |
| `TAGGER_LOSS` | 338995.26 |
| `PARSER_LOSS` | 4101825.66 | |
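A minimal usage sketch, assuming the packaged pipeline has been installed into the current environment (e.g. from a wheel built from this repository):
```python
import spacy

# Assumes the pipeline package is installed; it is not downloaded automatically.
nlp = spacy.load("en_acnl_electra_pipeline")
doc = nlp("The quick brown fox jumps over the lazy dog.")
for token in doc:
    print(token.text, token.tag_, token.dep_, token.head.text)
```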
chaitanya97/custom_german | chaitanya97 | 2021-10-25T16:27:15Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: custom_german
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# custom_german
This model is a fine-tuned version of [flozi00/wav2vec-xlsr-german](https://huggingface.co/flozi00/wav2vec-xlsr-german) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 4.6832
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 8.7718 | 5.0 | 5 | 8.5148 | 1.0 |
| 3.7125 | 10.0 | 10 | 5.4304 | 1.0 |
| 2.7679 | 15.0 | 15 | 5.0388 | 1.0 |
| 2.0516 | 20.0 | 20 | 4.4628 | 1.0 |
| 1.6702 | 25.0 | 25 | 4.5341 | 1.0 |
| 1.515 | 30.0 | 30 | 4.6832 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu102
- Datasets 1.13.3
- Tokenizers 0.10.3
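For reference, a minimal inference sketch assuming standard 🤗 Transformers ASR usage (the audio path is a placeholder; note the WER of 1.0 reported above):
```python
from transformers import pipeline

# Sketch only: standard wav2vec2 CTC inference with this checkpoint.
asr = pipeline("automatic-speech-recognition", model="chaitanya97/custom_german")
print(asr("sample_german_audio.wav"))  # expects 16 kHz speech; requires ffmpeg for decoding
```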
|
kwang2049/TSDAE-twitterpara | kwang2049 | 2021-10-25T16:18:44Z | 4 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:2104.06979",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-02T23:29:05Z | # kwang2049/TSDAE-twitterpara
This is a model from the paper ["TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning"](https://arxiv.org/abs/2104.06979). This model was only trained with the TSDAE objective on twitterpara in an unsupervised manner. Training procedure of this model:
1. Initialized with [bert-base-uncased](https://huggingface.co/bert-base-uncased);
2. Unsupervised training on twitterpara with the TSDAE objective;
The pooling method is CLS-pooling.
## Usage
To use this model, a convenient way is through [SentenceTransformers](https://github.com/UKPLab/sentence-transformers). So please install it via:
```bash
pip install sentence-transformers
```
And then load the model and use it to encode sentences:
```python
from sentence_transformers import SentenceTransformer, models
dataset = 'twitterpara'
model_name_or_path = f'kwang2049/TSDAE-{dataset}'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
sentence_embeddings = model.encode(['This is the first sentence.', 'This is the second one.'])
```
## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit [USEB](https://github.com/UKPLab/useb):
```bash
pip install useb # Or git clone and pip install .
python -m useb.downloading all # Download both training and evaluation data
```
And then do the evaluation:
```python
from sentence_transformers import SentenceTransformer, models
import torch
from useb import run_on
dataset = 'twitterpara'
model_name_or_path = f'kwang2049/TSDAE-{dataset}'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
@torch.no_grad()
def semb_fn(sentences) -> torch.Tensor:
return torch.Tensor(model.encode(sentences, show_progress_bar=False))
result = run_on(
dataset,
semb_fn=semb_fn,
eval_type='test',
data_eval_path='data-eval'
)
```
## Training
Please refer to [the page of TSDAE training](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/TSDAE) in SentenceTransformers.
## Cite & Authors
If you use the code for evaluation, feel free to cite our publication [TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning](https://arxiv.org/abs/2104.06979):
```bibtex
@article{wang-2021-TSDAE,
title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning",
author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.06979",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.06979",
}
``` |
kwang2049/TSDAE-cqadupstack | kwang2049 | 2021-10-25T16:18:29Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:2104.06979",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-02T23:29:05Z | # kwang2049/TSDAE-cqadupstack
This is a model from the paper ["TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning"](https://arxiv.org/abs/2104.06979). This model was only trained with the TSDAE objective on cqadupstack in an unsupervised manner. Training procedure of this model:
1. Initialized with [bert-base-uncased](https://huggingface.co/bert-base-uncased);
2. Unsupervised training on cqadupstack with the TSDAE objective;
The pooling method is CLS-pooling.
## Usage
To use this model, a convenient way is through [SentenceTransformers](https://github.com/UKPLab/sentence-transformers). So please install it via:
```bash
pip install sentence-transformers
```
And then load the model and use it to encode sentences:
```python
from sentence_transformers import SentenceTransformer, models
dataset = 'cqadupstack'
model_name_or_path = f'kwang2049/TSDAE-{dataset}'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
sentence_embeddings = model.encode(['This is the first sentence.', 'This is the second one.'])
```
## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit [USEB](https://github.com/UKPLab/useb):
```bash
pip install useb # Or git clone and pip install .
python -m useb.downloading all # Download both training and evaluation data
```
And then do the evaluation:
```python
from sentence_transformers import SentenceTransformer, models
import torch
from useb import run_on
dataset = 'cqadupstack'
model_name_or_path = f'kwang2049/TSDAE-{dataset}'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
@torch.no_grad()
def semb_fn(sentences) -> torch.Tensor:
return torch.Tensor(model.encode(sentences, show_progress_bar=False))
result = run_on(
dataset,
semb_fn=semb_fn,
eval_type='test',
data_eval_path='data-eval'
)
```
## Training
Please refer to [the page of TSDAE training](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/TSDAE) in SentenceTransformers.
## Cite & Authors
If you use the code for evaluation, feel free to cite our publication [TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning](https://arxiv.org/abs/2104.06979):
```bibtex
@article{wang-2021-TSDAE,
title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning",
author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.06979",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.06979",
}
``` |
patrickvonplaten/wav2vec2-base-repro-timit | patrickvonplaten | 2021-10-25T16:17:50Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"timit_asr",
"generated_from_trainer",
"dataset:timit_asr",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
tags:
- automatic-speech-recognition
- timit_asr
- generated_from_trainer
datasets:
- timit_asr
model-index:
- name: wav2vec2-base-repro-timit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-repro-timit
This model is a fine-tuned version of [patrickvonplaten/wav2vec2-base-repro-960h-libri-85k-steps](https://huggingface.co/patrickvonplaten/wav2vec2-base-repro-960h-libri-85k-steps) on the TIMIT_ASR - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8562
- Wer: 0.5484
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.9793 | 0.69 | 100 | 5.4532 | 1.0 |
| 2.9066 | 1.38 | 200 | 2.9070 | 1.0 |
| 2.2562 | 2.07 | 300 | 2.0323 | 1.0 |
| 1.5273 | 2.76 | 400 | 1.1510 | 0.8001 |
| 1.1085 | 3.45 | 500 | 0.9521 | 0.7053 |
| 0.813 | 4.14 | 600 | 0.8617 | 0.6702 |
| 0.8434 | 4.83 | 700 | 0.8068 | 0.6393 |
| 0.9631 | 5.52 | 800 | 0.7863 | 0.6248 |
| 0.707 | 6.21 | 900 | 0.7476 | 0.5973 |
| 0.5568 | 6.9 | 1000 | 0.7350 | 0.5911 |
| 0.6171 | 7.59 | 1100 | 0.7171 | 0.5841 |
| 0.7011 | 8.28 | 1200 | 0.7318 | 0.5798 |
| 0.5546 | 8.97 | 1300 | 0.7447 | 0.5767 |
| 0.4278 | 9.66 | 1400 | 0.7481 | 0.5650 |
| 0.3576 | 10.34 | 1500 | 0.7443 | 0.5713 |
| 0.5506 | 11.03 | 1600 | 0.7574 | 0.5664 |
| 0.4127 | 11.72 | 1700 | 0.8043 | 0.5631 |
| 0.3251 | 12.41 | 1800 | 0.7738 | 0.5550 |
| 0.3119 | 13.1 | 1900 | 0.7829 | 0.5516 |
| 0.4371 | 13.79 | 2000 | 0.8025 | 0.5556 |
| 0.3772 | 14.48 | 2100 | 0.8451 | 0.5559 |
| 0.2942 | 15.17 | 2200 | 0.8300 | 0.5556 |
| 0.2503 | 15.86 | 2300 | 0.8417 | 0.5541 |
| 0.3671 | 16.55 | 2400 | 0.8568 | 0.5528 |
| 0.3867 | 17.24 | 2500 | 0.8521 | 0.5510 |
| 0.2614 | 17.93 | 2600 | 0.8479 | 0.5523 |
| 0.2441 | 18.62 | 2700 | 0.8558 | 0.5494 |
| 0.3059 | 19.31 | 2800 | 0.8553 | 0.5474 |
| 0.3734 | 20.0 | 2900 | 0.8562 | 0.5484 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.1
- Datasets 1.14.1.dev0
- Tokenizers 0.10.3
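For reference, a minimal greedy-decoding sketch assuming standard 🤗 Transformers CTC usage and that the processor files are included in this repo (the audio path is a placeholder):
```python
import torch
import soundfile as sf
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Sketch only: standard CTC decoding with this checkpoint.
processor = Wav2Vec2Processor.from_pretrained("patrickvonplaten/wav2vec2-base-repro-timit")
model = Wav2Vec2ForCTC.from_pretrained("patrickvonplaten/wav2vec2-base-repro-timit")

speech, rate = sf.read("example_16khz.wav")  # 16 kHz mono audio
inputs = processor(speech, sampling_rate=rate, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```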
|
kwang2049/TSDAE-cqadupstack2nli_stsb | kwang2049 | 2021-10-25T16:14:19Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:2104.06979",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-02T23:29:05Z | # kwang2049/TSDAE-cqadupstack2nli_stsb
This is a model from the paper ["TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning"](https://arxiv.org/abs/2104.06979). This model adapts the knowledge from the NLI and STSb data to the specific domain cqadupstack. Training procedure of this model:
1. Initialized with [bert-base-uncased](https://huggingface.co/bert-base-uncased);
2. Unsupervised training on cqadupstack with the TSDAE objective;
3. Supervised training on the NLI data with cross-entropy loss;
4. Supervised training on the STSb data with MSE loss.
The pooling method is CLS-pooling.
## Usage
To use this model, a convenient way is through [SentenceTransformers](https://github.com/UKPLab/sentence-transformers). So please install it via:
```bash
pip install sentence-transformers
```
And then load the model and use it to encode sentences:
```python
from sentence_transformers import SentenceTransformer, models
dataset = 'cqadupstack'
model_name_or_path = f'kwang2049/TSDAE-{dataset}2nli_stsb'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
sentence_embeddings = model.encode(['This is the first sentence.', 'This is the second one.'])
```
## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit [USEB](https://github.com/UKPLab/useb):
```bash
pip install useb # Or git clone and pip install .
python -m useb.downloading all # Download both training and evaluation data
```
And then do the evaluation:
```python
from sentence_transformers import SentenceTransformer, models
import torch
from useb import run_on
dataset = 'cqadupstack'
model_name_or_path = f'kwang2049/TSDAE-{dataset}2nli_stsb'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
@torch.no_grad()
def semb_fn(sentences) -> torch.Tensor:
return torch.Tensor(model.encode(sentences, show_progress_bar=False))
result = run_on(
dataset,
semb_fn=semb_fn,
eval_type='test',
data_eval_path='data-eval'
)
```
## Training
Please refer to [the page of TSDAE training](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/TSDAE) in SentenceTransformers.
## Cite & Authors
If you use the code for evaluation, feel free to cite our publication [TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning](https://arxiv.org/abs/2104.06979):
```bibtex
@article{wang-2021-TSDAE,
title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning",
author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.06979",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.06979",
}
``` |
kwang2049/TSDAE-askubuntu2nli_stsb | kwang2049 | 2021-10-25T16:13:34Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:2104.06979",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-02T23:29:05Z | # kwang2049/TSDAE-askubuntu2nli_stsb
This is a model from the paper ["TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning"](https://arxiv.org/abs/2104.06979). This model adapts the knowledge from the NLI and STSb data to the specific domain AskUbuntu. Training procedure of this model:
1. Initialized with [bert-base-uncased](https://huggingface.co/bert-base-uncased);
2. Unsupervised training on AskUbuntu with the TSDAE objective;
3. Supervised training on the NLI data with cross-entropy loss;
4. Supervised training on the STSb data with MSE loss.
The pooling method is CLS-pooling.
## Usage
To use this model, a convenient way is through [SentenceTransformers](https://github.com/UKPLab/sentence-transformers). So please install it via:
```bash
pip install sentence-transformers
```
And then load the model and use it to encode sentences:
```python
from sentence_transformers import SentenceTransformer, models
dataset = 'askubuntu'
model_name_or_path = f'kwang2049/TSDAE-{dataset}2nli_stsb'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
sentence_embeddings = model.encode(['This is the first sentence.', 'This is the second one.'])
```
## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit [USEB](https://github.com/UKPLab/useb):
```bash
pip install useb # Or git clone and pip install .
python -m useb.downloading all # Download both training and evaluation data
```
And then do the evaluation:
```python
from sentence_transformers import SentenceTransformer, models
import torch
from useb import run_on
dataset = 'askubuntu'
model_name_or_path = f'kwang2049/TSDAE-{dataset}2nli_stsb'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
@torch.no_grad()
def semb_fn(sentences) -> torch.Tensor:
return torch.Tensor(model.encode(sentences, show_progress_bar=False))
result = run_on(
dataset,
semb_fn=semb_fn,
eval_type='test',
data_eval_path='data-eval'
)
```
## Training
Please refer to [the page of TSDAE training](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/TSDAE) in SentenceTransformers.
## Cite & Authors
If you use the code for evaluation, feel free to cite our publication [TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning](https://arxiv.org/abs/2104.06979):
```bibtex
@article{wang-2021-TSDAE,
title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning",
author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.06979",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.06979",
}
``` |
napoler/bart-chinese-6-960-words-pkuseg | napoler | 2021-10-25T15:05:51Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | # 使用
This model was trained on top of uer/bart-chinese-6-960-cluecorpussmall. The amount of training data is not very large, but the default tokenization has been changed.
It uses pkuseg for word segmentation and disables BertTokenizer's do_basic_tokenize; if do_basic_tokenize is not disabled, ordinary words are split character by character, whereas disabling it lets you plug in your own segmentation scheme.
pip install git+https://github.com/napoler/tkit-AutoTokenizerPosition
```python
import pkuseg
from tkitAutoTokenizerPosition.AutoPos import AutoPos
seg = pkuseg.pkuseg(model_name='medicine') # the matching domain-specific model is downloaded automatically
tokenizer = BertTokenizer.from_pretrained("uer/chinese_roberta_L-2_H-128",do_basic_tokenize=False)
ATP=AutoPos(seg,tokenizer)
# clean up issues in the text and tokenize it
ATP.getTokenize(text)
```
The tokenization results look like this:
```
['他', '##们', '的', '伤', '##害', ',', '以', '##及', '陷', '##阱', '能', '##力', '的', '组', '##合', ',', '猎', '##人', '对', '##于', '任', '##何', '团', '##队', '都', '是', '最', '##好', '的', '拉', '##怪', '##者', '.'], 'cut': ['他们', '的', '伤害', ',', '以及', '陷阱', '能力', '的', '组合', ',', '猎人', '对于', '任何', '团队', '都', '是', '最好', '的', '拉怪者', '.']
```
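After segmenting with pkuseg as above, the checkpoint itself can presumably be used like any other BART model. A rough sketch (assumptions: the repo ships tokenizer files — otherwise load the vocabulary from uer/chinese_roberta_L-2_H-128 as in the snippet above — and the generation settings are placeholders):
```python
import pkuseg
from transformers import BertTokenizer, BartForConditionalGeneration

# Sketch only: pre-segment with pkuseg, then run the BART checkpoint.
seg = pkuseg.pkuseg(model_name='medicine')
tokenizer = BertTokenizer.from_pretrained(
    "napoler/bart-chinese-6-960-words-pkuseg", do_basic_tokenize=False
)
model = BartForConditionalGeneration.from_pretrained("napoler/bart-chinese-6-960-words-pkuseg")

text = " ".join(seg.cut("猎人对于任何团队都是最好的拉怪者。"))
enc = tokenizer(text, return_tensors="pt")
outputs = model.generate(
    input_ids=enc["input_ids"], attention_mask=enc["attention_mask"], max_length=64
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```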
https://www.kaggle.com/terrychanorg/napolerbartchinese6960wordspkuseg
https://www.kaggle.com/terrychanorg/buliddataforbert-7803feff2
https://www.kaggle.com/terrychanorg/bart-notebook8wewew6eeb0f8af
https://www.kaggle.com/terrychanorg/fork-of-bart-notebook8wewew6eeb0f8af/data?scriptVersionId=77962540
|
teacookies/autonlp-more_fine_tune_24465520-26265908 | teacookies | 2021-10-25T09:36:35Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"autonlp",
"unk",
"dataset:teacookies/autonlp-data-more_fine_tune_24465520",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-more_fine_tune_24465520
co2_eq_emissions: 96.32087452115675
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 26265908
- CO2 Emissions (in grams): 96.32087452115675
## Validation Metrics
- Loss: 0.5696008801460266
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-more_fine_tune_24465520-26265908
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265908", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265908", use_auth_token=True)
from transformers import BertTokenizer, BertForQuestionAnswering
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
teacookies/autonlp-more_fine_tune_24465520-26265911 | teacookies | 2021-10-25T09:35:36Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"autonlp",
"unk",
"dataset:teacookies/autonlp-data-more_fine_tune_24465520",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-more_fine_tune_24465520
co2_eq_emissions: 97.58591836686978
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 26265911
- CO2 Emissions (in grams): 97.58591836686978
## Validation Metrics
- Loss: 6.2383246421813965
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-more_fine_tune_24465520-26265911
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265911", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265911", use_auth_token=True)
from transformers import BertTokenizer, BertForQuestionAnswering
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
teacookies/autonlp-more_fine_tune_24465520-26265907 | teacookies | 2021-10-25T09:35:36Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"autonlp",
"unk",
"dataset:teacookies/autonlp-data-more_fine_tune_24465520",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-more_fine_tune_24465520
co2_eq_emissions: 103.5636883689371
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 26265907
- CO2 Emissions (in grams): 103.5636883689371
## Validation Metrics
- Loss: 0.6072460412979126
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-more_fine_tune_24465520-26265907
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265907", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265907", use_auth_token=True)
from transformers import BertTokenizer, BertForQuestionAnswering
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
teacookies/autonlp-more_fine_tune_24465520-26265905 | teacookies | 2021-10-25T09:32:48Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"autonlp",
"unk",
"dataset:teacookies/autonlp-data-more_fine_tune_24465520",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-more_fine_tune_24465520
co2_eq_emissions: 103.35758036182682
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 26265905
- CO2 Emissions (in grams): 103.35758036182682
## Validation Metrics
- Loss: 0.5223112106323242
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-more_fine_tune_24465520-26265905
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265905", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265905", use_auth_token=True)
from transformers import BertTokenizer, BertForQuestionAnswering
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
teacookies/autonlp-more_fine_tune_24465520-26265898 | teacookies | 2021-10-25T09:22:22Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"autonlp",
"unk",
"dataset:teacookies/autonlp-data-more_fine_tune_24465520",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-more_fine_tune_24465520
co2_eq_emissions: 82.78379967029494
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 26265898
- CO2 Emissions (in grams): 82.78379967029494
## Validation Metrics
- Loss: 0.5732079148292542
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-more_fine_tune_24465520-26265898
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265898", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265898", use_auth_token=True)
from transformers import BertTokenizer, BertForQuestionAnswering
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
teacookies/autonlp-more_fine_tune_24465520-26265902 | teacookies | 2021-10-25T09:22:00Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"autonlp",
"unk",
"dataset:teacookies/autonlp-data-more_fine_tune_24465520",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-more_fine_tune_24465520
co2_eq_emissions: 83.78453848505326
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 26265902
- CO2 Emissions (in grams): 83.78453848505326
## Validation Metrics
- Loss: 0.5470030903816223
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-more_fine_tune_24465520-26265902
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265902", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265902", use_auth_token=True)
from transformers import BertTokenizer, BertForQuestionAnswering
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
teacookies/autonlp-more_fine_tune_24465520-26265910 | teacookies | 2021-10-25T09:21:45Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"autonlp",
"unk",
"dataset:teacookies/autonlp-data-more_fine_tune_24465520",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-more_fine_tune_24465520
co2_eq_emissions: 77.64468929470678
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 26265910
- CO2 Emissions (in grams): 77.64468929470678
## Validation Metrics
- Loss: 5.950643062591553
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-more_fine_tune_24465520-26265910
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265910", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265910", use_auth_token=True)
from transformers import BertTokenizer, BertForQuestionAnswering
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
teacookies/autonlp-more_fine_tune_24465520-26265897 | teacookies | 2021-10-25T09:21:10Z | 4 | 1 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"autonlp",
"unk",
"dataset:teacookies/autonlp-data-more_fine_tune_24465520",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-more_fine_tune_24465520
co2_eq_emissions: 81.7509252560808
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 26265897
- CO2 Emissions (in grams): 81.7509252560808
## Validation Metrics
- Loss: 0.5754176378250122
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-more_fine_tune_24465520-26265897
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265897", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265897", use_auth_token=True)
from transformers import BertTokenizer, BertForQuestionAnswering
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
teacookies/autonlp-more_fine_tune_24465520-26265901 | teacookies | 2021-10-25T09:21:03Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"autonlp",
"unk",
"dataset:teacookies/autonlp-data-more_fine_tune_24465520",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-more_fine_tune_24465520
co2_eq_emissions: 80.04360178242067
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 26265901
- CO2 Emissions (in grams): 80.04360178242067
## Validation Metrics
- Loss: 0.5551259517669678
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-more_fine_tune_24465520-26265901
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265901", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265901", use_auth_token=True)
from transformers import BertTokenizer, BertForQuestionAnswering
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
teacookies/autonlp-more_fine_tune_24465520-26265909 | teacookies | 2021-10-25T09:20:12Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"autonlp",
"unk",
"dataset:teacookies/autonlp-data-more_fine_tune_24465520",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-more_fine_tune_24465520
co2_eq_emissions: 80.25874179679201
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 26265909
- CO2 Emissions (in grams): 80.25874179679201
## Validation Metrics
- Loss: 5.950643062591553
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-more_fine_tune_24465520-26265909
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265909", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265909", use_auth_token=True)
from transformers import BertTokenizer, BertForQuestionAnswering
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
tftransformers/t5-small | tftransformers | 2021-10-25T08:13:06Z | 4 | 0 | transformers | [
"transformers",
"summarization",
"translation",
"en",
"fr",
"ro",
"de",
"dataset:c4",
"arxiv:1910.10683",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | translation | 2022-03-02T23:29:05Z | ---
language:
- en
- fr
- ro
- de
datasets:
- c4
tags:
- summarization
- translation
license: apache-2.0
---
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
Pretraining Dataset: [C4](https://huggingface.co/datasets/c4)
Other Community Checkpoints: [here](https://huggingface.co/models?search=t5)
Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)
Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*
## Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

## Usage
```
from tf_transformers.models import T5Model
# Any T5 model (t5-small, t5-base, t5-large etc)
model_name = 't5-small'
model = T5Model.from_pretrained(model_name)
``` |
ydshieh/vit-gpt2-coco-en-ckpts | ydshieh | 2021-10-24T12:01:42Z | 32 | 11 | generic | [
"generic",
"pytorch",
"jax",
"tensorboard",
"vision-encoder-decoder",
"image-classification",
"region:us"
] | image-classification | 2022-03-02T23:29:05Z | ---
tags:
- image-classification
library_name: generic
---
## Example
The model is by no means a state-of-the-art model, but nevertheless
produces reasonable image captioning results. It was mainly fine-tuned
as a proof-of-concept for the 🤗 FlaxVisionEncoderDecoder Framework.
The model can be used as follows:
```python
import requests
from PIL import Image
from transformers import ViTFeatureExtractor, AutoTokenizer, FlaxVisionEncoderDecoderModel
loc = "ydshieh/vit-gpt2-coco-en"
feature_extractor = ViTFeatureExtractor.from_pretrained(loc)
tokenizer = AutoTokenizer.from_pretrained(loc)
model = FlaxVisionEncoderDecoderModel.from_pretrained(loc)
# We will verify our results on an image of cute cats
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
with Image.open(requests.get(url, stream=True).raw) as img:
pixel_values = feature_extractor(images=img, return_tensors="np").pixel_values
def generate_step(pixel_values):
output_ids = model.generate(pixel_values, max_length=16, num_beams=4).sequences
preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
preds = [pred.strip() for pred in preds]
return preds
preds = generate_step(pixel_values)
print(preds)
# should produce
# ['a cat laying on top of a couch next to another cat']
``` |
tftransformers/gpt2 | tftransformers | 2021-10-24T08:41:46Z | 1 | 0 | transformers | [
"transformers",
"exbert",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: en
tags:
- exbert
license: mit
---
# GPT-2
Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
Disclaimer: The team releasing GPT-2 also wrote a
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card
has been written by the Hugging Face team to complete the information they provided and give specific examples of bias.
## Model description
GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt.
## Intended uses & limitations
You can use the raw model for text generation or fine-tune it to a downstream task. See the
[model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.
### How to use
You can use this model for text generation. In tf_transformers, the model can be loaded and run as follows:
```python
from tf_transformers.models import GPT2Model
from transformers import GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained("gpt2")
text = "Replace me by any text you'd like."
inputs_tf = {}
inputs = tokenizer(text, return_tensors='tf')
inputs_tf["input_ids"] = inputs["input_ids"]
outputs_tf = model(inputs_tf)
```
### Limitations and bias
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of
unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes.
## Training data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights
40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
## Training procedure
### Preprocessing
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
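A quick check of the vocabulary size and the byte-level BPE pieces described above (illustration only):
```python
from transformers import GPT2Tokenizer

# Inspect the byte-level BPE tokenizer used for pretraining.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
print(len(tokenizer))                      # 50257
print(tokenizer.tokenize("Hello world!"))  # ['Hello', 'Ġworld', '!']
```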
The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact
details of training.
## Evaluation results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
| | 35.13 | 45.99 | 87.65 | 83.4 | 29.41 | 65.85 | 1.16 | 1.17 | 37.50 | 75.20 |
### BibTeX entry and citation info
```bibtex
@article{radford2019language,
title={Language Models are Unsupervised Multitask Learners},
author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
year={2019}
}
```
<a href="https://huggingface.co/exbert/?model=gpt2">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a> |
tftransformers/albert-xxlarge-v2 | tftransformers | 2021-10-24T08:39:00Z | 3 | 0 | transformers | [
"transformers",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# ALBERT XXLarge v2
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1909.11942) and first released in
[this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs.
ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint, however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers as it has to iterate through the same number of (repeating) layers.
This is the second version of the xxlarge model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks.
This model has the following configuration:
- 12 repeating layers
- 128 embedding dimension
- 4096 hidden dimension
- 64 attention heads
- 223M parameters
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=albert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
You can use this model for masked language modeling. In tf_transformers, it can be loaded as follows:
```python
from tf_transformers.models import AlbertModel
from transformers import AlbertTokenizer
tokenizer = AlbertTokenizer.from_pretrained('albert-xxlarge-v2')
model = AlbertModel.from_pretrained("albert-xxlarge-v2")
text = "Replace me by any text you'd like."
inputs_tf = {}
inputs = tokenizer(text, return_tensors='tf')
inputs_tf["input_ids"] = inputs["input_ids"]
inputs_tf["input_type_ids"] = inputs["token_type_ids"]
inputs_tf["input_mask"] = inputs["attention_mask"]
outputs_tf = model(inputs_tf)
```
## Training data
The ALBERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
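For illustration, the same sentence-pair format can be reproduced with the ALBERT tokenizer (a hedged sketch using the Hugging Face tokenizer; not part of the original preprocessing pipeline):

```python
from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
encoded = tokenizer("Sentence A", "Sentence B")
# Decoding the ids shows the two segments wrapped as [CLS] ... [SEP] ... [SEP]
print(tokenizer.decode(encoded["input_ids"]))
```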
### Training
The ALBERT procedure follows the BERT setup.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
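The selection logic can be summarised with a small illustrative sketch (this is not the actual training code, which operates on SentencePiece token ids and also tracks prediction labels, both omitted here):

```python
import random

def mask_tokens(tokens, vocab, mask_token="[MASK]", mask_prob=0.15):
    """Select ~15% of tokens; of the selected ones, 80% become [MASK],
    10% become a random vocabulary token, and 10% are kept unchanged."""
    masked = []
    for tok in tokens:
        if random.random() < mask_prob:
            r = random.random()
            if r < 0.8:
                masked.append(mask_token)            # 80%: replace with [MASK]
            elif r < 0.9:
                masked.append(random.choice(vocab))  # 10%: random token
            else:
                masked.append(tok)                   # 10%: keep as is
        else:
            masked.append(tok)
    return masked

print(mask_tokens("the quick brown fox jumps over the lazy dog".split(),
                  vocab=["cat", "runs", "blue"]))
```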
## Evaluation results
When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
| | Average | SQuAD1.1 | SQuAD2.0 | MNLI | SST-2 | RACE |
|----------------|----------|----------|----------|----------|----------|----------|
|V2 |
|ALBERT-base |82.3 |90.2/83.2 |82.1/79.3 |84.6 |92.9 |66.8 |
|ALBERT-large |85.7 |91.8/85.2 |84.9/81.8 |86.5 |94.9 |75.2 |
|ALBERT-xlarge |87.9 |92.9/86.4 |87.9/84.1 |87.9 |95.4 |80.7 |
|ALBERT-xxlarge |90.9 |94.6/89.1 |89.8/86.9 |90.6 |96.8 |86.8 |
|V1 |
|ALBERT-base |80.1 |89.3/82.3 | 80.0/77.1|81.6 |90.3 | 64.0 |
|ALBERT-large |82.4 |90.6/83.9 | 82.3/79.4|83.5 |91.7 | 68.5 |
|ALBERT-xlarge |85.5 |92.5/86.1 | 86.1/83.1|86.4 |92.4 | 74.8 |
|ALBERT-xxlarge |91.0 |94.8/89.3 | 90.2/87.4|90.8 |96.9 | 86.5 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1909-11942,
author = {Zhenzhong Lan and
Mingda Chen and
Sebastian Goodman and
Kevin Gimpel and
Piyush Sharma and
Radu Soricut},
title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language
Representations},
journal = {CoRR},
volume = {abs/1909.11942},
year = {2019},
url = {http://arxiv.org/abs/1909.11942},
archivePrefix = {arXiv},
eprint = {1909.11942},
timestamp = {Fri, 27 Sep 2019 13:04:21 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
tftransformers/albert-base-v1 | tftransformers | 2021-10-24T08:34:54Z | 2 | 0 | transformers | [
"transformers",
"exbert",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
tags:
- exbert
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# ALBERT Base v1
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1909.11942) and first released in
[this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs.
ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint, however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers as it has to iterate through the same number of (repeating) layers.
This is the first version of the base model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks.
This model has the following configuration:
- 12 repeating layers
- 128 embedding dimension
- 768 hidden dimension
- 12 attention heads
- 11M parameters
## Intended uses & limitations
You can use the raw model for either masked language modeling or sentence ordering prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=albert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT-2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
In tf_transformers
```python
from tf_transformers.models import AlbertModel
from transformers import AlbertTokenizer
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v1')
model = AlbertModel.from_pretrained("albert-base-v1")
text = "Replace me by any text you'd like."
inputs_tf = {}
inputs = tokenizer(text, return_tensors='tf')
inputs_tf["input_ids"] = inputs["input_ids"]
inputs_tf["input_type_ids"] = inputs["token_type_ids"]
inputs_tf["input_mask"] = inputs["attention_mask"]
outputs_tf = model(inputs_tf)
```
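If you only need quick masked-token predictions, the same checkpoint can presumably also be loaded with the standard `transformers` fill-mask pipeline (a hedged alternative to the tf_transformers example above; outputs are not reproduced here):

```python
from transformers import pipeline

# Illustrative fill-mask usage with the plain transformers library.
unmasker = pipeline('fill-mask', model='albert-base-v1')
unmasker("Hello I'm a [MASK] model.")
```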
As with the original ALBERT checkpoints, the model's predictions can reflect biases present in its training data; this bias will also affect all fine-tuned versions of this model.
## Training data
The ALBERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
### Training
The ALBERT procedure follows the BERT setup.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
## Evaluation results
When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
| | Average | SQuAD1.1 | SQuAD2.0 | MNLI | SST-2 | RACE |
|----------------|----------|----------|----------|----------|----------|----------|
|V2 |
|ALBERT-base |82.3 |90.2/83.2 |82.1/79.3 |84.6 |92.9 |66.8 |
|ALBERT-large |85.7 |91.8/85.2 |84.9/81.8 |86.5 |94.9 |75.2 |
|ALBERT-xlarge |87.9 |92.9/86.4 |87.9/84.1 |87.9 |95.4 |80.7 |
|ALBERT-xxlarge |90.9 |94.6/89.1 |89.8/86.9 |90.6 |96.8 |86.8 |
|V1 |
|ALBERT-base |80.1 |89.3/82.3 | 80.0/77.1|81.6 |90.3 | 64.0 |
|ALBERT-large |82.4 |90.6/83.9 | 82.3/79.4|83.5 |91.7 | 68.5 |
|ALBERT-xlarge |85.5 |92.5/86.1 | 86.1/83.1|86.4 |92.4 | 74.8 |
|ALBERT-xxlarge |91.0 |94.8/89.3 | 90.2/87.4|90.8 |96.9 | 86.5 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1909-11942,
author = {Zhenzhong Lan and
Mingda Chen and
Sebastian Goodman and
Kevin Gimpel and
Piyush Sharma and
Radu Soricut},
title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language
Representations},
journal = {CoRR},
volume = {abs/1909.11942},
year = {2019},
url = {http://arxiv.org/abs/1909.11942},
archivePrefix = {arXiv},
eprint = {1909.11942},
timestamp = {Fri, 27 Sep 2019 13:04:21 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=albert-base-v1">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a> |
tftransformers/mt5-small | tftransformers | 2021-10-24T08:18:10Z | 4 | 0 | transformers | [
"transformers",
"multilingual",
"dataset:mc4",
"arxiv:2010.11934",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: multilingual
datasets:
- mc4
license: apache-2.0
---
[Google's mT5](https://github.com/google-research/multilingual-t5)
mT5 is pretrained on the [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) corpus, covering 101 languages:
Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu.
**Note**: mT5 was only pre-trained on mC4 excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task.
Pretraining Dataset: [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual)
Other Community Checkpoints: [here](https://huggingface.co/models?search=mt5)
Paper: [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934)
Authors: *Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel*
## Abstract
The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We describe the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks. All of the code and model checkpoints used in this work are publicly available.
## Usage
```
from tf_transformers.models import MT5Model
# Any MT5 model (mt5-small, mt5-base etc)
model_name = 'mt5-small'
model = MT5Model.from_pretrained(model_name)
``` |
tftransformers/t5-base | tftransformers | 2021-10-24T08:16:17Z | 3 | 0 | transformers | [
"transformers",
"summarization",
"translation",
"en",
"fr",
"ro",
"de",
"dataset:c4",
"arxiv:1910.10683",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | translation | 2022-03-02T23:29:05Z | ---
language:
- en
- fr
- ro
- de
datasets:
- c4
tags:
- summarization
- translation
license: apache-2.0
---
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
Pretraining Dataset: [C4](https://huggingface.co/datasets/c4)
Other Community Checkpoints: [here](https://huggingface.co/models?search=t5)
Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)
Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*
## Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

## Usage
```
from tf_transformers.models import T5Model
# Any T5 model (t5-small, t5-base, t5-large etc); this card is for t5-base
model_name = 't5-base'
model = T5Model.from_pretrained(model_name)
``` |
tftransformers/t5-large | tftransformers | 2021-10-24T08:15:07Z | 2 | 0 | transformers | [
"transformers",
"summarization",
"translation",
"en",
"fr",
"ro",
"de",
"dataset:c4",
"arxiv:1910.10683",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | translation | 2022-03-02T23:29:05Z | ---
language:
- en
- fr
- ro
- de
datasets:
- c4
tags:
- summarization
- translation
license: apache-2.0
---
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
Pretraining Dataset: [C4](https://huggingface.co/datasets/c4)
Other Community Checkpoints: [here](https://huggingface.co/models?search=t5)
Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)
Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*
## Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

## Usage
```
from tf_transformers.models import T5Model
# Any T5 model (t5-small, t5-base, t5-large etc); this card is for t5-large
model_name = 't5-large'
model = T5Model.from_pretrained(model_name)
```
|
mathew/layoutlmv2-finetuned-funsd-1024 | mathew | 2021-10-24T06:13:48Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"generated_from_trainer",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: layoutlmv2-finetuned-funsd-1024
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-finetuned-funsd-1024
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1000
- mixed_precision_training: Native AMP
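For orientation, these hyperparameters roughly correspond to the following `transformers.TrainingArguments` (a hedged reconstruction; the actual training script is not included in this card, and the output directory name is hypothetical):

```python
from transformers import TrainingArguments

# Hypothetical mapping of the listed hyperparameters; Adam betas/epsilon are the library defaults.
training_args = TrainingArguments(
    output_dir="layoutlmv2-finetuned-funsd-1024",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=1000,
    fp16=True,  # Native AMP mixed precision
)
```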
### Training results
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.0+cu101
- Datasets 1.14.0
- Tokenizers 0.10.3
|
huggingartists/sqwore | huggingartists | 2021-10-24T04:23:45Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/sqwore",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- huggingartists/sqwore
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/3557a234d4c5912569afbea078a23eff.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Sqwore</div>
<a href="https://genius.com/artists/sqwore">
<div style="text-align: center; font-size: 14px;">@sqwore</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Sqwore.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/sqwore).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/sqwore")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/3gzd5crq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Sqwore's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/vzeft23g) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/vzeft23g/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/sqwore')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/sqwore")
model = AutoModelWithLMHead.from_pretrained("huggingartists/sqwore")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingtweets/praisegodbarbon | huggingtweets | 2021-10-24T03:47:17Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/praisegodbarbon/1635047234116/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1381764452098437120/74IgKP07_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Boston Psychology PhD</div>
<div style="text-align: center; font-size: 14px;">@praisegodbarbon</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Boston Psychology PhD.
| Data | Boston Psychology PhD |
| --- | --- |
| Tweets downloaded | 3212 |
| Retweets | 810 |
| Short tweets | 265 |
| Tweets kept | 2137 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/h4r5tyq8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @praisegodbarbon's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1o2225sd) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1o2225sd/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/praisegodbarbon')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ddddd/EDCLasVegas | ddddd | 2021-10-24T01:16:07Z | 0 | 0 | null | [
"region:us"
] | null | 2022-03-02T23:29:05Z | https://teespring.com/dashboard/listings/113925135/edit |
huggingtweets/nikkihaleyfan93 | huggingtweets | 2021-10-23T22:45:26Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/nikkihaleyfan93/1635029077906/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1329566476987232256/wpiYdhhz_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Richard Smit 🦅 🚁 🚔 💰 🇻🇦 🇳🇱 🇺🇸 🇬🇧 🇮🇱</div>
<div style="text-align: center; font-size: 14px;">@nikkihaleyfan93</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Richard Smit 🦅 🚁 🚔 💰 🇻🇦 🇳🇱 🇺🇸 🇬🇧 🇮🇱.
| Data | Richard Smit 🦅 🚁 🚔 💰 🇻🇦 🇳🇱 🇺🇸 🇬🇧 🇮🇱 |
| --- | --- |
| Tweets downloaded | 3248 |
| Retweets | 406 |
| Short tweets | 255 |
| Tweets kept | 2587 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/20va5xqa/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @nikkihaleyfan93's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1v26x5ax) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1v26x5ax/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/nikkihaleyfan93')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
espnet/kan-bayashi_ljspeech_joint_finetune_conformer_fastspeech2_hifigan | espnet | 2021-10-23T20:55:12Z | 17 | 16 | espnet | [
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ljspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | text-to-speech | 2022-03-02T23:29:05Z | ---
tags:
- espnet
- audio
- text-to-speech
language: en
datasets:
- ljspeech
license: cc-by-4.0
---
## ESPnet2 TTS pretrained model
### `kan-bayashi/ljspeech_joint_finetune_conformer_fastspeech2_hifigan`
♻️ Imported from https://zenodo.org/record/5498896/
This model was trained by kan-bayashi using ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
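Until the official demo snippet is published, the checkpoint can typically be loaded with the standard ESPnet2 text-to-speech interface, as sketched below (an assumption based on the usual `espnet2` / `espnet_model_zoo` workflow, not an official example for this model):

```python
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

# Download and load the model via its Hugging Face tag (assumed to be supported
# by espnet_model_zoo for this checkpoint).
text2speech = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_ljspeech_joint_finetune_conformer_fastspeech2_hifigan"
)
output = text2speech("Hello, this is a test sentence.")
sf.write("out.wav", output["wav"].numpy(), text2speech.fs)
```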
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
espnet/kan-bayashi_ljspeech_joint_train_conformer_fastspeech2_hifigan | espnet | 2021-10-23T20:54:48Z | 3 | 0 | espnet | [
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ljspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | text-to-speech | 2022-03-02T23:29:05Z | ---
tags:
- espnet
- audio
- text-to-speech
language: en
datasets:
- ljspeech
license: cc-by-4.0
---
## ESPnet2 TTS pretrained model
### `kan-bayashi/ljspeech_joint_train_conformer_fastspeech2_hifigan`
♻️ Imported from https://zenodo.org/record/5498487/
This model was trained by kan-bayashi using ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
espnet/kan-bayashi_libritts_tts_train_xvector_vits_raw_phn_tacotron_g2p_en_no-truncated-09d645 | espnet | 2021-10-23T20:51:46Z | 0 | 0 | espnet | [
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:libritts",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | text-to-speech | 2022-03-02T23:29:05Z | ---
tags:
- espnet
- audio
- text-to-speech
language: en
datasets:
- libritts
license: cc-by-4.0
---
## ESPnet2 TTS pretrained model
### `kan-bayashi/libritts_tts_train_xvector_vits_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave`
♻️ Imported from https://zenodo.org/record/5521416/
This model was trained by kan-bayashi using libritts/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
espnet/kan-bayashi_tsukuyomi_tts_finetune_full_band_jsut_vits_raw_phn_jaconv_pyopenjtalk_prosody_latest | espnet | 2021-10-23T20:50:21Z | 0 | 3 | espnet | [
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:tsukuyomi",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | text-to-speech | 2022-03-02T23:29:05Z | ---
tags:
- espnet
- audio
- text-to-speech
language: ja
datasets:
- tsukuyomi
license: cc-by-4.0
---
## ESPnet2 TTS pretrained model
### `kan-bayashi/tsukuyomi_tts_finetune_full_band_jsut_vits_raw_phn_jaconv_pyopenjtalk_prosody_latest`
♻️ Imported from https://zenodo.org/record/5521446/
This model was trained by kan-bayashi using tsukuyomi/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |