modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
income/jpq-genq-bioasq-question_encoder-base-msmarco-distilbert-tas-b | 3fc4d529a7df18305926dc915d774105ad511131 | 2022-06-16T18:34:00.000Z | [
"pytorch",
"distilbert",
"transformers",
"license:apache-2.0"
] | null | false | income | null | income/jpq-genq-bioasq-question_encoder-base-msmarco-distilbert-tas-b | 0 | null | transformers | 38,200 | ---
license: apache-2.0
---
|
income/jpq-genq-bioasq-document_encoder-base-msmarco-distilbert-tas-b | 7335c0e4daf8c460f5c0702b485136097497fdb9 | 2022-06-16T18:34:34.000Z | [
"pytorch",
"distilbert",
"transformers",
"license:apache-2.0"
] | null | false | income | null | income/jpq-genq-bioasq-document_encoder-base-msmarco-distilbert-tas-b | 0 | null | transformers | 38,201 | ---
license: apache-2.0
---
|
income/jpq-gpl-bioasq-question_encoder-base-msmarco-distilbert-tas-b | 4c02275cf5f0604f0133400d4fa2861075d89a79 | 2022-06-16T18:35:51.000Z | [
"pytorch",
"distilbert",
"transformers",
"license:apache-2.0"
] | null | false | income | null | income/jpq-gpl-bioasq-question_encoder-base-msmarco-distilbert-tas-b | 0 | null | transformers | 38,202 | ---
license: apache-2.0
---
|
income/jpq-gpl-bioasq-document_encoder-base-msmarco-distilbert-tas-b | b3cca6331820bd8ff9b37805e5149eafb0d569bd | 2022-06-16T18:36:57.000Z | [
"pytorch",
"distilbert",
"transformers",
"license:apache-2.0"
] | null | false | income | null | income/jpq-gpl-bioasq-document_encoder-base-msmarco-distilbert-tas-b | 0 | null | transformers | 38,203 | ---
license: apache-2.0
---
|
huggingtweets/alanrmacleod-karl_was_right-yaboihakim | 7af682f4d4dee25a3016adda5f3612ef9a29e23b | 2022-06-16T19:29:02.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/alanrmacleod-karl_was_right-yaboihakim | 0 | null | transformers | 38,204 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1521992020977348609/RrM3MB-G_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1412117139071418386/3bmc9Vk7_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1067405915077468161/tRoXWi8G_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Michael Parenti’s Stache 🚩☭ & Alan MacLeod & Hakim</div>
<div style="text-align: center; font-size: 14px;">@alanrmacleod-karl_was_right-yaboihakim</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Michael Parenti’s Stache 🚩☭ & Alan MacLeod & Hakim.
| Data | Michael Parenti’s Stache 🚩☭ | Alan MacLeod | Hakim |
| --- | --- | --- | --- |
| Tweets downloaded | 3236 | 3244 | 2415 |
| Retweets | 283 | 480 | 709 |
| Short tweets | 360 | 177 | 139 |
| Tweets kept | 2593 | 2587 | 1567 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/38bj8kvf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @alanrmacleod-karl_was_right-yaboihakim's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1klcaw4v) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1klcaw4v/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/alanrmacleod-karl_was_right-yaboihakim')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
philmunz/poc_ud | 73a43eda245a9e8b997ad5d8d89a400e8c8393cf | 2022-06-16T19:32:02.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | philmunz | null | philmunz/poc_ud | 0 | null | transformers | 38,205 | Entry not found |
ouiame/bertGpt2Summ | 0cc039f2a24ea33d49430cad31c9a7dda8c11b0f | 2022-06-17T00:38:07.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"unk",
"dataset:ouiame/autotrain-data-Robertatogpt2",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | ouiame | null | ouiame/bertGpt2Summ | 0 | null | transformers | 38,206 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- ouiame/autotrain-data-Robertatogpt2
co2_eq_emissions: 2.4722651844547827
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 995132940
- CO2 Emissions (in grams): 2.4722651844547827
## Validation Metrics
- Loss: 3.5972988605499268
- Rouge1: 16.1218
- Rouge2: 2.9195
- RougeL: 13.0085
- RougeLsum: 13.2975
- Gen Len: 19.9962
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/ouiame/autotrain-Robertatogpt2-995132940
``` |
ouiame/autotrain-Robertatogpt2-995132944 | 233288662c0f9d701a7d174bd461cfc1057b4cd2 | 2022-06-17T01:09:06.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"unk",
"dataset:ouiame/autotrain-data-Robertatogpt2",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | ouiame | null | ouiame/autotrain-Robertatogpt2-995132944 | 0 | null | transformers | 38,207 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- ouiame/autotrain-data-Robertatogpt2
co2_eq_emissions: 611.0958349328379
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 995132944
- CO2 Emissions (in grams): 611.0958349328379
## Validation Metrics
- Loss: 3.8850467205047607
- Rouge1: 16.6344
- Rouge2: 2.9899
- RougeL: 13.5872
- RougeLsum: 13.9042
- Gen Len: 20.0
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/ouiame/autotrain-Robertatogpt2-995132944
``` |
usaf/ztranslate | a8b41ea9dc45287b7195035fe2c1deec0d585bbf | 2022-06-16T23:32:24.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | usaf | null | usaf/ztranslate | 0 | null | transformers | 38,208 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: ztranslate
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ztranslate
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-sw](https://huggingface.co/Helsinki-NLP/opus-mt-en-sw) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 113 | 0.9276 | 48.8401 | 19.9436 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
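## Example usage (sketch)
The card does not include a usage example. The sketch below assumes the repository keeps the Marian tokenizer and the English-to-Swahili direction of the base [Helsinki-NLP/opus-mt-en-sw](https://huggingface.co/Helsinki-NLP/opus-mt-en-sw) model; both points are assumptions rather than facts stated on this card.
```python
from transformers import pipeline

# Hypothetical sketch: load the fine-tuned checkpoint as a translation pipeline
# (English -> Swahili, inherited from the base opus-mt-en-sw model).
translator = pipeline("translation", model="usaf/ztranslate")
print(translator("The weather is nice today.", max_length=40))
```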
|
huggingtweets/chrisevans-robertdowneyjr | 828b3796cebc9a3001ff43989c79d1c241065e72 | 2022-06-16T20:34:01.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/chrisevans-robertdowneyjr | 0 | null | transformers | 38,209 | ---
language: en
thumbnail: http://www.huggingtweets.com/chrisevans-robertdowneyjr/1655411636421/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1353806309397655553/0zEtkDvx_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1320917504013848577/-VTJLuI9_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Robert Downey Jr & Chris Evans</div>
<div style="text-align: center; font-size: 14px;">@chrisevans-robertdowneyjr</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Robert Downey Jr & Chris Evans.
| Data | Robert Downey Jr | Chris Evans |
| --- | --- | --- |
| Tweets downloaded | 875 | 2075 |
| Retweets | 154 | 684 |
| Short tweets | 70 | 209 |
| Tweets kept | 651 | 1182 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2a0abddd/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @chrisevans-robertdowneyjr's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/hfbdxz6f) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/hfbdxz6f/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/chrisevans-robertdowneyjr')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/leisha_hailey | dd76a73877d52356613afeb4b26d78beb79e50b8 | 2022-06-16T22:08:08.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/leisha_hailey | 0 | null | transformers | 38,210 | ---
language: en
thumbnail: http://www.huggingtweets.com/leisha_hailey/1655417283179/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1601201593/Screen_shot_2011-10-20_at_8.42.01_PM_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Leisha Hailey</div>
<div style="text-align: center; font-size: 14px;">@leisha_hailey</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Leisha Hailey.
| Data | Leisha Hailey |
| --- | --- |
| Tweets downloaded | 1084 |
| Retweets | 77 |
| Short tweets | 66 |
| Tweets kept | 941 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3ecfevcj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @leisha_hailey's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/vat0dsmp) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/vat0dsmp/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/leisha_hailey')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/jbsalvagno | 20e4b35147db51851b11ed7f75a624dd4b06c3f6 | 2022-06-16T22:41:16.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/jbsalvagno | 0 | null | transformers | 38,211 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/817874051146412032/rPvqTOFF_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Javier Bustos</div>
<div style="text-align: center; font-size: 14px;">@jbsalvagno</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Javier Bustos.
| Data | Javier Bustos |
| --- | --- |
| Tweets downloaded | 3179 |
| Retweets | 2756 |
| Short tweets | 30 |
| Tweets kept | 393 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/29wlz981/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jbsalvagno's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/k72pz4ho) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/k72pz4ho/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/jbsalvagno')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
openclimatefix/graph-weather-forecaster-2.0deg | 472840e5bb102fed3970217fef30d8d02a468a40 | 2022-07-04T06:47:16.000Z | [
"pytorch"
] | null | false | openclimatefix | null | openclimatefix/graph-weather-forecaster-2.0deg | 0 | null | null | 38,212 | Entry not found |
gary109/ai-light-dance_singing_ft_wav2vec2-large-xlsr-53-5gram | eb891439357dca31c51d684e736315056eb5b148 | 2022-06-18T02:02:29.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | gary109 | null | gary109/ai-light-dance_singing_ft_wav2vec2-large-xlsr-53-5gram | 0 | null | transformers | 38,213 | Entry not found |
neuralmagic/oBERT-12-upstream-pruned-unstructured-90-v2 | a57aeb0634e93606c033f2b23e58afc7af8e5b2d | 2022-06-20T11:36:51.000Z | [
"pytorch",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2203.07259",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression"
] | null | false | neuralmagic | null | neuralmagic/oBERT-12-upstream-pruned-unstructured-90-v2 | 0 | null | null | 38,214 | ---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets:
- bookcorpus
- wikipedia
---
# oBERT-12-upstream-pruned-unstructured-90-v2
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the upstream pruned model used as a starting point for sparse-transfer learning to downstream tasks presented in the `Table 2 - oBERT - {SQuADv1, MNLI, QQP} - 90%` (in the upcoming updated version of the paper).
Finetuned versions of this model for each downstream task are:
- SQuADv1: `neuralmagic/oBERT-12-upstream-pruned-unstructured-90-finetuned-squadv1-v2`
- MNLI: `neuralmagic/oBERT-12-upstream-pruned-unstructured-90-finetuned-mnli-v2`
- QQP: `neuralmagic/oBERT-12-upstream-pruned-unstructured-90-finetuned-qqp-v2`
```
Pruning method: oBERT upstream unstructured
Paper: https://arxiv.org/abs/2203.07259
Dataset: BookCorpus and English Wikipedia
Sparsity: 90%
Number of layers: 12
```
Code: _coming soon_
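Until the official code is released, the sketch below shows one way to load this checkpoint and verify its weight sparsity. It assumes the repository contains a standard `transformers`-compatible BERT masked-LM checkpoint, which is an assumption rather than something stated on this card.
```python
import torch
from transformers import AutoModelForMaskedLM

# Hypothetical sketch: load the pruned checkpoint (assumes a standard transformers
# config/weights layout) and measure the fraction of exactly-zero weights in the
# encoder's Linear layers, which should be close to the reported 90% sparsity.
model = AutoModelForMaskedLM.from_pretrained(
    "neuralmagic/oBERT-12-upstream-pruned-unstructured-90-v2"
)

zero, total = 0, 0
for name, module in model.named_modules():
    if isinstance(module, torch.nn.Linear) and "encoder" in name:
        weight = module.weight.detach()
        zero += (weight == 0).sum().item()
        total += weight.numel()

print(f"Encoder Linear weight sparsity: {zero / total:.2%}")
```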
## BibTeX entry and citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
``` |
neuralmagic/oBERT-12-upstream-pruned-unstructured-97-v2 | 0f66c665cfbc8f9926befaae96562c9453e17692 | 2022-06-20T11:36:51.000Z | [
"pytorch",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2203.07259",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression"
] | null | false | neuralmagic | null | neuralmagic/oBERT-12-upstream-pruned-unstructured-97-v2 | 0 | null | null | 38,215 | ---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets:
- bookcorpus
- wikipedia
---
# oBERT-12-upstream-pruned-unstructured-97-v2
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the upstream pruned model used as a starting point for sparse-transfer learning to downstream tasks presented in the `Table 2 - oBERT - {SQuADv1, MNLI, QQP} - 97%` (in the upcoming updated version of the paper).
Finetuned versions of this model for each downstream task are:
- SQuADv1: `neuralmagic/oBERT-12-upstream-pruned-unstructured-97-finetuned-squadv1-v2`
- MNLI: `neuralmagic/oBERT-12-upstream-pruned-unstructured-97-finetuned-mnli-v2`
- QQP: `neuralmagic/oBERT-12-upstream-pruned-unstructured-97-finetuned-qqp-v2`
```
Pruning method: oBERT upstream unstructured
Paper: https://arxiv.org/abs/2203.07259
Dataset: BookCorpus and English Wikipedia
Sparsity: 97%
Number of layers: 12
```
Code: _coming soon_
## BibTeX entry and citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
``` |
neuralmagic/oBERT-12-upstream-pruned-unstructured-90-finetuned-squadv1-v2 | b75c87c3ff3c68e83244700a14fe54fdb2b01be6 | 2022-06-20T11:36:50.000Z | [
"pytorch",
"en",
"dataset:squad",
"arxiv:2203.07259",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression"
] | null | false | neuralmagic | null | neuralmagic/oBERT-12-upstream-pruned-unstructured-90-finetuned-squadv1-v2 | 0 | null | null | 38,216 | ---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-12-upstream-pruned-unstructured-90-finetuned-squadv1-v2
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 2 - oBERT - SQuADv1 90%` (in the upcoming updated version of the paper).
```
Pruning method: oBERT upstream unstructured + sparse-transfer to downstream
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 90%
Number of layers: 12
```
The dev-set performance reported in the paper is averaged over four seeds, and we release the best model (marked with `(*)`):
```
| oBERT 90% | F1 | EM |
| ------------ | ----- | ----- |
| seed=42 | 88.55 | 81.48 |
| seed=3407 | 88.34 | 81.25 |
| seed=123 (*)| 88.64 | 81.57 |
| seed=12345 | 88.44 | 81.43 |
| ------------ | ----- | ----- |
| mean | 88.49 | 81.43 |
| stdev | 0.130 | 0.134 |
```
Code: _coming soon_
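Until the official code is released, here is a minimal inference sketch; it assumes the checkpoint loads through the standard `transformers` question-answering classes, which this card does not explicitly confirm.
```python
from transformers import pipeline

# Hypothetical sketch: use the sparse checkpoint as a regular extractive QA model.
qa = pipeline(
    "question-answering",
    model="neuralmagic/oBERT-12-upstream-pruned-unstructured-90-finetuned-squadv1-v2",
)
result = qa(
    question="What sparsity was the model pruned to?",
    context="The oBERT checkpoint was pruned to 90% unstructured sparsity.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```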
## BibTeX entry and citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
``` |
neuralmagic/oBERT-12-upstream-pruned-unstructured-97-finetuned-squadv1-v2 | 6d5b9d18b4a5593678b43da70847b31ddd8e5767 | 2022-06-20T11:36:51.000Z | [
"pytorch",
"en",
"dataset:squad",
"arxiv:2203.07259",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression"
] | null | false | neuralmagic | null | neuralmagic/oBERT-12-upstream-pruned-unstructured-97-finetuned-squadv1-v2 | 0 | null | null | 38,217 | ---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-12-upstream-pruned-unstructured-97-finetuned-squadv1-v2
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 2 - oBERT - SQuADv1 97%` (in the upcoming updated version of the paper).
```
Pruning method: oBERT upstream unstructured + sparse-transfer to downstream
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 97%
Number of layers: 12
```
The dev-set performance reported in the paper is averaged over four seeds, and we release the best model (marked with `(*)`):
```
| oBERT 97% | F1 | EM |
| ------------- | ----- | ----- |
| seed=42 | 84.92 | 76.94 |
| seed=3407 | 84.87 | 76.71 |
| seed=123 | 84.95 | 77.06 |
| seed=12345 (*)| 84.95 | 76.90 |
| ------------- | ----- | ----- |
| mean | 84.92 | 76.90 |
| stdev | 0.037 | 0.145 |
```
Code: _coming soon_
## BibTeX entry and citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
``` |
neuralmagic/oBERT-12-upstream-pruned-unstructured-90-finetuned-mnli-v2 | 7c106710dcac94a4ecf008ff75b657739c62beb6 | 2022-06-20T11:36:50.000Z | [
"pytorch",
"en",
"dataset:mnli",
"arxiv:2203.07259",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression"
] | null | false | neuralmagic | null | neuralmagic/oBERT-12-upstream-pruned-unstructured-90-finetuned-mnli-v2 | 0 | null | null | 38,218 | ---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: mnli
---
# oBERT-12-upstream-pruned-unstructured-90-finetuned-mnli-v2
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 2 - oBERT - MNLI 90%` (in the upcoming updated version of the paper).
```
Pruning method: oBERT upstream unstructured + sparse-transfer to downstream
Paper: https://arxiv.org/abs/2203.07259
Dataset: MNLI
Sparsity: 90%
Number of layers: 12
```
The dev-set performance reported in the paper is averaged over four seeds, and we release the best model (marked with `(*)`):
```
| oBERT 90% | m-acc | mm-acc|
| ------------ | ----- | ----- |
| seed=42 | 83.45 | 84.13 |
| seed=3407 (*)| 83.45 | 83.72 |
| seed=12345 | 83.27 | 83.57 |
| seed=123 | 83.42 | 83.71 |
| ------------ | ----- | ----- |
| mean | 83.40 | 83.78 |
| stdev | 0.086 | 0.241 |
```
Code: _coming soon_
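Until the official code is released, here is a minimal inference sketch for scoring a premise/hypothesis pair; it assumes the checkpoint loads through the standard `transformers` sequence-classification classes and that the label mapping is stored in the config, both of which are assumptions.
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical sketch: score a premise/hypothesis pair with the sparse MNLI model.
name = "neuralmagic/oBERT-12-upstream-pruned-unstructured-90-finetuned-mnli-v2"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer(
    "A man is playing a guitar on stage.",  # premise
    "Someone is performing music.",         # hypothesis
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]

# Label names/order come from the checkpoint config; inspect them before relying on the output.
print({model.config.id2label[i]: round(p, 4) for i, p in enumerate(probs.tolist())})
```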
## BibTeX entry and citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
``` |
neuralmagic/oBERT-12-upstream-pruned-unstructured-90-finetuned-qqp-v2 | b62c1834614d446926cb778f2e632442b9f48944 | 2022-06-20T11:36:50.000Z | [
"pytorch",
"en",
"dataset:qqp",
"arxiv:2203.07259",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression"
] | null | false | neuralmagic | null | neuralmagic/oBERT-12-upstream-pruned-unstructured-90-finetuned-qqp-v2 | 0 | null | null | 38,219 | ---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: qqp
---
# oBERT-12-upstream-pruned-unstructured-90-finetuned-qqp-v2
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 2 - oBERT - QQP 90%` (in the upcoming updated version of the paper).
```
Pruning method: oBERT upstream unstructured + sparse-transfer to downstream
Paper: https://arxiv.org/abs/2203.07259
Dataset: QQP
Sparsity: 90%
Number of layers: 12
```
The dev-set performance reported in the paper is averaged over four seeds, and we release the best model (marked with `(*)`):
```
| oBERT 90% | acc | F1 |
| ------------- | ----- | ----- |
| seed=42 | 90.94 | 87.79 |
| seed=3407 | 91.00 | 87.81 |
| seed=123 | 90.94 | 87.73 |
| seed=12345 (*)| 91.07 | 87.92 |
| ------------- | ----- | ----- |
| mean | 90.99 | 87.81 |
| stdev | 0.061 | 0.079 |
```
Code: _coming soon_
## BibTeX entry and citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
``` |
neuralmagic/oBERT-12-upstream-pruned-unstructured-97-finetuned-qqp-v2 | ed7c12aba3c65881e5bc521bce0335fb08835a65 | 2022-06-20T11:36:51.000Z | [
"pytorch",
"en",
"dataset:qqp",
"arxiv:2203.07259",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression"
] | null | false | neuralmagic | null | neuralmagic/oBERT-12-upstream-pruned-unstructured-97-finetuned-qqp-v2 | 0 | null | null | 38,220 | ---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: qqp
---
# oBERT-12-upstream-pruned-unstructured-97-finetuned-qqp-v2
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 2 - oBERT - QQP 97%` (in the upcoming updated version of the paper).
```
Pruning method: oBERT upstream unstructured + sparse-transfer to downstream
Paper: https://arxiv.org/abs/2203.07259
Dataset: QQP
Sparsity: 97%
Number of layers: 12
```
The dev-set performance reported in the paper is averaged over four seeds, and we release the best model (marked with `(*)`):
```
| oBERT 97% | acc | F1 |
| ------------ | ----- | ----- |
| seed=42 (*)| 90.42 | 87.09 |
| seed=3407 | 90.31 | 86.87 |
| seed=123 | 90.20 | 86.76 |
| seed=12345 | 90.39 | 87.16 |
| ------------ | ----- | ----- |
| mean | 90.33 | 86.97 |
| stdev | 0.098 | 0.186 |
```
Code: _coming soon_
## BibTeX entry and citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
``` |
marcomameli01/segformer-b0-finetuned-segments-gear2 | 87726a2ce74d2f2d4ddcb4e74bc351728bceadbd | 2022-06-17T08:03:25.000Z | [
"pytorch",
"tensorboard",
"segformer",
"transformers",
"vision",
"gear-segmentation",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | null | false | marcomameli01 | null | marcomameli01/segformer-b0-finetuned-segments-gear2 | 0 | null | transformers | 38,221 | ---
license: apache-2.0
tags:
- vision
- gear-segmentation
- generated_from_trainer
model-index:
- name: segformer-b0-finetuned-segments-gear2
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-segments-gear2
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the marcomameli01/gear dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1268
- Mean Iou: 0.1254
- Mean Accuracy: 0.2509
- Overall Accuracy: 0.2509
- Per Category Iou: [0.0, 0.2508641975308642]
- Per Category Accuracy: [nan, 0.2508641975308642]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:--------------------------:|:--------------------------:|
| 0.4614 | 5.0 | 20 | 0.4427 | 0.0741 | 0.1481 | 0.1481 | [0.0, 0.14814814814814814] | [nan, 0.14814814814814814] |
| 0.3327 | 10.0 | 40 | 0.2933 | 0.1726 | 0.3453 | 0.3453 | [0.0, 0.34528395061728395] | [nan, 0.34528395061728395] |
| 0.2305 | 15.0 | 60 | 0.2244 | 0.0382 | 0.0763 | 0.0763 | [0.0, 0.07634567901234568] | [nan, 0.07634567901234568] |
| 0.2011 | 20.0 | 80 | 0.2130 | 0.0374 | 0.0748 | 0.0748 | [0.0, 0.07476543209876543] | [nan, 0.07476543209876543] |
| 0.1846 | 25.0 | 100 | 0.1672 | 0.1037 | 0.2073 | 0.2073 | [0.0, 0.20730864197530866] | [nan, 0.20730864197530866] |
| 0.1622 | 30.0 | 120 | 0.1532 | 0.0805 | 0.1611 | 0.1611 | [0.0, 0.1610864197530864] | [nan, 0.1610864197530864] |
| 0.139 | 35.0 | 140 | 0.1396 | 0.0971 | 0.1942 | 0.1942 | [0.0, 0.19417283950617284] | [nan, 0.19417283950617284] |
| 0.1342 | 40.0 | 160 | 0.1283 | 0.0748 | 0.1496 | 0.1496 | [0.0, 0.14962962962962964] | [nan, 0.14962962962962964] |
| 0.128 | 45.0 | 180 | 0.1224 | 0.1128 | 0.2256 | 0.2256 | [0.0, 0.22558024691358025] | [nan, 0.22558024691358025] |
| 0.1243 | 50.0 | 200 | 0.1268 | 0.1254 | 0.2509 | 0.2509 | [0.0, 0.2508641975308642] | [nan, 0.2508641975308642] |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
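## Example usage (sketch)
The card has no usage section. The sketch below assumes the repository ships the SegFormer image-processor configuration saved during fine-tuning; the input file name is a placeholder.
```python
import torch
from PIL import Image
from transformers import SegformerForSemanticSegmentation, SegformerImageProcessor

# Hypothetical sketch: run the fine-tuned SegFormer on one image and recover a
# per-pixel class map. Assumes a preprocessor config is present in the repo.
name = "marcomameli01/segformer-b0-finetuned-segments-gear2"
processor = SegformerImageProcessor.from_pretrained(name)
model = SegformerForSemanticSegmentation.from_pretrained(name)

image = Image.open("gear_example.png").convert("RGB")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, height/4, width/4)

# Upsample to the original resolution and take the argmax class per pixel.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
segmentation = upsampled.argmax(dim=1)[0]
print(segmentation.shape, segmentation.unique())
```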
|
huggingtweets/iantdr | d71f7a37b1a94fb91412994903ead7dd466e42d4 | 2022-06-17T09:09:33.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/iantdr | 0 | null | transformers | 38,222 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1365703183/YT_Croydon_Flyer_twitter_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ian anderson</div>
<div style="text-align: center; font-size: 14px;">@iantdr</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ian anderson.
| Data | ian anderson |
| --- | --- |
| Tweets downloaded | 3201 |
| Retweets | 2052 |
| Short tweets | 316 |
| Tweets kept | 833 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1bopfm9o/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @iantdr's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1papgk0r) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1papgk0r/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/iantdr')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/aiww-bbcworld-elonmusk | 0f615df728e594fedda865f900887585bce1a619 | 2022-06-17T14:04:23.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/aiww-bbcworld-elonmusk | 0 | null | transformers | 38,223 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1529956155937759233/Nyn1HZWF_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1529107170448523264/q3VwEx38_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/2972716369/e27a35486a2ec507063cb19c89e3ce82_400x400.jpeg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & BBC News (World) & 艾未未 Ai Weiwei</div>
<div style="text-align: center; font-size: 14px;">@aiww-bbcworld-elonmusk</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Elon Musk & BBC News (World) & 艾未未 Ai Weiwei.
| Data | Elon Musk | BBC News (World) | 艾未未 Ai Weiwei |
| --- | --- | --- | --- |
| Tweets downloaded | 3200 | 3250 | 3243 |
| Retweets | 145 | 240 | 680 |
| Short tweets | 966 | 0 | 2116 |
| Tweets kept | 2089 | 3010 | 447 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1xg6gwun/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @aiww-bbcworld-elonmusk's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3f692l8n) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3f692l8n/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/aiww-bbcworld-elonmusk')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
tmc/xbert2 | bfbeb652e95ce587fbb31851ea4bef989ac06a13 | 2022-06-17T15:29:41.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | tmc | null | tmc/xbert2 | 0 | null | transformers | 38,224 | Entry not found |
huggingtweets/hillaryclinton | 5acb83cd70d4df04f1095e134724bb5092b277d9 | 2022-06-17T17:56:06.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/hillaryclinton | 0 | null | transformers | 38,225 | ---
language: en
thumbnail: http://www.huggingtweets.com/hillaryclinton/1655488304536/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1291192333199958017/SvH8J8_P_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Hillary Clinton</div>
<div style="text-align: center; font-size: 14px;">@hillaryclinton</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Hillary Clinton.
| Data | Hillary Clinton |
| --- | --- |
| Tweets downloaded | 3205 |
| Retweets | 781 |
| Short tweets | 63 |
| Tweets kept | 2361 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/29ye0y4d/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @hillaryclinton's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/oqt4g13v) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/oqt4g13v/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/hillaryclinton')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/pdchina | d19ddce2835e11c5472d2f92ad1bd16d433d47a2 | 2022-06-17T18:03:08.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/pdchina | 0 | null | transformers | 38,226 | ---
language: en
thumbnail: http://www.huggingtweets.com/pdchina/1655488982839/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1246469365089939456/jAjE_fKB_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">People's Daily, China</div>
<div style="text-align: center; font-size: 14px;">@pdchina</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from People's Daily, China.
| Data | People's Daily, China |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 20 |
| Short tweets | 2 |
| Tweets kept | 3228 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3b8is5jg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @pdchina's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3rg0kmkg) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3rg0kmkg/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/pdchina')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/itsamedevdev | e5de039f81ec6321f0a41c556642ef56d0dfa4ca | 2022-06-17T20:01:28.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/itsamedevdev | 0 | null | transformers | 38,227 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1502217816421941249/jOIqVIE2_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ItAMeDevDev</div>
<div style="text-align: center; font-size: 14px;">@itsamedevdev</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ItAMeDevDev.
| Data | ItAMeDevDev |
| --- | --- |
| Tweets downloaded | 2842 |
| Retweets | 1052 |
| Short tweets | 474 |
| Tweets kept | 1316 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/lr4yyk0f/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @itsamedevdev's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2advtlvo) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2advtlvo/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/itsamedevdev')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ouiame/bert2gpt2frenchSumm | a9db5822d3fe2ccf1d788938948cd9a9a6890a9a | 2022-06-18T06:31:16.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"unk",
"dataset:ouiame/autotrain-data-orangesum",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | ouiame | null | ouiame/bert2gpt2frenchSumm | 0 | 1 | transformers | 38,228 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- ouiame/autotrain-data-orangesum
co2_eq_emissions: 999.838587232387
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 1000833138
- CO2 Emissions (in grams): 999.838587232387
## Validation Metrics
- Loss: 2.4244203567504883
- Rouge1: 25.7023
- Rouge2: 8.5872
- RougeL: 18.6776
- RougeLsum: 19.821
- Gen Len: 39.732
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/ouiame/autotrain-orangesum-1000833138
``` |
panapelli/nlp-udesa-BertXNLI_uxv | 1150d6ddac2d8192c8dcde62adb113390c78ad48 | 2022-06-18T03:17:24.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | panapelli | null | panapelli/nlp-udesa-BertXNLI_uxv | 0 | null | transformers | 38,229 | Entry not found |
kjunelee/pegasus-samsum | fc39fdf7a0ca41285a21a4c609c1ca864d9280f6 | 2022-06-18T22:35:27.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"dataset:samsum",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | kjunelee | null | kjunelee/pegasus-samsum | 0 | null | transformers | 38,230 | ---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.3.dev0
- Tokenizers 0.12.1
|
pinot/wav2vec2-large-xls-r-300m-turkish-colab | 3b94bd766e1f61a42973d25b14957e46dce35fa6 | 2022-06-18T15:04:03.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | pinot | null | pinot/wav2vec2-large-xls-r-300m-turkish-colab | 0 | null | transformers | 38,231 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7642
- Wer: 0.5894
## Model description
More information needed
## Intended uses & limitations
More information needed
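As a hedged illustration of intended use (the audio path below is a placeholder and 16 kHz mono input is assumed, matching the Common Voice training data):
```python
from transformers import pipeline
# CTC-based Turkish speech recognition with the fine-tuned XLS-R checkpoint
asr = pipeline(
    "automatic-speech-recognition",
    model="pinot/wav2vec2-large-xls-r-300m-turkish-colab",
)
print(asr("sample_turkish_recording.wav")["text"])
```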
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 24.5372 | 9.76 | 400 | 5.2857 | 0.9738 |
| 4.3812 | 19.51 | 800 | 3.6782 | 0.7315 |
| 1.624 | 29.27 | 1200 | 2.7642 | 0.5894 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
varie/poetry-generation-firstline-mbart-all-fi-unsorted | 9f061c333d05312e1f1157c39f492b4948273c00 | 2022-06-18T13:14:21.000Z | [
"pytorch"
] | null | false | varie | null | varie/poetry-generation-firstline-mbart-all-fi-unsorted | 0 | null | null | 38,232 | # poetry-generation-firstline-mbart-all-fi-unsorted
* `firstline`: generates the first poem line from keywords
* `mbart`: base model is [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25)
* `all`: trained on data from Project Gutenberg, Wikisource, Poesia publishing house
* `fi`: Finnish language
* `unsorted`: the order of input keywords does not matter when generating candidates |
varie/poetry-generation-nextline-mbart-ws-sv-multi | 9e73938898cc2cfab934139665536a9a75ce0657 | 2022-07-15T16:16:45.000Z | [
"pytorch"
] | null | false | varie | null | varie/poetry-generation-nextline-mbart-ws-sv-multi | 0 | null | null | 38,233 | # poetry-generation-nextline-mbart-ws-sv-multi
* `nextline`: generates a poem line from previous line(s)
* `mbart`: base model is [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50)
* `ws`: trained on Wikisource data
* `sv`: Swedish language
* `multi`: uses first, second, and third last lines as input for generation |
lmqg/t5-small-squadshifts-vanilla-new_wiki | ca8e91e60af78c5c55d0b7a9dda2d996c8f5ab05 | 2022-06-18T13:55:18.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-small-squadshifts-vanilla-new_wiki | 0 | null | transformers | 38,234 | Entry not found |
lmqg/t5-small-squadshifts-vanilla-nyt | a08136dff403749c4e87f1dcc27b3b4eaab4d03a | 2022-06-20T09:54:38.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-small-squadshifts-vanilla-nyt | 0 | null | transformers | 38,235 | Entry not found |
lmqg/t5-small-squadshifts-vanilla-reddit | 74dcd3a00f12307ffb9da845ab48949f9b04b16c | 2022-06-18T13:58:10.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-small-squadshifts-vanilla-reddit | 0 | null | transformers | 38,236 | Entry not found |
lmqg/t5-base-subjqa-vanilla-electronics | fc2293bdbd53d25290ae0a3c88312d8372873057 | 2022-06-18T13:59:33.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-base-subjqa-vanilla-electronics | 0 | null | transformers | 38,237 | Entry not found |
lmqg/t5-small-squadshifts-vanilla-amazon | 886be36ce99cd5adbcb946cc52ebff07deda2924 | 2022-06-18T13:59:24.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-small-squadshifts-vanilla-amazon | 0 | null | transformers | 38,238 | Entry not found |
lmqg/t5-base-subjqa-vanilla-grocery | a749be8f6af1b24f56080b12d557e5a450c9862c | 2022-06-18T14:02:29.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-base-subjqa-vanilla-grocery | 0 | null | transformers | 38,239 | Entry not found |
lmqg/t5-base-subjqa-vanilla-movies | b932e5395de4503a7ecec22a1acfcb060fe3a096 | 2022-06-18T14:05:23.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-base-subjqa-vanilla-movies | 0 | null | transformers | 38,240 | Entry not found |
lmqg/t5-base-subjqa-vanilla-restaurants | 6eeeef5212b8d34fba8250a9b395635dde71107f | 2022-06-18T14:08:29.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-base-subjqa-vanilla-restaurants | 0 | null | transformers | 38,241 | Entry not found |
lmqg/t5-base-subjqa-vanilla-tripadvisor | 0875c707d7e5fb953e3f775aa46b572c0da301c9 | 2022-06-18T14:11:02.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-base-subjqa-vanilla-tripadvisor | 0 | null | transformers | 38,242 | Entry not found |
lmqg/t5-small-subjqa-vanilla-electronics | 010c812f6c93ae009ca2b2240b22ebd5d1d40dc3 | 2022-06-20T09:54:53.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-small-subjqa-vanilla-electronics | 0 | null | transformers | 38,243 | Entry not found |
lmqg/t5-small-subjqa-vanilla-grocery | 627180da03bddc137c501d325fbee85738692235 | 2022-06-18T14:15:46.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-small-subjqa-vanilla-grocery | 0 | null | transformers | 38,244 | Entry not found |
lmqg/t5-small-subjqa-vanilla-movies | 875183f114b95ad2bb536e45d3c2a4f42402536f | 2022-06-20T09:55:04.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-small-subjqa-vanilla-movies | 0 | null | transformers | 38,245 | Entry not found |
lmqg/t5-small-subjqa-vanilla-restaurants | acdc49b34e65867781501926ad1a010209b4d83d | 2022-06-20T09:55:29.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-small-subjqa-vanilla-restaurants | 0 | null | transformers | 38,246 | Entry not found |
lmqg/t5-small-subjqa-vanilla-tripadvisor | 5db47dc5502273380cecbd907fe5873072ed1ab5 | 2022-06-18T14:20:28.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-small-subjqa-vanilla-tripadvisor | 0 | null | transformers | 38,247 | Entry not found |
vai6hav/wav2vec2-large-xls-r-300m-hindi-epochs15-colab | 4d4c0bc6e8b62802c35eede813e6555d33a00d8b | 2022-06-18T17:42:12.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | vai6hav | null | vai6hav/wav2vec2-large-xls-r-300m-hindi-epochs15-colab | 0 | null | transformers | 38,248 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-hindi-epochs15-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hindi-epochs15-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5705
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 20.2764 | 5.53 | 50 | 8.1197 | 1.0 |
| 5.2964 | 11.11 | 100 | 3.5705 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
varie/poetry-generation-nextline-mbart-all-fi-single | 1e6967b4ff151e6aa7963821c8e25977f327b61c | 2022-06-18T17:52:23.000Z | [
"pytorch"
] | null | false | varie | null | varie/poetry-generation-nextline-mbart-all-fi-single | 0 | null | null | 38,249 | # poetry-generation-nextline-mbart-all-fi-single
* `nextline`: generates a poem line from previous line(s)
* `mbart`: base model is [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25)
* `all`: trained on data from Project Gutenberg, Wikisource, Poesia publishing house
* `fi`: Finnish language
* `single`: uses only last poem line as input for generation |
zakria/Project_NLP | 6750d09ec9590bd4803a0db03f836de82c2a38a4 | 2022-06-18T20:44:06.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | zakria | null | zakria/Project_NLP | 0 | null | transformers | 38,250 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Project_NLP
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Project_NLP
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5324
- Wer: 0.3355
## Model description
More information needed
## Intended uses & limitations
More information needed
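The card does not provide inference code; as a minimal sketch (the audio file is a placeholder and 16 kHz mono input is assumed), the checkpoint can be used for greedy CTC decoding like any wav2vec2 model:
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
model_id = "zakria/Project_NLP"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
# Load a recording and resample to the 16 kHz rate wav2vec2 expects
speech, _ = librosa.load("example.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# Greedy CTC decoding of the most likely token at each frame
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```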
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5697 | 1.0 | 500 | 2.1035 | 0.9979 |
| 0.8932 | 2.01 | 1000 | 0.5649 | 0.5621 |
| 0.4363 | 3.01 | 1500 | 0.4326 | 0.4612 |
| 0.3035 | 4.02 | 2000 | 0.4120 | 0.4191 |
| 0.2343 | 5.02 | 2500 | 0.4199 | 0.3985 |
| 0.1921 | 6.02 | 3000 | 0.4380 | 0.4043 |
| 0.1549 | 7.03 | 3500 | 0.4456 | 0.3925 |
| 0.1385 | 8.03 | 4000 | 0.4264 | 0.3871 |
| 0.1217 | 9.04 | 4500 | 0.4744 | 0.3774 |
| 0.1041 | 10.04 | 5000 | 0.4498 | 0.3745 |
| 0.0968 | 11.04 | 5500 | 0.4716 | 0.3628 |
| 0.0893 | 12.05 | 6000 | 0.4680 | 0.3764 |
| 0.078 | 13.05 | 6500 | 0.5100 | 0.3623 |
| 0.0704 | 14.06 | 7000 | 0.4893 | 0.3552 |
| 0.0659 | 15.06 | 7500 | 0.4956 | 0.3565 |
| 0.0578 | 16.06 | 8000 | 0.5450 | 0.3595 |
| 0.0563 | 17.07 | 8500 | 0.4891 | 0.3614 |
| 0.0557 | 18.07 | 9000 | 0.5307 | 0.3548 |
| 0.0447 | 19.08 | 9500 | 0.4923 | 0.3493 |
| 0.0456 | 20.08 | 10000 | 0.5156 | 0.3479 |
| 0.0407 | 21.08 | 10500 | 0.4979 | 0.3389 |
| 0.0354 | 22.09 | 11000 | 0.5549 | 0.3462 |
| 0.0322 | 23.09 | 11500 | 0.5601 | 0.3439 |
| 0.0342 | 24.1 | 12000 | 0.5131 | 0.3451 |
| 0.0276 | 25.1 | 12500 | 0.5206 | 0.3392 |
| 0.0245 | 26.1 | 13000 | 0.5337 | 0.3373 |
| 0.0226 | 27.11 | 13500 | 0.5311 | 0.3353 |
| 0.0229 | 28.11 | 14000 | 0.5375 | 0.3373 |
| 0.0225 | 29.12 | 14500 | 0.5324 | 0.3355 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
nicolasfeyer/t5-small-finetuned-la-to-en | 181f793e0095af7570451210fadf6bb8c6979ef8 | 2022-06-19T02:21:23.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | nicolasfeyer | null | nicolasfeyer/t5-small-finetuned-la-to-en | 0 | null | transformers | 38,251 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: t5-small-finetuned-la-to-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-la-to-en
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2297
- Bleu: 5.8915
- Gen Len: 16.2252
## Model description
More information needed
## Intended uses & limitations
More information needed
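The exact input format used during fine-tuning (for instance whether a task prefix such as "translate Latin to English:" was prepended) is not documented here, so the sketch below simply feeds raw Latin text through the text2text pipeline:
```python
from transformers import pipeline
translator = pipeline(
    "text2text-generation",
    model="nicolasfeyer/t5-small-finetuned-la-to-en",
)
# A short Latin sentence; the model was fine-tuned for Latin -> English
result = translator("Gallia est omnis divisa in partes tres.", max_length=64)
print(result[0]["generated_text"])
```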
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 3.0883 | 1.0 | 4384 | 2.7499 | 2.8172 | 16.4068 |
| 2.8854 | 2.0 | 8768 | 2.5664 | 3.8141 | 16.4581 |
| 2.746 | 3.0 | 13152 | 2.4524 | 4.3903 | 16.3977 |
| 2.6617 | 4.0 | 17536 | 2.3761 | 4.7858 | 16.3473 |
| 2.6185 | 5.0 | 21920 | 2.3205 | 5.2502 | 16.3161 |
| 2.573 | 6.0 | 26304 | 2.2763 | 5.4374 | 16.2916 |
| 2.5285 | 7.0 | 30688 | 2.2489 | 5.628 | 16.2875 |
| 2.4944 | 8.0 | 35072 | 2.2276 | 5.7201 | 16.291 |
| 2.4749 | 9.0 | 39456 | 2.2164 | 5.8387 | 16.2795 |
| 2.4741 | 10.0 | 43840 | 2.2129 | 5.8654 | 16.2789 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
huggingtweets/alpharad | 759478816fa765d497ac0d1bc5cad6e7f86f39f6 | 2022-06-18T23:23:24.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/alpharad | 0 | null | transformers | 38,252 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1529214002256965632/3nndhYzR_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">jacob alpharad</div>
<div style="text-align: center; font-size: 14px;">@alpharad</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from jacob alpharad.
| Data | jacob alpharad |
| --- | --- |
| Tweets downloaded | 3233 |
| Retweets | 166 |
| Short tweets | 762 |
| Tweets kept | 2305 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1ebzgfhl/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @alpharad's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1cdy6a8d) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1cdy6a8d/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/alpharad')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
panapelli/BertinXNLI_uxv | 26409988bdde9a923ff74fe62091520dd9cbeb4e | 2022-06-19T00:41:56.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | panapelli | null | panapelli/BertinXNLI_uxv | 0 | null | transformers | 38,253 | Entry not found |
gary109/ai-light-dance_singing_ft_wav2vec2-large-xlsr-53-5gram-v2 | 0e37304afcd04d6b5995e28f3d0f6440181e5bdf | 2022-06-19T12:14:27.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"gary109/AI_Light_Dance",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | gary109 | null | gary109/ai-light-dance_singing_ft_wav2vec2-large-xlsr-53-5gram-v2 | 0 | 1 | transformers | 38,254 | ---
tags:
- automatic-speech-recognition
- gary109/AI_Light_Dance
- generated_from_trainer
model-index:
- name: ai-light-dance_singing_ft_wav2vec2-large-xlsr-53-5gram-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_singing_ft_wav2vec2-large-xlsr-53-5gram-v2
This model is a fine-tuned version of [gary109/ai-light-dance_singing_ft_wav2vec2-large-xlsr-53-5gram-v1](https://huggingface.co/gary109/ai-light-dance_singing_ft_wav2vec2-large-xlsr-53-5gram-v1) on the GARY109/AI_LIGHT_DANCE - ONSET-SINGING dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4313
- Wer: 0.1645
## Model description
More information needed
## Intended uses & limitations
More information needed
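A minimal sketch for transcribing longer singing recordings (the file path and chunking values are illustrative assumptions, not settings used by the authors; if the repository ships an n-gram language model, decoding with it may additionally require `pyctcdecode` and `kenlm`):
```python
from transformers import pipeline
asr = pipeline(
    "automatic-speech-recognition",
    model="gary109/ai-light-dance_singing_ft_wav2vec2-large-xlsr-53-5gram-v2",
)
# chunk_length_s splits a long recording into overlapping windows for CTC decoding
result = asr("singing_clip.wav", chunk_length_s=30, stride_length_s=5)
print(result["text"])
```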
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.148 | 1.0 | 552 | 0.4313 | 0.1645 |
| 0.1301 | 2.0 | 1104 | 0.4365 | 0.1618 |
| 0.1237 | 3.0 | 1656 | 0.4470 | 0.1595 |
| 0.1063 | 4.0 | 2208 | 0.4593 | 0.1576 |
| 0.128 | 5.0 | 2760 | 0.4525 | 0.1601 |
| 0.1099 | 6.0 | 3312 | 0.4593 | 0.1567 |
| 0.0969 | 7.0 | 3864 | 0.4625 | 0.1550 |
| 0.0994 | 8.0 | 4416 | 0.4672 | 0.1543 |
| 0.125 | 9.0 | 4968 | 0.4636 | 0.1544 |
| 0.0887 | 10.0 | 5520 | 0.4601 | 0.1538 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
lmqg/t5-large-squadshifts-vanilla-new_wiki | a59ab256976a21dae75dbc352c53fe1358b88aa5 | 2022-06-19T00:39:14.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-large-squadshifts-vanilla-new_wiki | 0 | null | transformers | 38,255 | Entry not found |
huggingtweets/mysta_rias | d7443490fb8163fd3ca00f5816e23ff95b339a96 | 2022-06-19T03:40:55.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/mysta_rias | 0 | null | transformers | 38,256 | ---
language: en
thumbnail: http://www.huggingtweets.com/mysta_rias/1655610050415/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1533221230102433792/Dz_O5gZ7_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Mysta Rias 🕵️♂️🦊 NIJISANJI EN</div>
<div style="text-align: center; font-size: 14px;">@mysta_rias</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Mysta Rias 🕵️♂️🦊 NIJISANJI EN.
| Data | Mysta Rias 🕵️♂️🦊 NIJISANJI EN |
| --- | --- |
| Tweets downloaded | 3245 |
| Retweets | 296 |
| Short tweets | 1005 |
| Tweets kept | 1944 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3r8af65s/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mysta_rias's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/zqhadryd) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/zqhadryd/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mysta_rias')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
zakria/NLP_Project | 6243cd8203f1a6ab8dd70ca94d12a49f8be6076c | 2022-06-19T09:55:56.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | zakria | null | zakria/NLP_Project | 0 | null | transformers | 38,257 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: NLP_Project
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NLP_Project
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5308
- Wer: 0.3428
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5939 | 1.0 | 500 | 2.1356 | 1.0014 |
| 0.9126 | 2.01 | 1000 | 0.5469 | 0.5354 |
| 0.4491 | 3.01 | 1500 | 0.4636 | 0.4503 |
| 0.3008 | 4.02 | 2000 | 0.4269 | 0.4330 |
| 0.2229 | 5.02 | 2500 | 0.4164 | 0.4073 |
| 0.188 | 6.02 | 3000 | 0.4717 | 0.4107 |
| 0.1739 | 7.03 | 3500 | 0.4306 | 0.4031 |
| 0.159 | 8.03 | 4000 | 0.4394 | 0.3993 |
| 0.1342 | 9.04 | 4500 | 0.4462 | 0.3904 |
| 0.1093 | 10.04 | 5000 | 0.4387 | 0.3759 |
| 0.1005 | 11.04 | 5500 | 0.5033 | 0.3847 |
| 0.0857 | 12.05 | 6000 | 0.4805 | 0.3876 |
| 0.0779 | 13.05 | 6500 | 0.5269 | 0.3810 |
| 0.072 | 14.06 | 7000 | 0.5109 | 0.3710 |
| 0.0641 | 15.06 | 7500 | 0.4865 | 0.3638 |
| 0.0584 | 16.06 | 8000 | 0.5041 | 0.3646 |
| 0.0552 | 17.07 | 8500 | 0.4987 | 0.3537 |
| 0.0535 | 18.07 | 9000 | 0.4947 | 0.3586 |
| 0.0475 | 19.08 | 9500 | 0.5237 | 0.3647 |
| 0.042 | 20.08 | 10000 | 0.5338 | 0.3561 |
| 0.0416 | 21.08 | 10500 | 0.5068 | 0.3483 |
| 0.0358 | 22.09 | 11000 | 0.5126 | 0.3532 |
| 0.0334 | 23.09 | 11500 | 0.5213 | 0.3536 |
| 0.0331 | 24.1 | 12000 | 0.5378 | 0.3496 |
| 0.03 | 25.1 | 12500 | 0.5167 | 0.3470 |
| 0.0254 | 26.1 | 13000 | 0.5245 | 0.3418 |
| 0.0233 | 27.11 | 13500 | 0.5393 | 0.3456 |
| 0.0232 | 28.11 | 14000 | 0.5279 | 0.3425 |
| 0.022 | 29.12 | 14500 | 0.5308 | 0.3428 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
gary109/ai-light-dance_singing_ft_wav2vec2-large-xlsr-53-5gram-v3 | 77de3512121a6d01305204940b6d79d2a4d0118c | 2022-06-20T00:32:38.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"gary109/AI_Light_Dance",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | gary109 | null | gary109/ai-light-dance_singing_ft_wav2vec2-large-xlsr-53-5gram-v3 | 0 | null | transformers | 38,258 | ---
tags:
- automatic-speech-recognition
- gary109/AI_Light_Dance
- generated_from_trainer
model-index:
- name: ai-light-dance_singing_ft_wav2vec2-large-xlsr-53-5gram-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_singing_ft_wav2vec2-large-xlsr-53-5gram-v3
This model is a fine-tuned version of [gary109/ai-light-dance_singing_ft_wav2vec2-large-xlsr-53-5gram-v1](https://huggingface.co/gary109/ai-light-dance_singing_ft_wav2vec2-large-xlsr-53-5gram-v1) on the GARY109/AI_LIGHT_DANCE - ONSET-SINGING dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4301
- Wer: 0.1633
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1517 | 1.0 | 552 | 0.4301 | 0.1633 |
| 0.1309 | 2.0 | 1104 | 0.4348 | 0.1629 |
| 0.1237 | 3.0 | 1656 | 0.4611 | 0.1604 |
| 0.1056 | 4.0 | 2208 | 0.4541 | 0.1574 |
| 0.1236 | 5.0 | 2760 | 0.4669 | 0.1603 |
| 0.1118 | 6.0 | 3312 | 0.4640 | 0.1567 |
| 0.0916 | 7.0 | 3864 | 0.4678 | 0.1555 |
| 0.1 | 8.0 | 4416 | 0.4705 | 0.1550 |
| 0.1301 | 9.0 | 4968 | 0.4740 | 0.1551 |
| 0.0885 | 10.0 | 5520 | 0.4702 | 0.1546 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
lmqg/t5-large-squadshifts-vanilla-nyt | 6bfe0f8664bc1c1030a383ffee84639da5e9e45a | 2022-06-19T13:05:23.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-large-squadshifts-vanilla-nyt | 0 | null | transformers | 38,259 | Entry not found |
parinzee/mT5-small-thai-multiple-e2e-qg-numsep | 2ff65422437fc86a0ca2ca4969f669e823ca9e33 | 2022-06-20T03:21:19.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"license:agpl-3.0",
"autotrain_compatible"
] | text2text-generation | false | parinzee | null | parinzee/mT5-small-thai-multiple-e2e-qg-numsep | 0 | null | transformers | 38,260 | ---
license: agpl-3.0
---
|
lmqg/t5-base-squadshifts-vanilla-reddit | 2fa99f326e27b8cadfcda503b0780d24e9233798 | 2022-06-19T14:12:15.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-base-squadshifts-vanilla-reddit | 0 | null | transformers | 38,261 | Entry not found |
lmqg/t5-base-squadshifts-vanilla-amazon | ba68230c8e0b612a0a4e682a99a2eddd2421b3d9 | 2022-06-19T14:14:51.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-base-squadshifts-vanilla-amazon | 0 | null | transformers | 38,262 | Entry not found |
sasuke/opus-mt-en-ro-finetuned-en-to-ro | 1cf1587a47127940cdd84cb32693aa938ab2290f | 2022-06-20T01:17:24.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sasuke | null | sasuke/opus-mt-en-ro-finetuned-en-to-ro | 0 | null | transformers | 38,263 | Entry not found |
huggingtweets/aktualnecz-lidovky-respekt_cz | 5087d7ad3d038cd03f1d65c1377826454e80132a | 2022-06-19T17:46:17.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/aktualnecz-lidovky-respekt_cz | 0 | null | transformers | 38,264 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/869087268560134144/cn6Lujpu_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1496879672726110210/EFcjfPOD_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1415312701044232192/_2a0LBVd_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Lidovky.cz & Aktuálně.cz & Týdeník Respekt</div>
<div style="text-align: center; font-size: 14px;">@aktualnecz-lidovky-respekt_cz</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Lidovky.cz & Aktuálně.cz & Týdeník Respekt.
| Data | Lidovky.cz | Aktuálně.cz | Týdeník Respekt |
| --- | --- | --- | --- |
| Tweets downloaded | 3250 | 3250 | 3250 |
| Retweets | 16 | 1284 | 1600 |
| Short tweets | 0 | 1 | 29 |
| Tweets kept | 3234 | 1965 | 1621 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2pw8532j/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @aktualnecz-lidovky-respekt_cz's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3jss7bff) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3jss7bff/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/aktualnecz-lidovky-respekt_cz')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/soundersfc | 1709f93ea08f9f3c12231c0287c13701e9f183c9 | 2022-07-04T00:05:39.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/soundersfc | 0 | null | transformers | 38,265 | ---
language: en
thumbnail: http://www.huggingtweets.com/soundersfc/1656893134824/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1542935688026370048/DofQNu_P_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Seattle Sounders FC</div>
<div style="text-align: center; font-size: 14px;">@soundersfc</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Seattle Sounders FC.
| Data | Seattle Sounders FC |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 476 |
| Short tweets | 148 |
| Tweets kept | 2626 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/29216l8g/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @soundersfc's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/31kt4kvm) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/31kt4kvm/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/soundersfc')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
joshanashakya/codebert_sourcecode_nmt_ja2pn_50E_2e-05LR_16B_12E_12D | 6930a2ed624385ddb0c5aa4e5833b0a402ec782e | 2022-06-20T01:35:08.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | joshanashakya | null | joshanashakya/codebert_sourcecode_nmt_ja2pn_50E_2e-05LR_16B_12E_12D | 0 | null | transformers | 38,266 | Entry not found |
lmqg/t5-large-squadshifts-vanilla-reddit | 3b9e8d2da774cdf6f9677c51a69df399717ca73b | 2022-06-20T02:04:43.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-large-squadshifts-vanilla-reddit | 0 | null | transformers | 38,267 | Entry not found |
joshanashakya/codebert_sourcecode_nmt_pn2ja_50E_2e-05LR_16B_6E_6D | f4374a8a10e361a1c0b0f45520f2010916f8a4c4 | 2022-06-20T02:26:32.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | joshanashakya | null | joshanashakya/codebert_sourcecode_nmt_pn2ja_50E_2e-05LR_16B_6E_6D | 0 | null | transformers | 38,268 | Entry not found |
joshanashakya/codebert_sourcecode_nmt_ja2pn_50E_2e-05LR_16B_6E_6D | 3d1b9aad7ffbedc165139c74af1a33520d3cfcfc | 2022-06-20T02:29:12.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | joshanashakya | null | joshanashakya/codebert_sourcecode_nmt_ja2pn_50E_2e-05LR_16B_6E_6D | 0 | null | transformers | 38,269 | Entry not found |
huggingtweets/bartoszmilewski | 379abd60f3fb37f770b50747853042aaf8723d73 | 2022-06-20T02:35:22.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/bartoszmilewski | 0 | null | transformers | 38,270 | ---
language: en
thumbnail: http://www.huggingtweets.com/bartoszmilewski/1655692518288/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1000136690/IslandBartosz_400x400.JPG')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Bartosz Milewski</div>
<div style="text-align: center; font-size: 14px;">@bartoszmilewski</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Bartosz Milewski.
| Data | Bartosz Milewski |
| --- | --- |
| Tweets downloaded | 3248 |
| Retweets | 79 |
| Short tweets | 778 |
| Tweets kept | 2391 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2689vaqz/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bartoszmilewski's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1f1jpc3z) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1f1jpc3z/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/bartoszmilewski')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
sotoespinosa32/dummy-model | dc4204a29384bc30c703068b72daf43e46d7ecb0 | 2022-06-20T02:49:32.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | sotoespinosa32 | null | sotoespinosa32/dummy-model | 0 | null | transformers | 38,271 | Entry not found |
raesti/opus-mt-en-ro-finetuned-en-to-ro | bbd76066927ef823704c57d4a28809a26e41bf80 | 2022-06-20T04:33:54.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"dataset:wmt16",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | raesti | null | raesti/opus-mt-en-ro-finetuned-en-to-ro | 0 | null | transformers | 38,272 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: opus-mt-en-ro-finetuned-en-to-ro
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 28.1507
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ro-finetuned-en-to-ro
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2886
- Bleu: 28.1507
- Gen Len: 34.1136
## Model description
More information needed
## Intended uses & limitations
More information needed
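A hedged usage sketch for this English-to-Romanian Marian checkpoint (the input sentence is an arbitrary example):
```python
from transformers import pipeline
translator = pipeline(
    "translation_en_to_ro",
    model="raesti/opus-mt-en-ro-finetuned-en-to-ro",
)
print(translator("The committee approved the new regulation yesterday.")[0]["translation_text"])
```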
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.7437 | 1.0 | 38145 | 1.2886 | 28.1507 | 34.1136 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
joshanashakya/codebert_sourcecode_nmt_ja2pn_100E_2e-05LR_16B_12E_12D | 753728dcd3a1f042d06f5f084123f298d92ccf51 | 2022-06-20T03:41:36.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | joshanashakya | null | joshanashakya/codebert_sourcecode_nmt_ja2pn_100E_2e-05LR_16B_12E_12D | 0 | null | transformers | 38,273 | Entry not found |
joshanashakya/codebert_sourcecode_nmt_ja2pn_100E_2e-05LR_16B_6E_6D | 138d182e39c04319ea7ead5b9a384b8902750e8e | 2022-06-20T06:24:28.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | joshanashakya | null | joshanashakya/codebert_sourcecode_nmt_ja2pn_100E_2e-05LR_16B_6E_6D | 0 | null | transformers | 38,274 | Entry not found |
taprosoft/layoutxlm-no-visual | b9a4afc4288f7da783e7ea72e944369efbe751d3 | 2022-06-20T07:28:01.000Z | [
"pytorch",
"layoutlmv2",
"transformers",
"license:apache-2.0"
] | null | false | taprosoft | null | taprosoft/layoutxlm-no-visual | 0 | null | transformers | 38,275 | ---
license: apache-2.0
---
|
jacobbieker/dgmr | 8a811fe7b2eb077cf5de69945e3c387dab9bf386 | 2022-06-20T07:43:41.000Z | [
"pytorch",
"transformers",
"nowcasting",
"forecasting",
"timeseries",
"remote-sensing",
"gan",
"license:mit"
] | null | false | jacobbieker | null | jacobbieker/dgmr | 0 | 1 | transformers | 38,276 | ---
license: mit
tags:
- nowcasting
- forecasting
- timeseries
- remote-sensing
- gan
---
# DGMR
## Model description
[More information needed]
## Intended uses & limitations
[More information needed]
## How to use
[More information needed]
## Limitations and bias
[More information needed]
## Training data
[More information needed]
## Training procedure
[More information needed]
## Evaluation results
[More information needed]
|
jacobbieker/dgmr-sampler | fb216ce042aafe3ae157f34cb0ec46d67fda3fc0 | 2022-06-20T07:50:50.000Z | [
"pytorch"
] | null | false | jacobbieker | null | jacobbieker/dgmr-sampler | 0 | null | null | 38,277 | Entry not found |
jacobbieker/dgmr-discriminator | d631588c2cb6d5634c6cbd927bd22e2e0e64a379 | 2022-06-20T07:53:59.000Z | [
"pytorch"
] | null | false | jacobbieker | null | jacobbieker/dgmr-discriminator | 0 | null | null | 38,278 | Entry not found |
jacobbieker/dgmr-latent-conditioning-stack | e4442a9ab5ab4c30fafa7ce388d98691f8bb0f17 | 2022-06-20T07:59:02.000Z | [
"pytorch"
] | null | false | jacobbieker | null | jacobbieker/dgmr-latent-conditioning-stack | 0 | null | null | 38,279 | Entry not found |
jacobbieker/dgmr-context-conditioning-stack | c2ee62b4a5ec36a9a6966fbe2e96fd2e7fcec121 | 2022-06-20T08:00:11.000Z | [
"pytorch"
] | null | false | jacobbieker | null | jacobbieker/dgmr-context-conditioning-stack | 0 | null | null | 38,280 | Entry not found |
sanchit-gandhi/wav2vec2-ctc-earnings22-baseline | c35a5f92d2d9b9bedec08dcbbbd30f41e95ac175 | 2022-06-20T12:12:32.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | sanchit-gandhi | null | sanchit-gandhi/wav2vec2-ctc-earnings22-baseline | 0 | null | transformers | 38,281 | Unrolled PT and FX weights of https://huggingface.co/sanchit-gandhi/flax-wav2vec2-ctc-earnings22-baseline/tree/main |
Sampson2022/test | e8c530e049b610ed476b5b3ac084bcb5a417634d | 2022-06-22T12:20:37.000Z | [
"pytorch"
] | null | false | Sampson2022 | null | Sampson2022/test | 0 | null | null | 38,282 | Entry not found |
lmqg/t5-large-squadshifts-vanilla-amazon | c4bb6c88290003e4db06fde28efb4355fe631c35 | 2022-06-20T13:44:02.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-large-squadshifts-vanilla-amazon | 0 | null | transformers | 38,283 | Entry not found |
lmqg/t5-large-subjqa-vanilla-books | d957daabe50bbf83ff53d4975fa5acd584d92652 | 2022-06-20T15:12:53.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-large-subjqa-vanilla-books | 0 | null | transformers | 38,284 | Entry not found |
varie/poetry-generation-nextline-mbart-all-fi-multi | 1da3dd877fa79f51fa3c3a3bb06ef1c9ede761c8 | 2022-07-15T16:09:57.000Z | [
"pytorch"
] | null | false | varie | null | varie/poetry-generation-nextline-mbart-all-fi-multi | 0 | null | null | 38,285 | # poetry-generation-nextline-mbart-all-fi-multi
* `nextline`: generates a poem line from previous line(s)
* `mbart`: base model is [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25)
* `all`: trained on data from Project Gutenberg, Wikisource, Poesia publishing house
* `fi`: Finnish language
* `multi`: uses first, second, and third last lines as input for generation |
furyhawk/xlm-roberta-base-finetuned-panx-de | e5a43e4ff69b04d147491e31fbf6684a98856f85 | 2022-06-21T03:44:32.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | furyhawk | null | furyhawk/xlm-roberta-base-finetuned-panx-de | 0 | null | transformers | 38,286 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.865423959990907
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1360
- F1: 0.8654
## Model description
More information needed
## Intended uses & limitations
More information needed
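A minimal sketch of German named-entity tagging with this checkpoint (the example sentence is an arbitrary illustration):
```python
from transformers import pipeline
ner = pipeline(
    "token-classification",
    model="furyhawk/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
for entity in ner("Angela Merkel besuchte im Mai die Universität Heidelberg."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```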
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2552 | 1.0 | 525 | 0.1621 | 0.8216 |
| 0.1292 | 2.0 | 1050 | 0.1409 | 0.8445 |
| 0.084 | 3.0 | 1575 | 0.1360 | 0.8654 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
|
huggingtweets/dougjballoon | 4878057c95fddefe7f06b13118ead8de760ccca1 | 2022-06-20T16:22:56.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/dougjballoon | 0 | null | transformers | 38,287 | ---
language: en
thumbnail: http://www.huggingtweets.com/dougjballoon/1655742171463/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1449034383420182531/Ava9u8mK_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">New York Times Pitchbot</div>
<div style="text-align: center; font-size: 14px;">@dougjballoon</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from New York Times Pitchbot.
| Data | New York Times Pitchbot |
| --- | --- |
| Tweets downloaded | 3242 |
| Retweets | 471 |
| Short tweets | 214 |
| Tweets kept | 2557 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1yayozkb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dougjballoon's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3sese3rg) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3sese3rg/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dougjballoon')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ornil1/distilbert-base-uncased-finetuned-imdb | 8d2d069f50775acfe696f455e634e405194e7263 | 2022-06-20T19:19:34.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | ornil1 | null | ornil1/distilbert-base-uncased-finetuned-imdb | 0 | null | transformers | 38,288 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
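Absent documented details, a plausible data-preparation sketch (an assumption based on the standard masked-LM fine-tuning recipe for imdb, not the author's confirmed procedure):

```python
from datasets import load_dataset
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

# Assumed recipe: tokenize the imdb text and mask tokens at random for MLM training.
imdb = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True)

tokenized = imdb.map(tokenize, batched=True, remove_columns=["text", "label"])

# 15% masking is the conventional default; the exact value used here is not documented.
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
```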
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4897 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
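## How to use

A minimal usage sketch with the fill-mask pipeline (assuming the standard DistilBERT `[MASK]` token):

```python
from transformers import pipeline

# Load the fine-tuned masked language model.
fill_mask = pipeline("fill-mask", model="ornil1/distilbert-base-uncased-finetuned-imdb")

# The model should favour movie-review vocabulary for the masked position.
print(fill_mask("This movie was an absolute [MASK]."))
```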
|
mcimmy/DialoGPT-small-bob | e20a67712342911d217de2344e2cd628b186a9d1 | 2022-06-20T20:25:17.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | mcimmy | null | mcimmy/DialoGPT-small-bob | 0 | null | transformers | 38,289 | ---
tags:
- conversational
---
# Spongebob DialoGPT |
parinzee/mT5-small-thai-multiple-e2e-qg-aug-numsep | d06bd465d81eb9233576a65d28aef6aefa8c0dbb | 2022-06-21T05:47:30.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"license:agpl-3.0",
"autotrain_compatible"
] | text2text-generation | false | parinzee | null | parinzee/mT5-small-thai-multiple-e2e-qg-aug-numsep | 0 | null | transformers | 38,290 | ---
license: agpl-3.0
---
|
gary109/ai-light-dance_singing_ft_wav2vec2-large-xlsr-53-5gram-v4 | 6a26f8b6eae5fc11bda226637f4aab6494058167 | 2022-06-22T02:22:03.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"gary109/AI_Light_Dance",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | gary109 | null | gary109/ai-light-dance_singing_ft_wav2vec2-large-xlsr-53-5gram-v4 | 0 | 1 | transformers | 38,291 | ---
tags:
- automatic-speech-recognition
- gary109/AI_Light_Dance
- generated_from_trainer
model-index:
- name: ai-light-dance_singing_ft_wav2vec2-large-xlsr-53-5gram-v4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_singing_ft_wav2vec2-large-xlsr-53-5gram-v4
This model is a fine-tuned version of [gary109/ai-light-dance_singing_ft_wav2vec2-large-xlsr-53-5gram-v2](https://huggingface.co/gary109/ai-light-dance_singing_ft_wav2vec2-large-xlsr-53-5gram-v2) on the GARY109/AI_LIGHT_DANCE - ONSET-SINGING dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4231
- Wer: 0.1597
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10.0
- mixed_precision_training: Native AMP
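The total train batch size reported above follows from the per-device batch size and gradient accumulation; a quick check (assuming a single-device run):

```python
per_device_train_batch_size = 8
gradient_accumulation_steps = 16
num_devices = 1  # assumption: one GPU

total_train_batch_size = per_device_train_batch_size * gradient_accumulation_steps * num_devices
assert total_train_batch_size == 128
```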
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1335 | 1.0 | 138 | 0.4256 | 0.1605 |
| 0.1288 | 2.0 | 276 | 0.4234 | 0.1602 |
| 0.1278 | 3.0 | 414 | 0.4243 | 0.1597 |
| 0.1345 | 4.0 | 552 | 0.4231 | 0.1597 |
| 0.1344 | 5.0 | 690 | 0.4246 | 0.1597 |
| 0.1237 | 6.0 | 828 | 0.4279 | 0.1595 |
| 0.1109 | 7.0 | 966 | 0.4354 | 0.1573 |
| 0.1247 | 8.0 | 1104 | 0.4318 | 0.1570 |
| 0.1372 | 9.0 | 1242 | 0.4341 | 0.1573 |
| 0.1256 | 10.0 | 1380 | 0.4328 | 0.1575 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
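## How to use

A minimal usage sketch with the automatic-speech-recognition pipeline (an assumption based on the model tags; the 5-gram decoding implied by the checkpoint name may additionally require `pyctcdecode` and `kenlm`):

```python
from transformers import pipeline

# Load the fine-tuned wav2vec2 checkpoint for singing transcription.
asr = pipeline(
    "automatic-speech-recognition",
    model="gary109/ai-light-dance_singing_ft_wav2vec2-large-xlsr-53-5gram-v4",
)

# "singing.wav" is a hypothetical 16 kHz audio file.
print(asr("singing.wav"))
```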
|
huggingtweets/coinmamba | 74a27714e8f24f8b10c3b32e5f66d75094a4a985 | 2022-06-21T10:44:21.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/coinmamba | 0 | null | transformers | 38,292 | ---
language: en
thumbnail: http://www.huggingtweets.com/coinmamba/1655808256840/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1523748536168464384/feZm38Pe_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">CoinMamba</div>
<div style="text-align: center; font-size: 14px;">@coinmamba</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from CoinMamba.
| Data | CoinMamba |
| --- | --- |
| Tweets downloaded | 3243 |
| Retweets | 41 |
| Short tweets | 608 |
| Tweets kept | 2594 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2as2s722/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @coinmamba's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1zewdmar) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1zewdmar/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/coinmamba')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
kravchenko/uk-mt5-small-gec-synthetic | d76b4d198adc1fa2af7eff940cf9454c49591064 | 2022-06-21T12:59:30.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | kravchenko | null | kravchenko/uk-mt5-small-gec-synthetic | 0 | null | transformers | 38,293 | Entry not found |
nielsr/test-flair-model | 218b67f0c9a460213c160db5cc35f21e8ac30d7c | 2022-06-21T13:06:55.000Z | [
"pytorch"
] | null | false | nielsr | null | nielsr/test-flair-model | 0 | null | null | 38,294 | Entry not found |
lmqg/t5-large-subjqa-vanilla-movies | c464b7c98e6a245c7a67ceab115e737c51250ac6 | 2022-06-21T14:12:14.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-large-subjqa-vanilla-movies | 0 | null | transformers | 38,295 | Entry not found |
kravchenko/uk-mt5-small-gec-synthetic-2 | 8b7f9b414593fcfe49bc653b9ff4d0ddac1c3e89 | 2022-06-21T15:53:24.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | kravchenko | null | kravchenko/uk-mt5-small-gec-synthetic-2 | 0 | null | transformers | 38,296 | Entry not found |
lmqg/t5-large-subjqa-vanilla-restaurants | 9527a00b24a547a65005d5fd22bdb40b030a824e | 2022-06-21T15:32:37.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-large-subjqa-vanilla-restaurants | 0 | null | transformers | 38,297 | Entry not found |
lmqg/t5-large-subjqa-vanilla-tripadvisor | 8aba68a9e689c073fe59fb2e1d891f3a57d320df | 2022-06-21T17:21:22.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-large-subjqa-vanilla-tripadvisor | 0 | null | transformers | 38,298 | Entry not found |
lmqg/bart-large-squadshifts-new_wiki | b1e6a532040f53def77ac40a7bf2dd34bc7e5ac3 | 2022-06-22T10:45:16.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/bart-large-squadshifts-new_wiki | 0 | null | transformers | 38,299 | Entry not found |