modelId (string, 4–112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool) | author (string, 2–38 chars, nullable) | config (null) | id (string, 4–112 chars) | downloads (float64, 0–36.8M, nullable) | likes (float64, 0–712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0–38.5k) | readme (string, 0–186k chars)
---|---|---|---|---|---|---|---|---|---|---|---|---|---
huggingtweets/coffee__burger | ca8f5e0c262ae77d1b9198007589167dd5fcb932 | 2022-03-01T09:06:14.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/coffee__burger | 0 | null | transformers | 36,400 | ---
language: en
thumbnail: http://www.huggingtweets.com/coffee__burger/1646125569654/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/794725967948181506/Zn4x_F6i_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Coffee Burger</div>
<div style="text-align: center; font-size: 14px;">@coffee__burger</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Coffee Burger.
| Data | Coffee Burger |
| --- | --- |
| Tweets downloaded | 2471 |
| Retweets | 525 |
| Short tweets | 337 |
| Tweets kept | 1609 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3ad82qis/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
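The counts follow straightforward filtering (tweets kept = tweets downloaded − retweets − short tweets). A minimal sketch of that kind of filtering, with a hypothetical `tweets` list and an assumed word-count threshold, not the actual huggingtweets preprocessing:
```python
# Hypothetical filtering illustrating the counts above; the real preprocessing
# lives in the huggingtweets repository and may differ.
tweets = [
    {"text": "Just brewed a fresh pot of coffee this fine morning", "is_retweet": False},
    {"text": "RT someone else's burger review", "is_retweet": True},
    {"text": "lol", "is_retweet": False},
]

kept = [
    t for t in tweets
    # drop retweets and short tweets (the 3-word threshold is an assumption)
    if not t["is_retweet"] and len(t["text"].split()) > 3
]
print(len(kept))  # 1
```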
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @coffee__burger's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1kxzm2oz) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1kxzm2oz/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/coffee__burger')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/berniesanders-cnn-dril | 575e2ad494733509ce6742c0d8e210c974e0ceca | 2022-03-01T09:43:27.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/berniesanders-cnn-dril | 0 | null | transformers | 36,401 | ---
language: en
thumbnail: http://www.huggingtweets.com/berniesanders-cnn-dril/1646127802129/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1097820307388334080/9ddg5F6v_400x400.png')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/847818629840228354/VXyQHfn0_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1278259160644227073/MfCyF7CG_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Bernie Sanders & wint & CNN</div>
<div style="text-align: center; font-size: 14px;">@berniesanders-cnn-dril</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Bernie Sanders & wint & CNN.
| Data | Bernie Sanders | wint | CNN |
| --- | --- | --- | --- |
| Tweets downloaded | 3250 | 3229 | 3250 |
| Retweets | 429 | 473 | 30 |
| Short tweets | 10 | 300 | 6 |
| Tweets kept | 2811 | 2456 | 3214 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1yapgpjj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @berniesanders-cnn-dril's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1hmm651a) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1hmm651a/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/berniesanders-cnn-dril')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/berniesanders-dril | 33afaa0d841cd7a3b56fd8e491ec80a255ada2b0 | 2022-03-01T10:13:41.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/berniesanders-dril | 0 | null | transformers | 36,402 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/847818629840228354/VXyQHfn0_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1097820307388334080/9ddg5F6v_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">wint & Bernie Sanders</div>
<div style="text-align: center; font-size: 14px;">@berniesanders-dril</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from wint & Bernie Sanders.
| Data | wint | Bernie Sanders |
| --- | --- | --- |
| Tweets downloaded | 3229 | 3250 |
| Retweets | 473 | 429 |
| Short tweets | 300 | 10 |
| Tweets kept | 2456 | 2811 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/yw6378l1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @berniesanders-dril's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3pydufi9) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3pydufi9/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/berniesanders-dril')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/janieclone | 1a6d8a7aa7fd819487b7d4d248791de48524737a | 2022-07-13T17:02:02.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/janieclone | 0 | null | transformers | 36,403 | ---
language: en
thumbnail: http://www.huggingtweets.com/janieclone/1657731718034/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1536389142287892481/N6kCwACw_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Columbine Janie</div>
<div style="text-align: center; font-size: 14px;">@janieclone</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Columbine Janie.
| Data | Columbine Janie |
| --- | --- |
| Tweets downloaded | 2409 |
| Retweets | 1025 |
| Short tweets | 332 |
| Tweets kept | 1052 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1jcqf2hu/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @janieclone's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/u7quubhw) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/u7quubhw/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/janieclone')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
xdmason/pretrainedCas | d66136319fbd11c6544dad149765829297facd60 | 2022-03-02T00:58:13.000Z | [
"pytorch",
"gpt2",
"transformers",
"conversational"
] | conversational | false | xdmason | null | xdmason/pretrainedCas | 0 | null | transformers | 36,404 | ---
tags:
- conversational
---
# pretrained Cas Model |
jiobiala24/wav2vec2-base-checkpoint-14 | 9031ee79209a12fa11467679412f99eefbfdd2af | 2022-03-02T15:13:04.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jiobiala24 | null | jiobiala24/wav2vec2-base-checkpoint-14 | 0 | null | transformers | 36,405 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-base-checkpoint-14
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-checkpoint-14
This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-13](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-13) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2822
- Wer: 0.4068
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.1996 | 1.59 | 1000 | 0.7181 | 0.4079 |
| 0.1543 | 3.17 | 2000 | 0.7735 | 0.4113 |
| 0.1171 | 4.76 | 3000 | 0.8152 | 0.4045 |
| 0.0969 | 6.35 | 4000 | 0.8575 | 0.4142 |
| 0.082 | 7.94 | 5000 | 0.9005 | 0.4124 |
| 0.074 | 9.52 | 6000 | 0.9232 | 0.4151 |
| 0.0653 | 11.11 | 7000 | 0.9680 | 0.4223 |
| 0.0587 | 12.7 | 8000 | 1.0633 | 0.4232 |
| 0.0551 | 14.29 | 9000 | 1.0875 | 0.4171 |
| 0.0498 | 15.87 | 10000 | 1.0281 | 0.4105 |
| 0.0443 | 17.46 | 11000 | 1.2164 | 0.4274 |
| 0.0421 | 19.05 | 12000 | 1.1868 | 0.4191 |
| 0.0366 | 20.63 | 13000 | 1.1678 | 0.4173 |
| 0.0366 | 22.22 | 14000 | 1.2444 | 0.4187 |
| 0.0346 | 23.81 | 15000 | 1.2042 | 0.4169 |
| 0.0316 | 25.4 | 16000 | 1.3019 | 0.4127 |
| 0.0296 | 26.98 | 17000 | 1.2001 | 0.4081 |
| 0.0281 | 28.57 | 18000 | 1.2822 | 0.4068 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
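The card does not show how to run inference; a minimal sketch using the automatic-speech-recognition pipeline, with `sample.wav` as a hypothetical 16 kHz mono recording, could look like this:
```python
from transformers import pipeline

# Inference sketch (assumed usage; the original card gives no example).
asr = pipeline(
    "automatic-speech-recognition",
    model="jiobiala24/wav2vec2-base-checkpoint-14",
)

# "sample.wav" is a hypothetical 16 kHz mono recording.
print(asr("sample.wav")["text"])
```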
|
prk/roberta-base-squad2-finetuned-squad | 15b151de471fcc120a3fecf27c4d2891c0b01336 | 2022-03-03T10:26:14.000Z | [
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"dataset:squad_v2",
"transformers",
"generated_from_trainer",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | prk | null | prk/roberta-base-squad2-finetuned-squad | 0 | null | transformers | 36,406 | ---
license: cc-by-4.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: roberta-base-squad2-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-squad2-finetuned-squad
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on a custom dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 8 | 0.1894 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
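As with the other auto-generated cards, no usage example is given; a minimal question-answering sketch (assumed usage, with a made-up context) could look like this:
```python
from transformers import pipeline

# Question-answering inference sketch (assumed usage; not part of the original card).
qa = pipeline("question-answering", model="prk/roberta-base-squad2-finetuned-squad")

# Hypothetical context and question, for illustration only.
result = qa(
    question="Which base model was fine-tuned?",
    context="This model is a fine-tuned version of deepset/roberta-base-squad2.",
)
print(result["answer"], result["score"])
```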
|
nimrah/wav2vec2-large-xls-r-300m-turkish-colab | 0f3b3b889009da84a585add22e109e41053b2e46 | 2022-03-02T08:18:47.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | nimrah | null | nimrah/wav2vec2-large-xls-r-300m-turkish-colab | 0 | null | transformers | 36,407 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2970
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.1
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 6.1837 | 3.67 | 400 | 3.2970 | 1.0 |
| 0.0 | 7.34 | 800 | 3.2970 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
facebook/maskformer-swin-tiny-ade | 80bb6d935ed12f2f2dfabbf44772a33821aac9f0 | 2022-04-04T16:02:00.000Z | [
"pytorch",
"maskformer",
"dataset:ade-20k",
"arxiv:2107.06278",
"transformers",
"vision",
"image-segmentatiom",
"license:apache-2.0"
] | null | false | facebook | null | facebook/maskformer-swin-tiny-ade | 0 | null | transformers | 36,408 | ---
license: apache-2.0
tags:
- vision
- image-segmentation
datasets:
- ade-20k
widget:
- src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg
example_title: House
- src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000002.jpg
example_title: Castle
---
# MaskFormer
MaskFormer model trained on ADE20k. It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169).
Disclaimer: The team releasing MaskFormer did not write a model card for this model, so this model card has been written by the Hugging Face team.
## Model description
MaskFormer addresses semantic segmentation with a mask classification paradigm instead of per-pixel classification: it predicts a set of binary masks, each paired with a single class label.

## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?search=maskformer) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation
>>> from PIL import Image
>>> import requests
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-tiny-ade")
>>> inputs = feature_extractor(images=image, return_tensors="pt")
>>> model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-tiny-ade")
>>> outputs = model(**inputs)
>>> # model predicts class_queries_logits of shape `(batch_size, num_queries)`
>>> # and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
>>> class_queries_logits = outputs.class_queries_logits
>>> masks_queries_logits = outputs.masks_queries_logits
>>> # you can pass them to feature_extractor for postprocessing
>>> output = feature_extractor.post_process_segmentation(outputs)
>>> output = feature_extractor.post_process_semantic_segmentation(outputs)
>>> output = feature_extractor.post_process_panoptic_segmentation(outputs)
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/maskformer). |
nimrah/wav2vec2-large-xls-r-300m-turkish-colab-4 | d597872df47dad4f9b80e88d855689c1929a9f4f | 2022-03-02T15:54:07.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | nimrah | null | nimrah/wav2vec2-large-xls-r-300m-turkish-colab-4 | 0 | null | transformers | 36,409 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab-4
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.1
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
mcdzwil/distilbert-base-uncased-finetuned-ner | bb59e31745413ef43c63e8461b4a671649fa2e70 | 2022-03-02T16:35:26.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | mcdzwil | null | mcdzwil/distilbert-base-uncased-finetuned-ner | 0 | null | transformers | 36,410 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1830
- Precision: 0.9171
- Recall: 0.7099
- F1: 0.8003
- Accuracy: 0.9316
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 48 | 0.2903 | 0.7952 | 0.7063 | 0.7481 | 0.9136 |
| No log | 2.0 | 96 | 0.2015 | 0.9154 | 0.7075 | 0.7981 | 0.9298 |
| No log | 3.0 | 144 | 0.1830 | 0.9171 | 0.7099 | 0.8003 | 0.9316 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
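The reported F1 (0.8003) is consistent with 2·P·R/(P+R) for the listed precision and recall. For completeness, a minimal token-classification inference sketch (assumed usage; the card itself documents only training):
```python
from transformers import pipeline

# NER inference sketch (assumed usage; the entity label set comes from the
# fine-tuning dataset and is not documented in the card).
ner = pipeline(
    "token-classification",
    model="mcdzwil/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",
)

print(ner("Hugging Face is based in New York City."))
```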
|
repro-rights-amicus-briefs/legal-bert-base-uncased-finetuned-RRamicus | af97cbc05339b4c75862c20d8bb04f499c610741 | 2022-03-03T20:21:45.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:cc-by-sa-4.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | repro-rights-amicus-briefs | null | repro-rights-amicus-briefs/legal-bert-base-uncased-finetuned-RRamicus | 0 | null | transformers | 36,411 | ---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: legal-bert-base-uncased-finetuned-RRamicus
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# legal-bert-base-uncased-finetuned-RRamicus
This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1520
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 928
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
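A minimal sketch of how these hyperparameters might map onto `transformers` `TrainingArguments` (an illustration only, not the authors' actual training script):
```python
from transformers import TrainingArguments

# Hypothetical mapping of the listed hyperparameters; the Adam betas/epsilon
# above are the library defaults, so they are not set explicitly here.
args = TrainingArguments(
    output_dir="legal-bert-base-uncased-finetuned-RRamicus",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=928,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```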
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.021 | 1.0 | 1118 | 1.3393 |
| 1.2272 | 2.0 | 2236 | 1.2612 |
| 1.2467 | 3.0 | 3354 | 1.2403 |
| 1.2149 | 4.0 | 4472 | 1.2276 |
| 1.1855 | 5.0 | 5590 | 1.2101 |
| 1.1674 | 6.0 | 6708 | 1.2020 |
| 1.1508 | 7.0 | 7826 | 1.1893 |
| 1.1386 | 8.0 | 8944 | 1.1870 |
| 1.129 | 9.0 | 10062 | 1.1794 |
| 1.1193 | 10.0 | 11180 | 1.1759 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
huggingtweets/xqc | 3b78597ad334ae43c3f557b9daef464464345613 | 2022-03-03T04:24:49.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/xqc | 0 | null | transformers | 36,412 | ---
language: en
thumbnail: http://www.huggingtweets.com/xqc/1646281436978/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1188911868863221772/fpcyKuIW_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">xQc</div>
<div style="text-align: center; font-size: 14px;">@xqc</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from xQc.
| Data | xQc |
| --- | --- |
| Tweets downloaded | 3203 |
| Retweets | 128 |
| Short tweets | 406 |
| Tweets kept | 2669 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1w7gqt7r/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @xqc's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3j2p63io) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3j2p63io/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/xqc')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
mmaguero/gn-bert-base-cased | 9d03ff9190236e4b6732bb87d1b9e67f875a2f38 | 2022-03-06T08:05:18.000Z | [
"pytorch",
"bert",
"fill-mask",
"gn",
"dataset:wikipedia",
"dataset:wiktionary",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | mmaguero | null | mmaguero/gn-bert-base-cased | 0 | null | transformers | 36,413 | ---
language: gn
license: mit
datasets:
- wikipedia
- wiktionary
widget:
- text: "Paraguay ha'e peteĩ táva oĩva [MASK] retãme "
---
# BERT-i-base-cased (gnBERT-base-cased)
A pre-trained BERT model for **Guarani** (12 layers, cased). Trained on Wikipedia + Wiktionary (~800K tokens).
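A minimal fill-mask sketch, reusing the masked sentence from the card's widget (assumed usage, not documented by the author):
```python
from transformers import pipeline

# Fill-mask inference sketch (assumed usage), reusing the widget example above.
fill = pipeline("fill-mask", model="mmaguero/gn-bert-base-cased")

for pred in fill("Paraguay ha'e peteĩ táva oĩva [MASK] retãme"):
    print(pred["token_str"], round(pred["score"], 3))
```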
|
tiot07/wav2vec2-base-timit-demo-colab-large | b9b08abfe84a6bad1ed2d66445e05b24968caaf1 | 2022-03-04T09:34:23.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | tiot07 | null | tiot07/wav2vec2-base-timit-demo-colab-large | 0 | null | transformers | 36,414 | Entry not found |
nimrah/wav2vec2-large-xls-r-300m-hindi_home-colab-11 | a918b00fa991213a5a23a5c20448c006a994fe27 | 2022-03-04T16:41:25.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | nimrah | null | nimrah/wav2vec2-large-xls-r-300m-hindi_home-colab-11 | 0 | null | transformers | 36,415 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-hindi_home-colab-11
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hindi_home-colab-11
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7649
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.03
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 5.5971 | 44.43 | 400 | 3.7649 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
nimrah/wav2vec2-large-xls-r-300m-turkish-colab-9 | 8935c0128bfdaed4737e783700cfdd2d4db85325 | 2022-03-04T18:24:21.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | nimrah | null | nimrah/wav2vec2-large-xls-r-300m-turkish-colab-9 | 0 | null | transformers | 36,416 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab-9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab-9
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.03
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
petrichorRainbow/mrf-T5 | 403dc9990544b8fd803c2cbc0d4690c4bdd5c6f8 | 2022-03-07T18:59:39.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | petrichorRainbow | null | petrichorRainbow/mrf-T5 | 0 | null | transformers | 36,417 | Entry not found |
infinitylyj/DialogGPT-small-rick | a76452c69f5a4a0c6c1bf20e8dd235b3c6571895 | 2022-03-05T06:55:42.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | infinitylyj | null | infinitylyj/DialogGPT-small-rick | 0 | null | transformers | 36,418 | ---
tags:
- conversational
---
# Rick DialogGPT Model |
naam/xlm-roberta-base-finetuned-panx-de | 9674c14b9cfbb6f7c0c97de5b204e4994ca8342a | 2022-03-05T13:48:33.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | naam | null | naam/xlm-roberta-base-finetuned-panx-de | 0 | null | transformers | 36,419 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8594910162670748
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1348
- F1: 0.8595
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2556 | 1.0 | 525 | 0.1629 | 0.8218 |
| 0.1309 | 2.0 | 1050 | 0.1378 | 0.8522 |
| 0.0812 | 3.0 | 1575 | 0.1348 | 0.8595 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
infinitylyj/DialogGPT-medium-general | a4c065d70fc00ceeca9265886b46876924b03975 | 2022-03-05T13:45:57.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | infinitylyj | null | infinitylyj/DialogGPT-medium-general | 0 | null | transformers | 36,420 | ---
tags:
- conversational
---
# General DialogGPT Model
|
nimrah/my-wav2vec2-base-timit-demo-colab-my | 6d864f73896c0afcd833cb6d1fb787c50ab66c6a | 2022-03-05T17:06:37.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | nimrah | null | nimrah/my-wav2vec2-base-timit-demo-colab-my | 0 | null | transformers | 36,421 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: my-wav2vec2-base-timit-demo-colab-my
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-wav2vec2-base-timit-demo-colab-my
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5569
- Wer: 0.3481
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4083 | 4.0 | 500 | 1.0932 | 0.7510 |
| 0.5536 | 8.0 | 1000 | 0.4965 | 0.4819 |
| 0.2242 | 12.0 | 1500 | 0.4779 | 0.4077 |
| 0.1249 | 16.0 | 2000 | 0.4921 | 0.4006 |
| 0.0844 | 20.0 | 2500 | 0.4809 | 0.3753 |
| 0.0613 | 24.0 | 3000 | 0.5307 | 0.3680 |
| 0.0459 | 28.0 | 3500 | 0.5569 | 0.3481 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
huggingtweets/ragnar_furup | de6725c9b840c44248a33362e3898e8a6f894ac2 | 2022-03-05T18:34:56.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/ragnar_furup | 0 | null | transformers | 36,422 | ---
language: en
thumbnail: http://www.huggingtweets.com/ragnar_furup/1646505291174/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1500138558765608969/Qgc4pMtC_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">R4 G4.mp3🌻</div>
<div style="text-align: center; font-size: 14px;">@ragnar_furup</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from R4 G4.mp3🌻.
| Data | R4 G4.mp3🌻 |
| --- | --- |
| Tweets downloaded | 1695 |
| Retweets | 889 |
| Short tweets | 104 |
| Tweets kept | 702 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3eum19q4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ragnar_furup's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/30kqu5u4) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/30kqu5u4/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ragnar_furup')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
sunitha/CV_Merge_DS | a17c761d54f9a8c00f9732197cab9ff97a9f2113 | 2022-03-06T05:09:45.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | sunitha | null | sunitha/CV_Merge_DS | 0 | null | transformers | 36,423 | Entry not found |
lilitket/wav2vec2-large-xls-r-300m-hy-colab | 3a2b5dd220468147023c6a5ba666e2090e5e558d | 2022-03-06T10:17:25.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/wav2vec2-large-xls-r-300m-hy-colab | 0 | null | transformers | 36,424 | Entry not found |
lilitket/wav2vec2-large-xls-r-300m-hypy-colab | 6182d8179eb267e89868912ee616001e1af834d1 | 2022-03-09T18:55:56.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/wav2vec2-large-xls-r-300m-hypy-colab | 0 | null | transformers | 36,425 | Entry not found |
osanseviero/xlm-roberta-base-finetuned-panx-de-fr | 5910b67637bec88e50820f01988dbd4109895377 | 2022-03-06T21:30:10.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | osanseviero | null | osanseviero/xlm-roberta-base-finetuned-panx-de-fr | 0 | null | transformers | 36,426 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1754
- F1: 0.8616
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2815 | 1.0 | 1430 | 0.2079 | 0.8067 |
| 0.1521 | 2.0 | 2860 | 0.1759 | 0.8525 |
| 0.093 | 3.0 | 4290 | 0.1754 | 0.8616 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1
- Datasets 1.18.0
- Tokenizers 0.10.3
|
tau/fewsion_debug | 2f56b0dc9e7a8f777e016c69870eacb124be50b3 | 2022-03-07T10:56:41.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tau | null | tau/fewsion_debug | 0 | null | transformers | 36,427 | Entry not found |
voidful/speechmix_eed_fixed | f87da2b979118fe8d3a984f8c3cd72ffceddec4a | 2022-03-07T14:17:04.000Z | [
"pytorch"
] | null | false | voidful | null | voidful/speechmix_eed_fixed | 0 | null | null | 36,428 | Entry not found |
vocab-transformers/msmarco-distilbert-custom_word2vec256k | 36e2bd2647762004a73e95f38f9aef9e03bfe696 | 2022-03-07T14:56:18.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"transformers"
] | feature-extraction | false | vocab-transformers | null | vocab-transformers/msmarco-distilbert-custom_word2vec256k | 0 | null | transformers | 36,429 | Entry not found |
peggyhuang/finetune-bert-base-v3 | f4d4cda6123bb12e088e0192fc5830ea4a001262 | 2022-03-07T18:23:42.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | peggyhuang | null | peggyhuang/finetune-bert-base-v3 | 0 | null | transformers | 36,430 | Entry not found |
rockmiin/QMSum-dpr-query-encoder | a402f2c77483d5c7429729ea080c46c2293c2759 | 2022-03-08T02:00:39.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | rockmiin | null | rockmiin/QMSum-dpr-query-encoder | 0 | null | transformers | 36,431 | Entry not found |
rockmiin/QMSum-dpr-passage-encoder | e35ac25b89869d432695fca742ef6c156b963aa4 | 2022-03-08T02:09:39.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | rockmiin | null | rockmiin/QMSum-dpr-passage-encoder | 0 | null | transformers | 36,432 | Entry not found |
oskrmiguel/t5-small-finetuned-es-to-pt | a5fdfeb64e1e0fc900c6aba6b0215c3b99ee484a | 2022-03-08T03:15:16.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:tatoeba",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | oskrmiguel | null | oskrmiguel/t5-small-finetuned-es-to-pt | 0 | null | transformers | 36,433 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tatoeba
metrics:
- bleu
model-index:
- name: t5-small-finetuned-es-to-pt
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: tatoeba
type: tatoeba
args: es-pt
metrics:
- name: Bleu
type: bleu
value: 15.0473
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-es-to-pt
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the tatoeba dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5557
- Bleu: 15.0473
- Gen Len: 15.8693
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 2.2027 | 1.0 | 1907 | 1.7884 | 11.6192 | 15.8829 |
| 1.9296 | 2.0 | 3814 | 1.6034 | 14.201 | 15.8935 |
| 1.8364 | 3.0 | 5721 | 1.5557 | 15.0473 | 15.8693 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
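A minimal inference sketch (assumed usage; whether a T5-style task prefix is required depends on how the fine-tuning data was formatted, which the card does not document):
```python
from transformers import pipeline

# Spanish-to-Portuguese sketch (assumed usage; the raw-text input format is an assumption).
translator = pipeline("text2text-generation", model="oskrmiguel/t5-small-finetuned-es-to-pt")

print(translator("La vida es bella.", max_length=64)[0]["generated_text"])
```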
|
huggingtweets/fitdollar | 7e2d3f0f7735b472bcb1fc1dc8d60078fdfa8bac | 2022-03-08T05:18:01.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/fitdollar | 0 | null | transformers | 36,434 | ---
language: en
thumbnail: http://www.huggingtweets.com/fitdollar/1646716677087/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1421952831796350976/rFuw5k2v_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Fit$</div>
<div style="text-align: center; font-size: 14px;">@fitdollar</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Fit$.
| Data | Fit$ |
| --- | --- |
| Tweets downloaded | 1235 |
| Retweets | 139 |
| Short tweets | 219 |
| Tweets kept | 877 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1nxpnpfh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @fitdollar's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3f78vjfv) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3f78vjfv/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/fitdollar')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
jiobiala24/wav2vec2-base-cv-10000 | ca850d61e9bd27a5d5042ab2b1bc431a266a2549 | 2022-03-08T13:08:35.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jiobiala24 | null | jiobiala24/wav2vec2-base-cv-10000 | 0 | null | transformers | 36,435 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-base-cv-10000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-cv-10000
This model is a fine-tuned version of [jiobiala24/wav2vec2-base-cv](https://huggingface.co/jiobiala24/wav2vec2-base-cv) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3393
- Wer: 0.3684
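A minimal usage sketch, assuming a 16 kHz mono recording and the standard `automatic-speech-recognition` pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

# load the fine-tuned checkpoint with the ASR pipeline
asr = pipeline("automatic-speech-recognition",
               model="jiobiala24/wav2vec2-base-cv-10000")

# "sample.wav" is a placeholder for a 16 kHz mono recording
print(asr("sample.wav")["text"])
```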
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.4243 | 1.6 | 1000 | 0.7742 | 0.4210 |
| 0.3636 | 3.2 | 2000 | 0.8621 | 0.4229 |
| 0.2638 | 4.8 | 3000 | 0.9328 | 0.4094 |
| 0.2273 | 6.4 | 4000 | 0.9556 | 0.4087 |
| 0.187 | 8.0 | 5000 | 0.9093 | 0.4019 |
| 0.1593 | 9.6 | 6000 | 0.9842 | 0.4029 |
| 0.1362 | 11.2 | 7000 | 1.0651 | 0.4077 |
| 0.1125 | 12.8 | 8000 | 1.0550 | 0.3959 |
| 0.103 | 14.4 | 9000 | 1.1919 | 0.4002 |
| 0.0948 | 16.0 | 10000 | 1.1901 | 0.3983 |
| 0.0791 | 17.6 | 11000 | 1.1091 | 0.3860 |
| 0.0703 | 19.2 | 12000 | 1.2823 | 0.3904 |
| 0.0641 | 20.8 | 13000 | 1.2625 | 0.3817 |
| 0.057 | 22.4 | 14000 | 1.2821 | 0.3776 |
| 0.0546 | 24.0 | 15000 | 1.2975 | 0.3770 |
| 0.0457 | 25.6 | 16000 | 1.2998 | 0.3714 |
| 0.0433 | 27.2 | 17000 | 1.3574 | 0.3721 |
| 0.0423 | 28.8 | 18000 | 1.3393 | 0.3684 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
kevinjesse/roberta-MT4TS | 348c5b28ff4ffd206d59c22b1073a0b2d697830d | 2022-03-09T20:20:41.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | kevinjesse | null | kevinjesse/roberta-MT4TS | 0 | null | transformers | 36,436 | Entry not found |
kevinjesse/polygot-MT4TS | e89b517f46214d5b8869c2ac71591f63d18ee042 | 2022-03-09T19:31:30.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | kevinjesse | null | kevinjesse/polygot-MT4TS | 0 | null | transformers | 36,437 | Entry not found |
kevinjesse/graphpolygot-MT4TS | 9263bb0cc9133c14037baed784b2657af7288385 | 2022-03-09T18:44:52.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | kevinjesse | null | kevinjesse/graphpolygot-MT4TS | 0 | null | transformers | 36,438 | Entry not found |
huggingtweets/betonkoepfin-littlehorney-plusbibi1 | 3900535a143cbe4e05ce6dfb014b374fddc64f90 | 2022-03-08T07:46:04.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/betonkoepfin-littlehorney-plusbibi1 | 0 | null | transformers | 36,439 | ---
language: en
thumbnail: http://www.huggingtweets.com/betonkoepfin-littlehorney-plusbibi1/1646725560421/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1386970823681052680/oA_4HBKl_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1425205160578588673/LBMG1HOO_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1500892464772751365/6uhqt-Jx_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Bibi und Anna & Betty S. & Vanny_Bunny™</div>
<div style="text-align: center; font-size: 14px;">@betonkoepfin-littlehorney-plusbibi1</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Bibi und Anna & Betty S. & Vanny_Bunny™.
| Data | Bibi und Anna | Betty S. | Vanny_Bunny™ |
| --- | --- | --- | --- |
| Tweets downloaded | 1818 | 3243 | 3185 |
| Retweets | 9 | 213 | 494 |
| Short tweets | 341 | 552 | 339 |
| Tweets kept | 1468 | 2478 | 2352 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3nxb6yoh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @betonkoepfin-littlehorney-plusbibi1's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/365gy60z) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/365gy60z/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/betonkoepfin-littlehorney-plusbibi1')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
kamilali/distilbert-base-uncased-finetuned-custom | eecdf367580c719ace3227bdd6ee80f8c7ec8446 | 2022-03-08T08:57:07.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | kamilali | null | kamilali/distilbert-base-uncased-finetuned-custom | 0 | null | transformers | 36,440 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-custom
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-custom
This model is a fine-tuned version of [bert-large-uncased-whole-word-masking-finetuned-squad](https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7808
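A minimal usage sketch, assuming the checkpoint works with the standard `question-answering` pipeline (the question and context are placeholders):
```python
from transformers import pipeline

qa = pipeline("question-answering",
              model="kamilali/distilbert-base-uncased-finetuned-custom")

# placeholder inputs; replace with your own question/context pair
result = qa(question="What was the model fine-tuned on?",
            context="The checkpoint was fine-tuned on a custom extractive QA dataset.")
print(result["answer"], result["score"])
```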
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 368 | 1.1128 |
| 2.1622 | 2.0 | 736 | 0.8494 |
| 1.2688 | 3.0 | 1104 | 0.7808 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
openclimatefix/graph-weather-forecaster-0.25deg | 9343fc4999c12c6b335d77eb2ab41a652b22eb05 | 2022-03-09T16:19:40.000Z | [
"pytorch"
] | null | false | openclimatefix | null | openclimatefix/graph-weather-forecaster-0.25deg | 0 | null | null | 36,441 | Entry not found |
openclimatefix/graph-weather-forecaster-0.5deg | e0c5813dfc61fe708b73927ad1a463a126fb75f1 | 2022-03-09T16:15:51.000Z | [
"pytorch"
] | null | false | openclimatefix | null | openclimatefix/graph-weather-forecaster-0.5deg | 0 | null | null | 36,442 | Entry not found |
openclimatefix/graph-weather-forecaster-1.0deg | 524b072a6e8fc6f712596778e3d732130f695fee | 2022-07-04T06:24:35.000Z | [
"pytorch"
] | null | false | openclimatefix | null | openclimatefix/graph-weather-forecaster-1.0deg | 0 | null | null | 36,443 | Entry not found |
gayanin/bart-med-term-mlm | 8cebf37973de5866357347a909f7bfc125c8d12a | 2022-03-08T15:46:48.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | gayanin | null | gayanin/bart-med-term-mlm | 0 | null | transformers | 36,444 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-med-term-mlm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-med-term-mlm
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2506
- Rouge2 Precision: 0.8338
- Rouge2 Recall: 0.6005
- Rouge2 Fmeasure: 0.6775
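A minimal usage sketch, assuming the model fills BART-style `<mask>` tokens through the `text2text-generation` pipeline (the example sentence is a placeholder):
```python
from transformers import pipeline

mlm = pipeline("text2text-generation", model="gayanin/bart-med-term-mlm")

# placeholder clinical sentence with a BART-style mask token
out = mlm("The patient was treated for <mask> hypertension.")
print(out[0]["generated_text"])
```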
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.3426 | 1.0 | 15827 | 0.3029 | 0.8184 | 0.5913 | 0.6664 |
| 0.2911 | 2.0 | 31654 | 0.2694 | 0.8278 | 0.5963 | 0.6727 |
| 0.2571 | 3.0 | 47481 | 0.2549 | 0.8318 | 0.5985 | 0.6753 |
| 0.2303 | 4.0 | 63308 | 0.2506 | 0.8338 | 0.6005 | 0.6775 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
huggingtweets/feufillet-greatestquotes-hostagekiller | 64db1cdb4ca37b1625556d1f388b47ade20fec0b | 2022-03-08T13:28:29.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/feufillet-greatestquotes-hostagekiller | 0 | null | transformers | 36,445 | ---
language: en
thumbnail: http://www.huggingtweets.com/feufillet-greatestquotes-hostagekiller/1646746104400/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1197820815636672513/JSCZmPDf_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1473236995497500675/FtwXDZld_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/378800000520968918/d38fd96468e9ba14c1f9f022eb0c4e61_400x400.png')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">sexy.funny.cute.pix & HUSSY2K. & Great Minds Quotes</div>
<div style="text-align: center; font-size: 14px;">@feufillet-greatestquotes-hostagekiller</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from sexy.funny.cute.pix & HUSSY2K. & Great Minds Quotes.
| Data | sexy.funny.cute.pix | HUSSY2K. | Great Minds Quotes |
| --- | --- | --- | --- |
| Tweets downloaded | 3091 | 3191 | 3200 |
| Retweets | 149 | 865 | 0 |
| Short tweets | 576 | 374 | 2 |
| Tweets kept | 2366 | 1952 | 3198 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3afdee2s/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @feufillet-greatestquotes-hostagekiller's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/25fcmxer) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/25fcmxer/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/feufillet-greatestquotes-hostagekiller')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
sh0416/clrcmd | 43df4478803e2c2763a42b7cd0907200dfe5ba57 | 2022-03-08T14:28:09.000Z | [
"pytorch",
"license:cc-by-nc-sa-4.0"
] | null | false | sh0416 | null | sh0416/clrcmd | 0 | null | null | 36,446 | ---
license: cc-by-nc-sa-4.0
---
|
13hannes11/master_thesis_models | 3ed3f87ac04b13c8c2659df55943ca1625e4631b | 2022-06-28T21:14:01.000Z | [
"tensorboard",
"focus-prediction",
"microscopy",
"pytorch",
"license:mit"
] | null | false | 13hannes11 | null | 13hannes11/master_thesis_models | 0 | null | null | 36,447 | ---
name: "K-POP"
license: "mit"
metrics:
- MAE
- PLCC
- SRCC
- R2
tags:
- focus-prediction
- microscopy
- pytorch
---
# K-POP: Predicting Distance to Focal Plane for Kato-Katz Prepared Microscopy Slides Using Deep Learning
<a href="https://pytorch.org/get-started/locally/"><img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-ee4c2c?logo=pytorch&logoColor=white"></a><a href="https://pytorchlightning.ai/">
<img alt="Lightning" src="https://img.shields.io/badge/-Lightning-792ee5?logo=pytorchlightning&logoColor=white"></a>
<a href="https://hydra.cc/"><img alt="Config: Hydra" src="https://img.shields.io/badge/Config-Hydra-89b8cd"></a>
## Description
This repository contains the models and training pipeline for my master thesis. The main repository is hosted on [GitHub](https://github.com/13hannes11/master_thesis_code).
The project structure is based on the template by [ashleve](https://github.com/ashleve/lightning-hydra-template).
The metadata is stored in `data/focus150/`. The relevant files are `test_metadata.csv`, `train_metadata.csv`, and `validation_metadata.csv`. The image data (150 x 150 px crops) is not published with this repository, so training runs cannot be reproduced without it. The metadata files are laid out as follows:
```csv
,image_path,scan_uuid,study_id,focus_height,original_filename,stack_id,obj_name
0,31/b0d4005e-57d0-4516-a239-abe02a8d0a67/I02413_X009_Y014_Z5107_750_300.jpg,b0d4005e-57d0-4516-a239-abe02a8d0a67,31,-0.013672000000000017,I02413_X009_Y014_Z5107.jpg,1811661,schistosoma
1,31/274d8969-aa7c-4ac0-be60-e753579393ad/I01981_X019_Y014_Z4931_450_0.jpg,274d8969-aa7c-4ac0-be60-e753579393ad,31,-0.029296999999999962,I01981_X019_Y014_Z4931.jpg,1661371,schistosoma
...
```
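For example, the metadata can be loaded with pandas (a sketch; the path assumes the `data/focus150/` layout described above):
```python
import pandas as pd

# path follows the data/focus150/ layout described above
df = pd.read_csv("data/focus150/train_metadata.csv", index_col=0)

# focus_height is the distance to the focal plane for each 150 x 150 px crop
print(df[["image_path", "focus_height", "obj_name"]].head())
```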
## How to run
Train model with chosen experiment configuration from `configs/experiment/`
```bash
python train.py experiment=focusResNet_150
```
Train with hyperparameter search from `configs/hparams_search/`
```bash
python train.py -m hparams_search=focusResNetMSE_150
```
You can override any parameter from the command line like this:
```bash
python train.py trainer.max_epochs=20 datamodule.batch_size=64
```
## Jupyter notebooks
Figures and other evaluation code were produced in Jupyter notebooks, which are available under `notebooks/`. |
kevinjesse/codeberta-MT4TS | 69bcf0d6d1aeb11ba321f24d6c454edd593a3008 | 2022-03-09T18:18:24.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | kevinjesse | null | kevinjesse/codeberta-MT4TS | 0 | null | transformers | 36,448 | Entry not found |
kj141/distilbert-base-uncased-finetuned-squad | 66bbd31d99ca681235b2a5ca3ec1fd2ad610946a | 2022-03-23T19:48:03.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | kj141 | null | kj141/distilbert-base-uncased-finetuned-squad | 0 | null | transformers | 36,449 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
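A minimal usage sketch, assuming the standard `question-answering` pipeline (the question and context are placeholders):
```python
from transformers import pipeline

qa = pipeline("question-answering",
              model="kj141/distilbert-base-uncased-finetuned-squad")

# placeholder SQuAD-style question/context pair
print(qa(question="Where do water droplets collide with ice crystals?",
         context="Water droplets collide with ice crystals within a cloud to form precipitation."))
```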
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
huak95/mt-align-finetuned-LST-en-to-th | 6bba8d437958f2f7421c4052b2941832d8fd0de2 | 2022-03-09T20:41:54.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | huak95 | null | huak95/mt-align-finetuned-LST-en-to-th | 0 | null | transformers | 36,450 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: mt-align-finetuned-LST-en-to-th
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt-align-finetuned-LST-en-to-th
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-mul](https://huggingface.co/Helsinki-NLP/opus-mt-en-mul) on the None dataset.
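A minimal usage sketch, assuming the checkpoint keeps the base en-mul convention of a `>>tha<<` target-language token (both the token and the example sentence are assumptions):
```python
from transformers import pipeline

translator = pipeline("translation", model="huak95/mt-align-finetuned-LST-en-to-th")

# ">>tha<<" marks Thai as the target, as in the base Helsinki-NLP/opus-mt-en-mul model
print(translator(">>tha<< Hello, how are you today?")[0]["translation_text"])
```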
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 77 | 1.6042 | 13.1732 | 26.144 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
|
huggingtweets/aniraster_ | 4710a24284b1df2462ba6b6abc86087af26ec27b | 2022-03-09T09:03:20.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/aniraster_ | 0 | null | transformers | 36,451 | ---
language: en
thumbnail: http://www.huggingtweets.com/aniraster_/1646816595677/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1460097593015472141/Yt6YwEU1_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Aniraster</div>
<div style="text-align: center; font-size: 14px;">@aniraster_</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Aniraster.
| Data | Aniraster |
| --- | --- |
| Tweets downloaded | 2581 |
| Retweets | 169 |
| Short tweets | 660 |
| Tweets kept | 1752 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3nr4gbjn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @aniraster_'s tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3g7h1bov) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3g7h1bov/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/aniraster_')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
l53513955/PAQ_256 | 9d609fb6fae14b5488c9d9e56d8acd57a60718c5 | 2022-03-09T09:09:48.000Z | [
"pytorch",
"albert",
"feature-extraction",
"transformers"
] | feature-extraction | false | l53513955 | null | l53513955/PAQ_256 | 0 | null | transformers | 36,452 | Entry not found |
paopow/t5_base | bd0edc2c21f093fb5bfdda5b5b19bc107d894929 | 2022-03-09T14:47:49.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | paopow | null | paopow/t5_base | 0 | null | transformers | 36,453 | Entry not found |
petrichorRainbow/mrf-bert | 1d811b93ee4a1346bcdd5ee564725891c038e8d6 | 2022-03-09T17:12:06.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | petrichorRainbow | null | petrichorRainbow/mrf-bert | 0 | null | transformers | 36,454 | ---
license: apache-2.0
---
|
petrichorRainbow/mrf-covid-bert | 75848a3e0b2660c38cd16ed5cba68d7ff338da4c | 2022-03-09T17:24:51.000Z | [
"pytorch",
"bert",
"transformers",
"license:apache-2.0"
] | null | false | petrichorRainbow | null | petrichorRainbow/mrf-covid-bert | 0 | null | transformers | 36,455 | ---
license: apache-2.0
---
|
pong/opus-mt-en-mul-finetuned-en-to-th | 982b3a991c31c9c1ced377cd888db23a882a8889 | 2022-03-09T18:01:13.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | pong | null | pong/opus-mt-en-mul-finetuned-en-to-th | 0 | null | transformers | 36,456 | Entry not found |
huak95/mt-align-finetuned-SUM3-th-to-en | 73315f4d73c141692f30ab40ce0fcc26ddd44896 | 2022-03-09T20:51:21.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | huak95 | null | huak95/mt-align-finetuned-SUM3-th-to-en | 0 | null | transformers | 36,457 | Entry not found |
tiot07/0310 | b3bde3621555d53102a423ae2a788cf86870af05 | 2022-03-10T06:39:22.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | tiot07 | null | tiot07/0310 | 0 | null | transformers | 36,458 | Entry not found |
huak95/mt-align-LST_classic-th-to-en-pt2 | 9fc1605167b4ad23a52439c3061221a02c438617 | 2022-03-10T09:13:38.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | huak95 | null | huak95/mt-align-LST_classic-th-to-en-pt2 | 0 | null | transformers | 36,459 | Entry not found |
huak95/LST_classic-th-to-en-pt2.1 | df12a09d1ed3811d7a41fe4c955559dac6979507 | 2022-03-10T09:19:24.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | huak95 | null | huak95/LST_classic-th-to-en-pt2.1 | 0 | null | transformers | 36,460 | Entry not found |
spasis/distilbert-base-uncased-finetuned-imdb-accelerate | 8e82bdacadfe25ea0d87278fdecc3ccbe7445dce | 2022-03-10T12:04:06.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | spasis | null | spasis/distilbert-base-uncased-finetuned-imdb-accelerate | 0 | null | transformers | 36,461 | Entry not found |
timkakhanovich/finetuned-asr | 73d64f6e2504c7b4eea8d8545cf9808e632d6dbc | 2022-03-10T10:53:21.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | timkakhanovich | null | timkakhanovich/finetuned-asr | 0 | null | transformers | 36,462 | Entry not found |
huak95/TNANA-attacut-th-to-en | 87859e56b8929f990770230f2a41da535388bbe3 | 2022-03-10T15:40:30.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | huak95 | null | huak95/TNANA-attacut-th-to-en | 0 | null | transformers | 36,463 | Entry not found |
huggingtweets/atarifounders | ea560d60fa2eebbbbdaa2be2c3656ba64890f9ea | 2022-03-26T03:45:11.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/atarifounders | 0 | null | transformers | 36,464 | ---
language: en
thumbnail: http://www.huggingtweets.com/atarifounders/1648266306699/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1507523916981583875/6n7ng67H_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">koala/claw/soppy</div>
<div style="text-align: center; font-size: 14px;">@atarifounders</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from koala/claw/soppy.
| Data | koala/claw/soppy |
| --- | --- |
| Tweets downloaded | 3239 |
| Retweets | 129 |
| Short tweets | 883 |
| Tweets kept | 2227 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2gsc0jwi/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @atarifounders's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/tl1eu60e) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/tl1eu60e/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/atarifounders')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
lijingxin/xlm-roberta-base-finetuned-panx-fr | 75fe94e417bc22e5dd77d3a3fbf8d5b5d9b34916 | 2022-03-11T02:19:48.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | lijingxin | null | lijingxin/xlm-roberta-base-finetuned-panx-fr | 0 | null | transformers | 36,465 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.838255033557047
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2691
- F1: 0.8383
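A minimal usage sketch, assuming the standard `token-classification` pipeline with entity grouping (the example sentence is a placeholder):
```python
from transformers import pipeline

ner = pipeline("token-classification",
               model="lijingxin/xlm-roberta-base-finetuned-panx-fr",
               aggregation_strategy="simple")

# placeholder French sentence with person and location entities
print(ner("Emmanuel Macron habite à Paris."))
```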
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5851 | 1.0 | 191 | 0.3202 | 0.8011 |
| 0.256 | 2.0 | 382 | 0.2862 | 0.8344 |
| 0.1725 | 3.0 | 573 | 0.2691 | 0.8383 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
huak95/TNANA_V2-attacut-th-to-en-pt2 | 1d1c1359298e83bbbf90ccf0927a5b8e922983f9 | 2022-03-11T17:29:07.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | huak95 | null | huak95/TNANA_V2-attacut-th-to-en-pt2 | 0 | null | transformers | 36,466 | Entry not found |
zuppif/maskformer-swin-small-coco | 81ccd61f1115c48ca4db493c3ec00cb3501f8f50 | 2022-03-11T14:23:35.000Z | [
"pytorch",
"maskformer",
"transformers"
] | null | false | zuppif | null | zuppif/maskformer-swin-small-coco | 0 | null | transformers | 36,467 | Entry not found |
zuppif/maskformer-swin-large-ade | 038c928b990e04a7f3433324bb9ee783c9b33004 | 2022-03-11T14:28:26.000Z | [
"pytorch",
"maskformer",
"transformers"
] | null | false | zuppif | null | zuppif/maskformer-swin-large-ade | 0 | null | transformers | 36,468 | Entry not found |
zuppif/maskformer-swin-tiny-ade | dc866fbdeafe659f6ed8879e75892f77e9a9e751 | 2022-03-11T15:01:00.000Z | [
"pytorch",
"maskformer",
"transformers"
] | null | false | zuppif | null | zuppif/maskformer-swin-tiny-ade | 0 | null | transformers | 36,469 | Entry not found |
huggingtweets/thed3linquent_ | 948e6f9133e95f9cab3f4baeae17613a8ca63df8 | 2022-03-11T22:57:28.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/thed3linquent_ | 0 | null | transformers | 36,470 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1502166273064517632/RdLwNuR6_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">rogue⛓🐕|| BIRFDAY BOY</div>
<div style="text-align: center; font-size: 14px;">@thed3linquent_</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from rogue⛓🐕|| BIRFDAY BOY.
| Data | rogue⛓🐕\|\| BIRFDAY BOY |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 334 |
| Short tweets | 710 |
| Tweets kept | 2202 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1tal3g38/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @thed3linquent_'s tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1aw76tml) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1aw76tml/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/thed3linquent_')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
lilitket/wav2vec2-large-xls-r-300m-hyAM_batch2 | 1f08ccc5853ef5080f49f51a765bbd2cd8ec962f | 2022-03-12T14:52:57.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/wav2vec2-large-xls-r-300m-hyAM_batch2 | 0 | null | transformers | 36,471 | Entry not found |
lilitket/wav2vec2-large-xls-r-300m-hyAM_batch4_lr2 | 690aba7a14a0c95db306468cbd784d2bcc11fe03 | 2022-03-12T16:03:02.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/wav2vec2-large-xls-r-300m-hyAM_batch4_lr2 | 0 | null | transformers | 36,472 | Entry not found |
lilitket/wav2vec2-large-xls-r-300m-hyAM_batch4_lr8 | 16beeb0aefdd2bcc3e9e5cb780a1e27c49e01634 | 2022-03-12T20:58:57.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/wav2vec2-large-xls-r-300m-hyAM_batch4_lr8 | 0 | null | transformers | 36,473 | Entry not found |
lilitket/300m-hyAM_batch4_lr8_warmup4000 | 7c7525017d51f3e7476633a17ae1d06c440fc931 | 2022-03-17T18:50:33.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/300m-hyAM_batch4_lr8_warmup4000 | 0 | null | transformers | 36,474 | Entry not found |
zdepablo/xlm-roberta-base-finetuned-panx-de | eb5298cbd737fbcf33cf9f7678affd139691e912 | 2022-03-12T18:25:42.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | zdepablo | null | zdepablo/xlm-roberta-base-finetuned-panx-de | 0 | null | transformers | 36,475 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8594910162670748
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1348
- F1: 0.8595
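A minimal usage sketch, assuming the standard `token-classification` pipeline with entity grouping (the example sentence is a placeholder):
```python
from transformers import pipeline

ner = pipeline("token-classification",
               model="zdepablo/xlm-roberta-base-finetuned-panx-de",
               aggregation_strategy="simple")

# placeholder German sentence with person and location entities
print(ner("Angela Merkel wohnt in Berlin."))
```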
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2556 | 1.0 | 525 | 0.1629 | 0.8218 |
| 0.1309 | 2.0 | 1050 | 0.1378 | 0.8522 |
| 0.0812 | 3.0 | 1575 | 0.1348 | 0.8595 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
zdepablo/xlm-roberta-base-finetuned-panx-de-fr | fbeb4772ce785f68908426f3b13ddd7df6b59191 | 2022-03-12T18:54:00.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | zdepablo | null | zdepablo/xlm-roberta-base-finetuned-panx-de-fr | 0 | null | transformers | 36,476 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1664
- F1: 0.8556
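A minimal usage sketch, assuming the standard `token-classification` pipeline with entity grouping (the German and French sentences are placeholders):
```python
from transformers import pipeline

ner = pipeline("token-classification",
               model="zdepablo/xlm-roberta-base-finetuned-panx-de-fr",
               aggregation_strategy="simple")

# placeholder sentences in the two fine-tuning languages
for sentence in ["Angela Merkel wohnt in Berlin.", "Emmanuel Macron habite à Paris."]:
    print(ner(sentence))
```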
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2846 | 1.0 | 715 | 0.1837 | 0.8247 |
| 0.1446 | 2.0 | 1430 | 0.1617 | 0.8409 |
| 0.0923 | 3.0 | 2145 | 0.1664 | 0.8556 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
lilitket/xls-r-300m-hyAM_batch1_lr2e-05_warmup400 | db529f41916cf30ce2ceff9f1c9a6e1be7ccba74 | 2022-03-13T07:14:22.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/xls-r-300m-hyAM_batch1_lr2e-05_warmup400 | 0 | null | transformers | 36,477 | Entry not found |
lilitket/xls-r-300m-hyAM_batch1_lr1e-05_warmup400 | e685207d23c9448938072f973c5b467e896d9f39 | 2022-03-13T07:41:07.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/xls-r-300m-hyAM_batch1_lr1e-05_warmup400 | 0 | null | transformers | 36,478 | Entry not found |
holtin/distilbert-base-uncased-finetuned-squad | f0919e96377969142d6c032af9fa355ebb1496bd | 2022-04-07T06:18:52.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | holtin | null | holtin/distilbert-base-uncased-finetuned-squad | 0 | null | transformers | 36,479 | Entry not found |
lilitket/xls-r-300m-hyAM_batch1_lr6e-06_warmup400 | 50c1e94bfd9a4222e7d26ebe4ab59a80f6194f8a | 2022-03-20T20:17:52.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/xls-r-300m-hyAM_batch1_lr6e-06_warmup400 | 0 | null | transformers | 36,480 | Entry not found |
sanchit-gandhi/wav2vec2-2-roberta-no-adapter-long-run | d126f4a7fdf2bde7ba506959857bf654f02eb442 | 2022-03-14T11:01:26.000Z | [
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | sanchit-gandhi | null | sanchit-gandhi/wav2vec2-2-roberta-no-adapter-long-run | 0 | null | transformers | 36,481 | Entry not found |
huggingtweets/mikepompeo | 39ec8a5587a6779f92817b10fd3ef6b9ef84d119 | 2022-03-13T14:28:20.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/mikepompeo | 0 | null | transformers | 36,482 | ---
language: en
thumbnail: http://www.huggingtweets.com/mikepompeo/1647181695747/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1498704685875744769/r3jThh-E_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Mike Pompeo</div>
<div style="text-align: center; font-size: 14px;">@mikepompeo</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Mike Pompeo.
| Data | Mike Pompeo |
| --- | --- |
| Tweets downloaded | 1899 |
| Retweets | 68 |
| Short tweets | 60 |
| Tweets kept | 1771 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2ll5re58/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mikepompeo's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/zi1wgzl5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/zi1wgzl5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mikepompeo')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
newtonkwan/gpt2-ft-with-non-challenging | 6c1222d90d860aaeb135cce6b000dddd23348efa | 2022-03-13T21:31:34.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | newtonkwan | null | newtonkwan/gpt2-ft-with-non-challenging | 0 | null | transformers | 36,483 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-ft-with-non-challenging
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-ft-with-non-challenging
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9906
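A minimal usage sketch, assuming the standard `text-generation` pipeline (the prompt is a placeholder):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="newtonkwan/gpt2-ft-with-non-challenging")

# placeholder prompt; max_length bounds the completion
print(generator("Once upon a time", max_length=50, num_return_sequences=1)[0]["generated_text"])
```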
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | 4.0984 |
| No log | 2.0 | 2 | 4.0802 |
| No log | 3.0 | 3 | 4.0443 |
| No log | 4.0 | 4 | 3.9906 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 1.18.4
- Tokenizers 0.11.6
|
lilitket/20220313-221906 | 4e8edea25bf164e0a8ed1f0b5ec22ee51d88be19 | 2022-03-14T04:27:43.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/20220313-221906 | 0 | null | transformers | 36,484 | Entry not found |
huggingtweets/ayurastro | b91d7fa463d6aacdf3de36d014a4fd562a6b630e | 2022-03-13T23:27:16.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/ayurastro | 0 | null | transformers | 36,485 | ---
language: en
thumbnail: http://www.huggingtweets.com/ayurastro/1647214031676/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/493786234221641730/OFQm2K8M_400x400.jpeg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">AyurAstro®</div>
<div style="text-align: center; font-size: 14px;">@ayurastro</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from AyurAstro®.
| Data | AyurAstro® |
| --- | --- |
| Tweets downloaded | 1437 |
| Retweets | 112 |
| Short tweets | 65 |
| Tweets kept | 1260 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/36zw53cv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ayurastro's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/nhbmyyli) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/nhbmyyli/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ayurastro')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
tau/fewsion_1024_0.3_2100 | c82a58ef2aeb9b3372631dd1040feaae35f9bb05 | 2022-03-14T08:36:20.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tau | null | tau/fewsion_1024_0.3_2100 | 0 | null | transformers | 36,486 | Entry not found |
tau/t5_1024_0.3_2400 | 4b3fb9e72af44a3c1f99415ec4949ddf28707576 | 2022-03-14T08:46:55.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tau | null | tau/t5_1024_0.3_2400 | 0 | null | transformers | 36,487 | Entry not found |
lilitket/20220314-084929 | 76c5be10e2c9b620885461e93f6de52ea1c15da8 | 2022-03-14T13:26:14.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/20220314-084929 | 0 | null | transformers | 36,488 | Entry not found |
sanchit-gandhi/wav2vec2-2-bert-large-no-adapter | b11802c5a1eadd0abd0c3b9e3027a7caa819c225 | 2022-03-15T17:22:33.000Z | [
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | sanchit-gandhi | null | sanchit-gandhi/wav2vec2-2-bert-large-no-adapter | 0 | null | transformers | 36,489 | Entry not found |
peterhsu/codeparrot-ds | ea65cf18f515ffe2eda0a72ea58ed0d7f9f526ad | 2022-03-14T23:00:48.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | peterhsu | null | peterhsu/codeparrot-ds | 0 | null | transformers | 36,490 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9729
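A minimal usage sketch, assuming the training data was Python source code as in the original CodeParrot recipe (the prompt is a placeholder):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="peterhsu/codeparrot-ds")

# placeholder code prompt; max_length keeps the completion short
print(generator("def load_dataset(path):", max_length=64)[0]["generated_text"])
```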
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4939 | 0.93 | 5000 | 1.9729 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
newtonkwan/gpt2-xl-ft-with-non-challenging-25k | 3d10551c6ecab21243f47a46f2e41545e616a560 | 2022-03-15T00:06:36.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | newtonkwan | null | newtonkwan/gpt2-xl-ft-with-non-challenging-25k | 0 | null | transformers | 36,491 | Entry not found |
tau/t5_1024_0.3_7950 | 619e06eb26ab187968ed87b3dfde7d024465ea8f | 2022-03-15T07:29:37.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tau | null | tau/t5_1024_0.3_7950 | 0 | null | transformers | 36,492 | Entry not found |
Norod78/ml-generated-muppets-rudalle | 43559f240be193f32836a24406d6e6736a42cad0 | 2022-03-15T10:02:58.000Z | [
"pytorch",
"license:mit"
] | null | false | Norod78 | null | Norod78/ml-generated-muppets-rudalle | 0 | null | null | 36,493 | ---
license: mit
---
Muppet image generator, based on ruDALL-E.
You can perform inference using this [Colab notebook](https://github.com/Norod/my-colab-experiments/blob/master/ruDALLE_muppets_norod78.ipynb)

|
zuppif/resnetd-18 | 0d36c4fbc31431b03072141da0e4ba0a55a7af0f | 2022-03-17T09:08:23.000Z | [
"pytorch",
"resnetd",
"transformers"
] | null | false | zuppif | null | zuppif/resnetd-18 | 0 | null | transformers | 36,494 | Entry not found |
zuppif/resnetd-101 | 232531b093321fe8f34fd4a28d5c7fc9564a8907 | 2022-03-17T09:13:10.000Z | [
"pytorch",
"resnetd",
"transformers"
] | null | false | zuppif | null | zuppif/resnetd-101 | 0 | null | transformers | 36,495 | Entry not found |
zuppif/resnetd-200 | 41253945cdbde0dce274d7413e99e97f64c4d424 | 2022-03-17T09:18:51.000Z | [
"pytorch",
"resnetd",
"transformers"
] | null | false | zuppif | null | zuppif/resnetd-200 | 0 | null | transformers | 36,496 | Entry not found |
spasis/marian-finetuned-kde4-en-to-fr | 40cbbd3582645298cb26de24efd54ae12e7ae605 | 2022-03-15T17:39:40.000Z | [
"pytorch",
"marian",
"text2text-generation",
"dataset:kde4",
"transformers",
"tanslation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | spasis | null | spasis/marian-finetuned-kde4-en-to-fr | 0 | null | transformers | 36,497 | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
model-index:
- name: marian-finetuned-kde4-en-to-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
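Although this section is left unfilled, a minimal translation sketch is given below purely as an illustration (it assumes this repository hosts the fine-tuned checkpoint and uses the standard `transformers` pipeline API):
```python
from transformers import pipeline

# Load the fine-tuned Marian checkpoint from this repository (assumed id: spasis/marian-finetuned-kde4-en-to-fr)
translator = pipeline("translation", model="spasis/marian-finetuned-kde4-en-to-fr")

# Translate an English KDE-style UI string into French
print(translator("Default to expanded threads")[0]["translation_text"])
```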
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
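As a rough reconstruction only (the training script is not included in this card, and the use of `Seq2SeqTrainingArguments` is an assumption), the listed values would map onto the Trainer API roughly as follows:
```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical mapping of the hyperparameters listed above; output_dir is a placeholder.
args = Seq2SeqTrainingArguments(
    output_dir="marian-finetuned-kde4-en-to-fr",
    learning_rate=2e-5,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=256,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,  # "Native AMP" mixed precision
)
```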
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
|
moralstories/gpt2_action_context-consequence | 284a29966aaa68ab47729808b3b22cbac493f06f | 2022-03-15T18:13:31.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"license:afl-3.0"
] | text-generation | false | moralstories | null | moralstories/gpt2_action_context-consequence | 0 | null | transformers | 36,498 | ---
license: afl-3.0
---
|
facebook/regnet-x-016 | 5f7992cd8a33f3be2417b0a7b91f349ca6ad2932 | 2022-06-30T10:14:50.000Z | [
"pytorch",
"tf",
"regnet",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2003.13678",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | facebook | null | facebook/regnet-x-016 | 0 | null | transformers | 36,499 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# RegNet
RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls).
Disclaimer: The team releasing RegNet did not write a model card for this model, so this model card has been written by the Hugging Face team.
## Model description
The authors design search spaces to perform Neural Architecture Search (NAS). They start from a high-dimensional search space and iteratively reduce it by empirically applying constraints based on the best-performing models sampled from the current search space.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, RegNetForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("zuppif/regnet-y-040")
>>> model = RegNetForImageClassification.from_pretrained("zuppif/regnet-y-040")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
'tabby, tabby cat'
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet). |