The dataset has the following columns: modelId (string, 4–112 chars), sha (string, 40 chars), lastModified (string, 24 chars), tags (sequence), pipeline_tag (string, 29 classes), private (bool, 1 class), author (string, 2–38 chars), config (null), id (string, 4–112 chars), downloads (float64, 0–36.8M), likes (float64, 0–712), library_name (string, 17 classes), __index_level_0__ (int64, 0–38.5k), readme (string, 0–186k chars).

modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme
---|---|---|---|---|---|---|---|---|---|---|---|---|---
facebook/regnet-x-032 | c2f07bf7b2d97ae5279125dd15ba52456c2b64e2 | 2022-06-30T10:14:28.000Z | [
"pytorch",
"tf",
"regnet",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2003.13678",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | facebook | null | facebook/regnet-x-032 | 0 | null | transformers | 36,500 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# RegNet
RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls).
Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The authors design search spaces to perform Neural Architecture Search (NAS). They first start from a high dimensional search space and iteratively reduce the search space by empirically applying constraints based on the best-performing models sampled by the current search space.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, RegNetForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("zuppif/regnet-y-040")
>>> model = RegNetForImageClassification.from_pretrained("zuppif/regnet-y-040")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
'tabby, tabby cat'
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet). |
krinal214/bert-all | 76a7fc429293e49c41464ef839cc01093ea2de90 | 2022-03-15T21:02:20.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:tydiqa",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | krinal214 | null | krinal214/bert-all | 0 | null | transformers | 36,501 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tydiqa
model-index:
- name: bert-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-all
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the tydiqa dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5985
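The card ships no usage example; as a hedged sketch, the checkpoint should load with the standard `question-answering` pipeline (the question and context below are arbitrary illustrations, not from the card):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="krinal214/bert-all")
result = qa(
    question="Where is the Eiffel Tower?",
    context="The Eiffel Tower is a landmark in Paris, France.",
)
print(result["answer"])  # the predicted answer span from the context
```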
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` reconstruction is sketched after the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
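As a rough reconstruction only (the original training script is not part of this card, and `output_dir` below is a placeholder), these settings map to `TrainingArguments` roughly as follows:
```python
from transformers import TrainingArguments

# Hedged reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="bert-all",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```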
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1556 | 1.0 | 3552 | 0.5985 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 2.0.0
- Tokenizers 0.10.3
|
huggingtweets/theshiftnews | c9da2de7c6dc40de124deb4c8cec3979bb1f66f1 | 2022-03-15T20:56:54.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/theshiftnews | 0 | null | transformers | 36,502 | ---
language: en
thumbnail: http://www.huggingtweets.com/theshiftnews/1647377809961/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1318831968352612352/blMpdUu4_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">The Shift News</div>
<div style="text-align: center; font-size: 14px;">@theshiftnews</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from The Shift News.
| Data | The Shift News |
| --- | --- |
| Tweets downloaded | 3216 |
| Retweets | 446 |
| Short tweets | 43 |
| Tweets kept | 2727 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1k4siv5q/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @theshiftnews's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2cedhhrz) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2cedhhrz/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/theshiftnews')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/maltatoday-netnewsmalta-one_news_malta | e4ea8f1e4c1623810d2abd8ad155a725e5f6dad0 | 2022-03-15T21:21:32.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/maltatoday-netnewsmalta-one_news_malta | 0 | null | transformers | 36,503 | ---
language: en
thumbnail: http://www.huggingtweets.com/maltatoday-netnewsmalta-one_news_malta/1647379141053/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1442160889596026883/gq6jcObz_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1047423145077030912/0B4-Tgba_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1333858206012084227/XP6EKW-K_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI CYBORG π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ONE news & NETnews & MaltaToday</div>
<div style="text-align: center; font-size: 14px;">@maltatoday-netnewsmalta-one_news_malta</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ONE news & NETnews & MaltaToday.
| Data | ONE news | NETnews | MaltaToday |
| --- | --- | --- | --- |
| Tweets downloaded | 3250 | 3250 | 3250 |
| Retweets | 0 | 0 | 1 |
| Short tweets | 17 | 1 | 3 |
| Tweets kept | 3233 | 3249 | 3246 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1lme9vpn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @maltatoday-netnewsmalta-one_news_malta's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/zkwd2sgh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/zkwd2sgh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/maltatoday-netnewsmalta-one_news_malta')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/independentmlt-maltatoday-thetimesofmalta | e6b0986f44f803a91e90ecca2f310d1189fd6df2 | 2022-03-15T22:00:58.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/independentmlt-maltatoday-thetimesofmalta | 0 | null | transformers | 36,504 | ---
language: en
thumbnail: http://www.huggingtweets.com/independentmlt-maltatoday-thetimesofmalta/1647381547913/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1333858206012084227/XP6EKW-K_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1419612859244457987/Ph3kXUL3_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1338811551994826752/XQnrubON_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI CYBORG π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">MaltaToday & Times of Malta & The Malta Independent</div>
<div style="text-align: center; font-size: 14px;">@independentmlt-maltatoday-thetimesofmalta</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from MaltaToday & Times of Malta & The Malta Independent.
| Data | MaltaToday | Times of Malta | The Malta Independent |
| --- | --- | --- | --- |
| Tweets downloaded | 3250 | 3250 | 3250 |
| Retweets | 1 | 0 | 5 |
| Short tweets | 3 | 0 | 1 |
| Tweets kept | 3246 | 3250 | 3244 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2z9a8ves/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @independentmlt-maltatoday-thetimesofmalta's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/117uvo5a) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/117uvo5a/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/independentmlt-maltatoday-thetimesofmalta')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
kSaluja/roberta-finetuned-ner | 0587792d41258f900e6f493efa3cbbc586bd3726 | 2022-03-16T00:00:41.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | kSaluja | null | kSaluja/roberta-finetuned-ner | 0 | null | transformers | 36,505 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-ner
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1322
- Precision: 0.9772
- Recall: 0.9782
- F1: 0.9777
- Accuracy: 0.9767
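As a hedged usage sketch (the card does not document the label set or training data, so the input sentence below is illustrative only):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="kSaluja/roberta-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)
print(ner("Barack Obama visited Paris."))
```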
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 253 | 0.1694 | 0.9636 | 0.9555 | 0.9595 | 0.9617 |
| 0.4479 | 2.0 | 506 | 0.1374 | 0.9743 | 0.9762 | 0.9752 | 0.9743 |
| 0.4479 | 3.0 | 759 | 0.1322 | 0.9772 | 0.9782 | 0.9777 | 0.9767 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
willcai/wav2vec2_common_voice_accents_3 | 32b359201268a0e60a1f7aa870d30ff170b61885 | 2022-03-17T03:04:51.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | willcai | null | willcai/wav2vec2_common_voice_accents_3 | 0 | null | transformers | 36,506 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2_common_voice_accents_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2_common_voice_accents_3
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0042
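As a hedged usage sketch (`audio.wav` is a placeholder path, and the pipeline needs ffmpeg available to decode it):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="willcai/wav2vec2_common_voice_accents_3",
)
print(asr("audio.wav"))  # returns a dict like {"text": "..."}
```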
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 48
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 384
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
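For reference, the effective batch sizes follow from the listed values: 48 per-device train batch × 8 devices = 384 total train batch, and 4 × 8 = 32 total eval batch.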
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.584 | 1.27 | 400 | 1.1439 |
| 0.481 | 2.55 | 800 | 0.1986 |
| 0.2384 | 3.82 | 1200 | 0.1060 |
| 0.1872 | 5.1 | 1600 | 0.1016 |
| 0.158 | 6.37 | 2000 | 0.0942 |
| 0.1427 | 7.64 | 2400 | 0.0646 |
| 0.1306 | 8.92 | 2800 | 0.0612 |
| 0.1197 | 10.19 | 3200 | 0.0423 |
| 0.1129 | 11.46 | 3600 | 0.0381 |
| 0.1054 | 12.74 | 4000 | 0.0326 |
| 0.0964 | 14.01 | 4400 | 0.0293 |
| 0.0871 | 15.29 | 4800 | 0.0239 |
| 0.0816 | 16.56 | 5200 | 0.0168 |
| 0.0763 | 17.83 | 5600 | 0.0202 |
| 0.0704 | 19.11 | 6000 | 0.0224 |
| 0.0669 | 20.38 | 6400 | 0.0208 |
| 0.063 | 21.66 | 6800 | 0.0074 |
| 0.0585 | 22.93 | 7200 | 0.0126 |
| 0.0548 | 24.2 | 7600 | 0.0086 |
| 0.0512 | 25.48 | 8000 | 0.0080 |
| 0.0487 | 26.75 | 8400 | 0.0052 |
| 0.0455 | 28.03 | 8800 | 0.0062 |
| 0.0433 | 29.3 | 9200 | 0.0042 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.4
- Tokenizers 0.11.6
|
kSaluja/roberta-finetuned-ner-without-data-sort | d8afdcca4a015ce9d24c0e4487711ce09dd2799a | 2022-03-16T01:27:44.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | kSaluja | null | kSaluja/roberta-finetuned-ner-without-data-sort | 0 | null | transformers | 36,507 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-finetuned-ner-without-data-sort
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-ner-without-data-sort
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0420
- Precision: 0.9914
- Recall: 0.9909
- F1: 0.9912
- Accuracy: 0.9920
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 213 | 0.1879 | 0.9378 | 0.9414 | 0.9396 | 0.9493 |
| No log | 2.0 | 426 | 0.1038 | 0.9725 | 0.9750 | 0.9737 | 0.9751 |
| 0.4424 | 3.0 | 639 | 0.0701 | 0.9861 | 0.9851 | 0.9856 | 0.9863 |
| 0.4424 | 4.0 | 852 | 0.0637 | 0.9882 | 0.9880 | 0.9881 | 0.9880 |
| 0.0675 | 5.0 | 1065 | 0.0546 | 0.9851 | 0.9878 | 0.9865 | 0.9879 |
| 0.0675 | 6.0 | 1278 | 0.0480 | 0.9894 | 0.9904 | 0.9899 | 0.9901 |
| 0.0675 | 7.0 | 1491 | 0.0473 | 0.9919 | 0.9904 | 0.9912 | 0.9911 |
| 0.0426 | 8.0 | 1704 | 0.0441 | 0.9921 | 0.9916 | 0.9919 | 0.9921 |
| 0.0426 | 9.0 | 1917 | 0.0426 | 0.9921 | 0.9916 | 0.9919 | 0.9922 |
| 0.033 | 10.0 | 2130 | 0.0420 | 0.9914 | 0.9909 | 0.9912 | 0.9920 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
libalabala/marian-finetuned-kde4-en-to-fr | 129f66031b566e4c281679da03e5a6082e740d80 | 2022-03-17T08:13:54.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"dataset:kde4",
"transformers",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | translation | false | libalabala | null | libalabala/marian-finetuned-kde4-en-to-fr | 0 | null | transformers | 36,508 | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
model-index:
- name: marian-finetuned-kde4-en-to-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
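As a hedged usage sketch (standard `translation` pipeline; the example sentence is an arbitrary KDE-style UI string):
```python
from transformers import pipeline

translator = pipeline(
    "translation",
    model="libalabala/marian-finetuned-kde4-en-to-fr",
)
print(translator("Default to expanded threads"))
```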
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
sraza/wav2vec2-large-xls-r-300m-ur-colab | 95a2d55143b4d15afefd528159b34f6f1edccdd7 | 2022-06-07T06:57:19.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | sraza | null | sraza/wav2vec2-large-xls-r-300m-ur-colab | 0 | 1 | transformers | 36,509 | ASR for urdu language.
Dataset used is common voice and also some self collected data. |
mazenalasali/layoutlmv2-finetuned-funsd-test | 73090e876b5906cb44383124c1eb809a10462eba | 2022-03-16T13:02:29.000Z | [
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"transformers",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | mazenalasali | null | mazenalasali/layoutlmv2-finetuned-funsd-test | 0 | null | transformers | 36,510 | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: layoutlmv2-finetuned-funsd-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-finetuned-funsd-test
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.8.0+cu101
- Datasets 2.0.0
- Tokenizers 0.11.6
|
krinal214/xlm-3lang | a882fe9e6b96617f34a0706960727bc571439cd7 | 2022-03-16T12:55:35.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"dataset:tydiqa",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | krinal214 | null | krinal214/xlm-3lang | 0 | null | transformers | 36,511 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- tydiqa
model-index:
- name: xlm-eng-beng-tel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-eng-beng-tel
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the tydiqa dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7303
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2927 | 1.0 | 810 | 0.7303 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 2.0.0
- Tokenizers 0.10.3
|
newtonkwan/gpt2-xl-ft-0 | db4a67ee4b48f80835f01f347b6563a004db673e | 2022-03-16T21:58:33.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
] | text-generation | false | newtonkwan | null | newtonkwan/gpt2-xl-ft-0 | 0 | null | transformers | 36,512 | ---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl-ft-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl-ft-0
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0324
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 2022
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.96 | 6 | 5.1701 |
| No log | 1.96 | 12 | 4.1214 |
| No log | 2.96 | 18 | 2.5305 |
| No log | 3.96 | 24 | 2.0324 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
### Perplexity
Score: 17.31455421447754
### Dataset Size
Size: 1000 |
horsbug98/Part_2_mBERT_Model_E1 | e4c205ab6b6426bb5e73a2a2daf75391f1db8806 | 2022-03-16T17:01:57.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:tydiqa",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | horsbug98 | null | horsbug98/Part_2_mBERT_Model_E1 | 0 | null | transformers | 36,513 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tydiqa
model-index:
- name: debug_mbert_task2_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# debug_mbert_task2_1
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the tydiqa secondary_task dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 2.0.0
- Tokenizers 0.10.3
|
nandezgarcia/distilbert-base-uncased-finetuned-squad-d5716d28 | f7ee7a8a1c00fbe8bd63b5b39f56c92e631b896f | 2022-03-16T18:26:49.000Z | [
"pytorch",
"en",
"dataset:squad",
"arxiv:1910.01108",
"question-answering",
"license:apache-2.0"
] | question-answering | false | nandezgarcia | null | nandezgarcia/distilbert-base-uncased-finetuned-squad-d5716d28 | 0 | null | null | 36,514 | ---
language:
- en
thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg
tags:
- question-answering
license: apache-2.0
datasets:
- squad
metrics:
- squad
---
# DistilBERT with a second step of distillation
## Model description
This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation.
In this version, the following pre-trained models were used:
* Student: `distilbert-base-uncased`
* Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1`
## Training data
This model was trained on the SQuAD v1.1 dataset which can be obtained from the `datasets` library as follows:
```python
from datasets import load_dataset
squad = load_dataset('squad')
```
## Training procedure
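The card leaves this section blank. For orientation only, here is a minimal sketch of the task-specific distillation objective described above, assuming a standard soft-target loss with temperature `T`; the `T` and `alpha` values are assumptions, not the authors' released code:
```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft-target KL term against the teacher, scaled by T^2 as is standard,
    # plus the usual hard-label cross-entropy. T and alpha are assumed values.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```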
## Eval results
| | Exact Match | F1 |
|------------------|-------------|------|
| DistilBERT paper | 79.1 | 86.9 |
| Ours | 78.4 | 86.5 |
The scores were calculated using the `squad` metric from `datasets`.
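For instance, a hedged sketch using the metric's documented input format (the example id and answer are arbitrary; newer `datasets` versions moved metrics to the `evaluate` library):
```python
from datasets import load_metric

squad_metric = load_metric("squad")
predictions = [{"id": "1", "prediction_text": "Denver Broncos"}]
references = [{
    "id": "1",
    "answers": {"text": ["Denver Broncos"], "answer_start": [177]},
}]
print(squad_metric.compute(predictions=predictions, references=references))
# -> {'exact_match': 100.0, 'f1': 100.0}
```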
### BibTeX entry and citation info
```bibtex
@misc{sanh2020distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
year={2020},
eprint={1910.01108},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
newtonkwan/gpt2-xl-ft-1 | e31681d9c0b1f44f2bb0ece35e1417058f31bdbc | 2022-03-16T23:52:23.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
] | text-generation | false | newtonkwan | null | newtonkwan/gpt2-xl-ft-1 | 0 | null | transformers | 36,515 | ---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl-ft-with-non-challenging
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl-ft-with-non-challenging
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4872
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 2020
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.99 | 31 | 1.5517 |
| No log | 1.99 | 62 | 1.3733 |
| No log | 2.99 | 93 | 1.4207 |
| No log | 3.99 | 124 | 1.4872 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
### Perplexity
Score: 28.26373863220215
### Dataset Size
Size: 5000 |
radev/xlm-roberta-base-finetuned-panx-de | 7509bc5d172ff94e83a2a43745e655b52ea1cb49 | 2022-03-23T22:27:27.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | radev | null | radev/xlm-roberta-base-finetuned-panx-de | 0 | null | transformers | 36,516 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8593216480764853
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1345
- F1: 0.8593
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 263 | 0.1807 | 0.8065 |
| 0.2218 | 2.0 | 526 | 0.1365 | 0.8485 |
| 0.2218 | 3.0 | 789 | 0.1345 | 0.8593 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
huggingtweets/ericson_ubbhult | aea9b060ee62687896756b9314a5a21af9d65867 | 2022-05-31T08:40:47.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/ericson_ubbhult | 0 | null | transformers | 36,517 | ---
language: en
thumbnail: http://www.huggingtweets.com/ericson_ubbhult/1653986423351/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1829196789/bild_400x400.JPG')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Jan Ericson πΈπͺπΊπ¦</div>
<div style="text-align: center; font-size: 14px;">@ericson_ubbhult</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Jan Ericson 🇸🇪🇺🇦.
| Data | Jan Ericson 🇸🇪🇺🇦 |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 434 |
| Short tweets | 232 |
| Tweets kept | 2583 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/imczgylz/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ericson_ubbhult's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1mmecont) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1mmecont/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ericson_ubbhult')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
negfir/Distill_4L | 8b26884cda182d4c3a282a833fc13efef715d399 | 2022-03-17T01:15:51.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/Distill_4L | 0 | null | transformers | 36,518 | Entry not found |
lijingxin/mt5_squad_zen_qg | 92cae6a68faa8641e55e839d71f56384ef2d14c6 | 2022-03-17T08:54:02.000Z | [
"pytorch"
] | null | false | lijingxin | null | lijingxin/mt5_squad_zen_qg | 0 | null | null | 36,519 | Entry not found |
huggingtweets/missdaytona | 2da37ecd99a863945b0c77c4b6e3c3b9eaf14014 | 2022-03-17T10:44:00.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/missdaytona | 0 | null | transformers | 36,520 | ---
language: en
thumbnail: http://www.huggingtweets.com/missdaytona/1647513656155/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1487686479/Tanner1_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">xx</div>
<div style="text-align: center; font-size: 14px;">@missdaytona</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from xx.
| Data | xx |
| --- | --- |
| Tweets downloaded | 162 |
| Retweets | 0 |
| Short tweets | 29 |
| Tweets kept | 133 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2gy072xq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @missdaytona's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/8310y47m) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/8310y47m/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/missdaytona')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
saghar/TinyBERT_L-4_H-312_v2-finetuned-wikitext103 | a6f79a9bce22cb094fa6b0598487e1ceec701e96 | 2022-03-17T15:59:39.000Z | [
"pytorch",
"bert",
"fill-mask",
"dataset:wikitext",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | saghar | null | saghar/TinyBERT_L-4_H-312_v2-finetuned-wikitext103 | 0 | null | transformers | 36,521 | ---
tags:
- generated_from_trainer
datasets:
- wikitext
model-index:
- name: TinyBERT_L-4_H-312_v2-finetuned-wikitext103
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TinyBERT_L-4_H-312_v2-finetuned-wikitext103
This model is a fine-tuned version of [nreimers/TinyBERT_L-4_H-312_v2](https://huggingface.co/nreimers/TinyBERT_L-4_H-312_v2) on the wikitext dataset.
It achieves the following results on the evaluation set:
- Loss: 6.4638
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.0604 | 1.0 | 3125 | 6.6745 |
| 6.7122 | 2.0 | 6250 | 6.5061 |
| 6.6289 | 3.0 | 9375 | 6.4638 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.8.1
- Datasets 1.11.0
- Tokenizers 0.10.3
|
mideind/IceBERT-mC4-is | 6802afb1a400df0c5a5eb9eb508cdf7ad8b07a48 | 2022-03-17T14:05:41.000Z | [
"pytorch",
"roberta",
"fill-mask",
"is",
"arxiv:2201.05601",
"transformers",
"icelandic",
"masked-lm",
"license:agpl-3.0",
"autotrain_compatible"
] | fill-mask | false | mideind | null | mideind/IceBERT-mC4-is | 0 | null | transformers | 36,522 | ---
language: is
widget:
- text: Má bjóða þér <mask> í kvöld?
- text: Forseti <mask> er ágæt.
- text: Súpan var <mask> á bragðið.
tags:
- roberta
- icelandic
- masked-lm
- pytorch
license: agpl-3.0
---
*We do not recommend the use of this model besides for comparison with the other IceBERT models*
# IceBERT-mC4-is
This model was trained with fairseq using the RoBERTa-base architecture. It is one of many models we have trained for Icelandic, see the paper referenced below for further details. It was trained on the Icelandic part of the mC4 dataset.
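As a hedged usage sketch, e.g. for the comparison experiments the authors mention (standard `fill-mask` pipeline; the example sentence is taken from the widget above):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="mideind/IceBERT-mC4-is")
for pred in fill("Má bjóða þér <mask> í kvöld?"):
    print(pred["token_str"], pred["score"])
```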
## Citation
The model is described in this paper [https://arxiv.org/abs/2201.05601](https://arxiv.org/abs/2201.05601). Please cite the paper if you make use of the model.
```
@article{DBLP:journals/corr/abs-2201-05601,
author = {V{\'{e}}steinn Sn{\ae}bjarnarson and
Haukur Barri S{\'{\i}}monarson and
P{\'{e}}tur Orri Ragnarsson and
Svanhv{\'{\i}}t Lilja Ing{\'{o}}lfsd{\'{o}}ttir and
Haukur P{\'{a}}ll J{\'{o}}nsson and
Vilhj{\'{a}}lmur {\TH}orsteinsson and
Hafsteinn Einarsson},
title = {A Warm Start and a Clean Crawled Corpus - {A} Recipe for Good Language
Models},
journal = {CoRR},
volume = {abs/2201.05601},
year = {2022},
url = {https://arxiv.org/abs/2201.05601},
eprinttype = {arXiv},
eprint = {2201.05601},
timestamp = {Thu, 20 Jan 2022 14:21:35 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2201-05601.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
mideind/IceBERT-xlmr-ic3 | 51ac5ff8594fb6c26028bd3cf700a9c91cbf9d9f | 2022-03-17T14:02:17.000Z | [
"pytorch",
"roberta",
"fill-mask",
"is",
"arxiv:2201.05601",
"transformers",
"icelandic",
"masked-lm",
"license:agpl-3.0",
"autotrain_compatible"
] | fill-mask | false | mideind | null | mideind/IceBERT-xlmr-ic3 | 0 | null | transformers | 36,523 | ---
language: is
widget:
- text: Má bjóða þér <mask> í kvöld?
- text: Forseti <mask> er ágæt.
- text: Súpan var <mask> á bragðið.
tags:
- roberta
- icelandic
- masked-lm
- pytorch
license: agpl-3.0
---
# IceBERT-xlmr-ic3
This model was trained with fairseq using the RoBERTa-base architecture. The model `xlm-roberta-base` was used as a starting point. It is one of many models we have trained for Icelandic, see the paper referenced below for further details. The training data used is shown in the table below.
| Dataset | Size | Tokens |
|------------------------------------------------------|---------|--------|
| Icelandic Common Crawl Corpus (IC3) | 4.9 GB | 824M |
## Citation
The model is described in this paper [https://arxiv.org/abs/2201.05601](https://arxiv.org/abs/2201.05601). Please cite the paper if you make use of the model.
```
@article{DBLP:journals/corr/abs-2201-05601,
author = {V{\'{e}}steinn Sn{\ae}bjarnarson and
Haukur Barri S{\'{\i}}monarson and
P{\'{e}}tur Orri Ragnarsson and
Svanhv{\'{\i}}t Lilja Ing{\'{o}}lfsd{\'{o}}ttir and
Haukur P{\'{a}}ll J{\'{o}}nsson and
Vilhj{\'{a}}lmur {\TH}orsteinsson and
Hafsteinn Einarsson},
title = {A Warm Start and a Clean Crawled Corpus - {A} Recipe for Good Language
Models},
journal = {CoRR},
volume = {abs/2201.05601},
year = {2022},
url = {https://arxiv.org/abs/2201.05601},
eprinttype = {arXiv},
eprint = {2201.05601},
timestamp = {Thu, 20 Jan 2022 14:21:35 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2201-05601.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
sanchit-gandhi/wav2vec2-2-bart-debug | fb493c8c3f5b768ee26118fde0d9a82b1f8a64fd | 2022-03-17T16:28:55.000Z | [
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"dataset:librispeech_asr",
"transformers",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | sanchit-gandhi | null | sanchit-gandhi/wav2vec2-2-bart-debug | 0 | null | transformers | 36,524 | ---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
transZ/BART_shared_aug | fc40ec2c70d30620befbd1c5c99daaeba6f44614 | 2022-04-15T11:08:38.000Z | [
"pytorch",
"shared_bart",
"transformers"
] | null | false | transZ | null | transZ/BART_shared_aug | 0 | null | transformers | 36,525 | Entry not found |
niksss/Hinglish-HATEBERT | 635c85ccc835f6b51c8905eda7072e80ba737e50 | 2022-03-17T18:43:00.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers",
"license:afl-3.0"
] | feature-extraction | false | niksss | null | niksss/Hinglish-HATEBERT | 0 | null | transformers | 36,526 | ---
license: afl-3.0
---
Fine-tune it using this [notebook](https://colab.research.google.com/drive/1JRmrAYR0pcEWyni_VtT4SSFxZ5adlAhS?usp=sharing) |
artemis13fowl/bert-base-cased-imdb | 02268ffdcad91ee5ccfc0565fecaa8ce4c0ef6bb | 2022-03-18T10:01:35.000Z | [
"pytorch"
] | null | false | artemis13fowl | null | artemis13fowl/bert-base-cased-imdb | 0 | null | null | 36,527 | Entry not found |
artemis13fowl/bert-base-cased-imdb-tmp | 64f89a3dad051077d0cffac3192afd0656ff75fe | 2022-03-18T09:53:17.000Z | [
"pytorch"
] | null | false | artemis13fowl | null | artemis13fowl/bert-base-cased-imdb-tmp | 0 | null | null | 36,528 | Entry not found |
nairoj/Bert_ANT | 677c5a6eb063e68a284897df74c955c411f7f64d | 2022-05-30T14:29:36.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | nairoj | null | nairoj/Bert_ANT | 0 | null | transformers | 36,529 | ---
license: mit
---
|
facebook/regnet-x-080 | 4f41bade4f37141a9aea824fd3bf7519733f0a46 | 2022-06-30T10:14:32.000Z | [
"pytorch",
"tf",
"regnet",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2003.13678",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | facebook | null | facebook/regnet-x-080 | 0 | null | transformers | 36,530 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# RegNet
RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls).
Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The authors design search spaces to perform Neural Architecture Search (NAS). They first start from a high dimensional search space and iteratively reduce the search space by empirically applying constraints based on the best-performing models sampled by the current search space.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, RegNetForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("zuppif/regnet-y-040")
>>> model = RegNetForImageClassification.from_pretrained("zuppif/regnet-y-040")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
'tabby, tabby cat'
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet). |
facebook/regnet-x-160 | 30ed4735d93a87db5d5b2c41c0c7049c13b01265 | 2022-06-30T10:14:35.000Z | [
"pytorch",
"tf",
"regnet",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2003.13678",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | facebook | null | facebook/regnet-x-160 | 0 | null | transformers | 36,531 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# RegNet
RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls).
Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The authors design search spaces to perform Neural Architecture Search (NAS). They first start from a high dimensional search space and iteratively reduce the search space by empirically applying constraints based on the best-performing models sampled by the current search space.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, RegNetForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/regnet-x-160")
>>> model = RegNetForImageClassification.from_pretrained("facebook/regnet-x-160")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
'tabby, tabby cat'
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet). |
facebook/regnet-y-016 | 5f453e35ddd0a5c1297dec982ac984a1359a8850 | 2022-06-28T11:38:42.000Z | [
"pytorch",
"tf",
"regnet",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2003.13678",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | facebook | null | facebook/regnet-y-016 | 0 | null | transformers | 36,532 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# RegNet
RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls).
Disclaimer: The team releasing RegNet did not write a model card for this model, so this model card has been written by the Hugging Face team.
## Model description
The authors design search spaces to perform Neural Architecture Search (NAS). They first start from a high dimensional search space and iteratively reduce the search space by empirically applying constraints based on the best-performing models sampled by the current search space.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, RegNetForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/regnet-y-016")
>>> model = RegNetForImageClassification.from_pretrained("facebook/regnet-y-016")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
'tabby, tabby cat'
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet). |
huggingtweets/sappublicsector | 2e18aab40d1f2ffe63a52d119fe53a451e663995 | 2022-03-18T17:46:32.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/sappublicsector | 0 | null | transformers | 36,533 | ---
language: en
thumbnail: http://www.huggingtweets.com/sappublicsector/1647625586483/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1486782108030930950/2JS43mTA_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">SAP Public Sector</div>
<div style="text-align: center; font-size: 14px;">@sappublicsector</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from SAP Public Sector.
| Data | SAP Public Sector |
| --- | --- |
| Tweets downloaded | 3200 |
| Retweets | 38 |
| Short tweets | 0 |
| Tweets kept | 3162 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2alb74qi/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sappublicsector's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/sppp2pwd) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/sppp2pwd/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/sappublicsector')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
lilitket/xlsrhylm | b008f39a81dd60bd8942eb477b17a89d8d3fb51b | 2022-03-19T00:55:04.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/xlsrhylm | 0 | null | transformers | 36,534 | Entry not found |
huggingtweets/abombayboy | 155cd3400d646f558929100dbbd399fa7ba46a27 | 2022-03-19T16:13:12.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/abombayboy | 0 | null | transformers | 36,535 | ---
language: en
thumbnail: http://www.huggingtweets.com/abombayboy/1647706387106/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1465673407178043396/aYbTBRbu_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Bombay Boy</div>
<div style="text-align: center; font-size: 14px;">@abombayboy</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Bombay Boy.
| Data | Bombay Boy |
| --- | --- |
| Tweets downloaded | 3238 |
| Retweets | 927 |
| Short tweets | 181 |
| Tweets kept | 2130 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3paz3q98/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @abombayboy's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/331ordwj) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/331ordwj/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/abombayboy')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
lilitket/xlsrhylm_new | b281d3db8e2f0eb72f7cb7c08c5c65d2f469544f | 2022-03-19T18:14:12.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/xlsrhylm_new | 0 | null | transformers | 36,536 | Entry not found |
saghar/xtremedistil-l6-h384-uncased-finetuned-wikitext103 | 51fe482cad6255dd36a48bd62fdb1a6b5cfd0abd | 2022-03-20T23:45:34.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"dataset:wikitext",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | fill-mask | false | saghar | null | saghar/xtremedistil-l6-h384-uncased-finetuned-wikitext103 | 0 | null | transformers | 36,537 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- wikitext
model-index:
- name: xtremedistil-l6-h384-uncased-finetuned-wikitext103
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xtremedistil-l6-h384-uncased-finetuned-wikitext103
This model is a fine-tuned version of [microsoft/xtremedistil-l6-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h384-uncased) on the wikitext dataset.
It achieves the following results on the evaluation set:
- Loss: 6.5526
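Since the objective is masked-language modelling, the evaluation loss implies a perplexity of roughly exp(6.5526) ≈ 700; a one-line check:
```python
import math
print(math.exp(6.5526))  # ~701, the evaluation perplexity implied by the loss above
```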
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.1974 | 1.0 | 3125 | 6.7483 |
| 6.8171 | 2.0 | 6250 | 6.5962 |
| 6.7483 | 3.0 | 9375 | 6.5526 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.11.0
- Datasets 1.1.1
- Tokenizers 0.10.1
|
willcai/wav2vec2_common_voice_accents_6 | 6b37d3882b9045280124b84d9d3b73a6f580b128 | 2022-03-20T08:23:23.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | willcai | null | willcai/wav2vec2_common_voice_accents_6 | 0 | null | transformers | 36,538 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2_common_voice_accents_6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2_common_voice_accents_6
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3711
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 48
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 384
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
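Note that the derived totals above follow from the per-device settings: 48 train examples per device × 8 devices = 384 total train batch size, and 4 × 8 = 32 for evaluation.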
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8539 | 25.0 | 400 | 0.3711 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.4
- Tokenizers 0.11.6
|
pinkducky/Monica_Bot | dc42f3598b1113eda2c2295a2a090ff50726c6c0 | 2022-03-20T13:16:37.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | pinkducky | null | pinkducky/Monica_Bot | 0 | null | transformers | 36,539 | ---
tags:
- conversational
---
# My Awesome Model
|
wasilkas/wav2vec2-base-timit-demo-colab | c2daa6d33a2e222d6aa33dec71c6d49b69c5e661 | 2022-03-20T20:04:11.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | wasilkas | null | wasilkas/wav2vec2-base-timit-demo-colab | 0 | null | transformers | 36,540 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the TIMIT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4491
- Wer: 0.3382
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
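A minimal sketch of this configuration as `TrainingArguments` (illustrative only: the output path is an assumption, the data pipeline and `Wav2Vec2ForCTC` setup are omitted, and the Adam betas/epsilon match the library defaults):
```python
from transformers import TrainingArguments
training_args = TrainingArguments(
    output_dir="wav2vec2-base-timit-demo-colab",  # assumed output path
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=30,
    fp16=True,  # "Native AMP" mixed-precision training
)
```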
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4787 | 4.0 | 500 | 1.4190 | 0.9939 |
| 0.5835 | 8.0 | 1000 | 0.4711 | 0.4370 |
| 0.219 | 12.0 | 1500 | 0.4555 | 0.3994 |
| 0.1251 | 16.0 | 2000 | 0.4515 | 0.3654 |
| 0.0834 | 20.0 | 2500 | 0.4923 | 0.3564 |
| 0.0632 | 24.0 | 3000 | 0.4410 | 0.3399 |
| 0.0491 | 28.0 | 3500 | 0.4491 | 0.3382 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
snehatyagi/wav2vec2_timit | 7ca4c48bdd89ff2464c6fb337f351c021fd15ea2 | 2022-03-23T05:41:25.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | snehatyagi | null | snehatyagi/wav2vec2_timit | 0 | null | transformers | 36,541 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2_timit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2_timit
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0791
- Wer: 1.0
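A WER of 1.0 means no reference words were recovered, i.e. this run did not converge (the 0.01 learning rate is far above the 1e-4 to 3e-4 range typical for wav2vec2 fine-tuning). For reference, WER counts word-level substitutions, deletions and insertions against the reference length; a small sketch using the `jiwer` package (an assumption, any WER implementation works):
```python
import jiwer  # pip install jiwer
reference = "she had your dark suit in greasy wash water all year"  # a classic TIMIT prompt
hypothesis = "she had dark suit in wash water year"                 # hypothetical model output
print(jiwer.wer(reference, hypothesis))  # fraction of word errors relative to the reference
```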
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 3.1506 | 2.4 | 300 | 3.1294 | 1.0 |
| 3.0957 | 4.8 | 600 | 3.0791 | 1.0 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.6
|
tau/fewsion_2_1024_0.3_epoch1 | 68a87bfab18b5773e3aa09dcd0d85f8d886a9de6 | 2022-03-21T07:48:34.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tau | null | tau/fewsion_2_1024_0.3_epoch1 | 0 | null | transformers | 36,542 | Entry not found |
tau/pegasus_1024_0.3_epoch1_v2 | 4da6a01fd5f5c446a871e8064692a59f2255c3e2 | 2022-03-21T07:53:47.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tau | null | tau/pegasus_1024_0.3_epoch1_v2 | 0 | null | transformers | 36,543 | Entry not found |
tau/random_1024_0.3_epoch1_v2 | fe801f51d07d7cb8f3da162fda8f36781af61e2f | 2022-03-21T07:58:59.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tau | null | tau/random_1024_0.3_epoch1_v2 | 0 | null | transformers | 36,544 | Entry not found |
tau/t5_1024_0.3_epoch1_v2 | cefb4d893c8fd080e9c8e68ba2328190b2324562 | 2022-03-21T08:04:37.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tau | null | tau/t5_1024_0.3_epoch1_v2 | 0 | null | transformers | 36,545 | Entry not found |
huggingtweets/victoriamonet | 1287bde7987dffa450938ede8e5a1e97fae5d043 | 2022-03-21T13:07:26.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/victoriamonet | 0 | null | transformers | 36,546 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1504478055275802628/EuQs8_M7_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Victoria MonΓ©t</div>
<div style="text-align: center; font-size: 14px;">@victoriamonet</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Victoria Monét.
| Data | Victoria Monét |
| --- | --- |
| Tweets downloaded | 3172 |
| Retweets | 302 |
| Short tweets | 593 |
| Tweets kept | 2277 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2qwme5s7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @victoriamonet's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1zqoy9ki) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1zqoy9ki/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/victoriamonet')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/rupertboneham-rupertskids-survivorcbs | dfde4fd79f34ff824a3b6c1014940fc23774fb3a | 2022-03-21T13:31:40.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/rupertboneham-rupertskids-survivorcbs | 0 | null | transformers | 36,547 | ---
language: en
thumbnail: http://www.huggingtweets.com/rupertboneham-rupertskids-survivorcbs/1647869465531/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/2879716355/bd3a0d75f2ec004c61cf470e66895eda_400x400.png')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/984777181963448321/GZEqLnVr_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1488244197467381765/3F2BzfCJ_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI CYBORG π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Rupert Boneham & Rupert Boneham & SURVIVOR</div>
<div style="text-align: center; font-size: 14px;">@rupertboneham-rupertskids-survivorcbs</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Rupert Boneham & Rupert Boneham & SURVIVOR.
| Data | Rupert Boneham | Rupert Boneham | SURVIVOR |
| --- | --- | --- | --- |
| Tweets downloaded | 3139 | 352 | 3222 |
| Retweets | 710 | 151 | 551 |
| Short tweets | 142 | 17 | 540 |
| Tweets kept | 2287 | 184 | 2131 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2m3rl64a/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @rupertboneham-rupertskids-survivorcbs's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1o5vktei) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1o5vktei/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/rupertboneham-rupertskids-survivorcbs')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ukr-models/uk-ner-quantized | 49e0989c9bb6c908bc09864e96e57e48a5af9bb7 | 2022-03-22T17:37:16.000Z | [
"pytorch",
"uk",
"ukrainian",
"license:mit"
] | null | false | ukr-models | null | ukr-models/uk-ner-quantized | 0 | 1 | null | 36,548 | ---
language:
- uk
tags:
- ukrainian
license: mit
---
## Model Description
Quantized version of the [uk-ner model](https://huggingface.co/ukr-models/uk-ner). Returns B-PER, I-PER, B-LOC, I-LOC, B-ORG and I-ORG tags.
## How to Use
After cloning the repository, use the following code (download the script get_predictions.py from the repository; it uses the [tokenize_uk package](https://pypi.org/project/tokenize_uk/) for sentence splitting):
```py
from transformers import AutoTokenizer
import torch
from get_predictions import get_word_predictions
tokenizer = AutoTokenizer.from_pretrained("./")
model = torch.load("./pytorch_model.bin")
labels_list = ['O','B-PER','I-PER','B-ORG','I-ORG','B-LOC','I-LOC']
texts = ["ΠΠΎΠ³ΠΈΠ»Π° Π’Π°ΡΠ°ΡΠ° Π¨Π΅Π²ΡΠ΅Π½ΠΊΠ° β ΠΌΡΡΡΠ΅ ΠΏΠΎΡ
ΠΎΠ²Π°Π½Π½Ρ Π²ΠΈΠ΄Π°ΡΠ½ΠΎΠ³ΠΎ ΡΠΊΡΠ°ΡΠ½ΡΡΠΊΠΎΠ³ΠΎ ΠΏΠΎΠ΅ΡΠ° Π’Π°ΡΠ°ΡΠ° Π¨Π΅Π²ΡΠ΅Π½ΠΊΠ° Π² ΠΌΡΡΡΡ ΠΠ°Π½ΡΠ² (Π§Π΅ΡΠΊΠ°ΡΡΠΊΠ° ΠΎΠ±Π»Π°ΡΡΡ) Π½Π° Π§Π΅ΡΠ½Π΅ΡΡΠΉ Π³ΠΎΡΡ, Π½Π°Π΄ ΡΠΊΠΈΠΌ ΡΠ· 1939 ΡΠΎΠΊΡ Π²ΠΈΡΠΎΡΡΡ Π±ΡΠΎΠ½Π·ΠΎΠ²ΠΈΠΉ ΠΏΠ°ΠΌ'ΡΡΠ½ΠΈΠΊ ΡΠΎΠ±ΠΎΡΠΈ ΡΠΊΡΠ»ΡΠΏΡΠΎΡΠ° ΠΠ°ΡΠ²ΡΡ ΠΠ°Π½ΡΠ·Π΅ΡΠ°."]
get_word_predictions(model, tokenizer, texts, labels_list)
```
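The quantization step itself is not documented here; a plausible recipe is PyTorch dynamic quantization of the full-precision [uk-ner](https://huggingface.co/ukr-models/uk-ner) checkpoint, sketched below (an assumption about how this artifact was produced, not a documented procedure):
```python
import torch
from transformers import AutoModelForTokenClassification
model = AutoModelForTokenClassification.from_pretrained("ukr-models/uk-ner")
quantized = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)
torch.save(quantized, "pytorch_model.bin")  # whole-model save, matching torch.load("./pytorch_model.bin") above
```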
|
huggingtweets/rebeudeter | 76944e900bd7defcf17bcfc094d90115eec0c9e2 | 2022-03-21T17:55:17.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/rebeudeter | 0 | null | transformers | 36,549 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1421289007753859077/3X1VHMRx_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Billy βοΈπ§‘</div>
<div style="text-align: center; font-size: 14px;">@rebeudeter</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Billy ☀️🧡.
| Data | Billy ☀️🧡 |
| --- | --- |
| Tweets downloaded | 3220 |
| Retweets | 363 |
| Short tweets | 205 |
| Tweets kept | 2652 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3mz5i9lj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @rebeudeter's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1qau529e) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1qau529e/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/rebeudeter')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ukr-models/uk-morph-quantized | a12725c157526fa38278b9dd112b31e30800e4cc | 2022-03-22T17:29:18.000Z | [
"pytorch",
"uk",
"ukrainian",
"license:mit"
] | null | false | ukr-models | null | ukr-models/uk-morph-quantized | 0 | null | null | 36,550 | ---
language:
- uk
tags:
- ukrainian
license: mit
---
## Model Description
Quantized version of the [uk-morph model](https://huggingface.co/ukr-models/uk-morph). Returns both UPOS tags and morphological features (joined by a double underscore).
## How to Use
After cloning the repository, use the following code (download the script get_predictions.py from the repository; it uses the [tokenize_uk package](https://pypi.org/project/tokenize_uk/) for sentence splitting):
```py
from transformers import AutoTokenizer
import torch
from get_predictions import get_word_predictions
tokenizer = AutoTokenizer.from_pretrained("./")
model = torch.load("./pytorch_model.bin")
with open('./morph_labels.txt', 'r') as labels_file:
    labels_list = labels_file.readlines()
labels_list = [label.strip() for label in labels_list]
texts = ["Могила Тараса Шевченка — місце поховання видатного українського поета Тараса Шевченка в місті Канів (Черкаська область) на Чернечій горі, над яким із 1939 року височіє бронзовий пам'ятник роботи скульптора Матвія Манізера."]
get_word_predictions(model, tokenizer, texts, labels_list)
```
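Each predicted label packs the UPOS tag and the feature string into a single token, so splitting on the double underscore recovers both parts; a tiny illustration (the label value below is made up for demonstration):
```python
label = "NOUN__Animacy=Inan|Case=Nom|Gender=Fem|Number=Sing"  # hypothetical prediction
upos, feats = label.split("__", 1)
print(upos)   # NOUN
print(feats)  # Animacy=Inan|Case=Nom|Gender=Fem|Number=Sing
```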
|
huggingtweets/elonmusk-garyvee | 88928fcdde48869ffd1447940415455d43ec6f25 | 2022-03-21T19:57:10.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/elonmusk-garyvee | 0 | null | transformers | 36,551 | ---
language: en
thumbnail: http://www.huggingtweets.com/elonmusk-garyvee/1647892564866/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1503591435324563456/foUrqiEw_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1493524673962852353/qRxbC9Xq_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI CYBORG π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & Gary Vaynerchuk</div>
<div style="text-align: center; font-size: 14px;">@elonmusk-garyvee</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Elon Musk & Gary Vaynerchuk.
| Data | Elon Musk | Gary Vaynerchuk |
| --- | --- | --- |
| Tweets downloaded | 2200 | 3247 |
| Retweets | 102 | 712 |
| Short tweets | 671 | 842 |
| Tweets kept | 1427 | 1693 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/abt9l46e/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @elonmusk-garyvee's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/4wls4y5v) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/4wls4y5v/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/elonmusk-garyvee')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
kazandaev/opus-mt-en-ru-finetuned-v2 | 41160cf6ad82e56f8c1698870e50268a54af1349 | 2022-03-22T15:25:15.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | kazandaev | null | kazandaev/opus-mt-en-ru-finetuned-v2 | 0 | null | transformers | 36,552 | ---
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-en-ru-finetuned-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ru-finetuned-v2
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7517
- Bleu: 41.0306
- Gen Len: 29.5078
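The checkpoint is a Marian English-to-Russian translation model, so it can be tried with the standard pipeline; a minimal sketch (inferred from the model type, not an official example):
```python
from transformers import pipeline
translator = pipeline("translation", model="kazandaev/opus-mt-en-ru-finetuned-v2")
print(translator("The weather is nice today.")[0]["translation_text"])
```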
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 20
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step   | Validation Loss | Bleu    | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|
| 0.8091        | 1.0   | 85978  | 0.7727          | 39.9389 | 29.6753 |
| 0.7826        | 2.0   | 171956 | 0.7679          | 40.1955 | 29.5947 |
| 0.7804        | 3.0   | 257934 | 0.7609          | 40.3659 | 29.5642 |
| 0.7695        | 4.0   | 343912 | 0.7551          | 40.7947 | 29.5568 |
| 0.7546        | 5.0   | 429890 | 0.7517          | 41.0306 | 29.5078 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
ntoldalagi/C0_LID_DEV | 78039d1646f5ec0eaace16c43e50e25e410582c7 | 2022-03-28T15:46:21.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ntoldalagi | null | ntoldalagi/C0_LID_DEV | 0 | null | transformers | 36,553 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: C0_LID_DEV
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# C0_LID_DEV
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: inf
- Wer: 0.8267
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| No log | 0.0 | 25 | inf | 0.8426 |
| 1.5354 | 0.17 | 2000 | inf | 0.8198 |
| 1.5688 | 0.33 | 4000 | inf | 0.8271 |
| 1.5294 | 0.5 | 6000 | inf | 0.8339 |
| 1.1947 | 0.67 | 8000 | inf | 0.8260 |
| 1.1534 | 0.83 | 10000 | inf | 0.8267 |
| 1.1484 | 1.0 | 12000 | inf | 0.8267 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
lsb/wav2vec2-base-lm-pemlsb-la-v2 | 0f36d0642287f3e247bdfac16ea53bdecd555e2f | 2022-03-21T21:41:31.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"license:agpl-3.0"
] | automatic-speech-recognition | false | lsb | null | lsb/wav2vec2-base-lm-pemlsb-la-v2 | 0 | null | transformers | 36,554 | ---
license: agpl-3.0
---
|
tau/random_1024_0.3_epoch2_v2 | b451308f191f15a82e684be0a4c0473e287c19a0 | 2022-03-22T10:51:54.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tau | null | tau/random_1024_0.3_epoch2_v2 | 0 | null | transformers | 36,555 | Entry not found |
tau/t5_1024_0.3_epoch2_v2 | 4aa56399b0c19fae9e70d18dda0f002351275c7c | 2022-03-22T10:56:53.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tau | null | tau/t5_1024_0.3_epoch2_v2 | 0 | null | transformers | 36,556 | Entry not found |
tau/t5_lm_1024_0.3_epoch2_v2 | 45ac26c4c1f705c88a39da439d5cf8b9165ab22c | 2022-03-22T11:02:27.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tau | null | tau/t5_lm_1024_0.3_epoch2_v2 | 0 | null | transformers | 36,557 | Entry not found |
huggingtweets/laurentozon | 0f0c1be2f210bb6d4760b05b0c8a35bf4ec5ebcd | 2022-03-22T12:21:52.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/laurentozon | 0 | null | transformers | 36,558 | ---
language: en
thumbnail: http://www.huggingtweets.com/laurentozon/1647951707700/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1505670688635564034/K4L2yhhB_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Laurent Ozon</div>
<div style="text-align: center; font-size: 14px;">@laurentozon</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Laurent Ozon.
| Data | Laurent Ozon |
| --- | --- |
| Tweets downloaded | 3192 |
| Retweets | 753 |
| Short tweets | 382 |
| Tweets kept | 2057 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3uddth9b/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @laurentozon's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2dzqbuuu) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2dzqbuuu/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/laurentozon')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
rahulkuruvilla/COVID-BERTa | 07cec39b45e963d677f0551322f07593b51329f9 | 2022-03-22T22:56:36.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | rahulkuruvilla | null | rahulkuruvilla/COVID-BERTa | 0 | null | transformers | 36,559 | Entry not found |
rahulkuruvilla/COVID-DistilBERTb | bb5583fae97f2f29adecec4f76590bd7765413e1 | 2022-03-22T21:54:46.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | rahulkuruvilla | null | rahulkuruvilla/COVID-DistilBERTb | 0 | null | transformers | 36,560 | Entry not found |
rahulkuruvilla/COVID-BERTb | 38465425fd533c4975c8e0dc2eccf860693ee28e | 2022-03-22T21:57:46.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | rahulkuruvilla | null | rahulkuruvilla/COVID-BERTb | 0 | null | transformers | 36,561 | Entry not found |
rahulkuruvilla/COVID-BERTc | a652cdfc639d1d45cf043c8be31da70a7618f306 | 2022-03-22T22:24:22.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | rahulkuruvilla | null | rahulkuruvilla/COVID-BERTc | 0 | null | transformers | 36,562 | Entry not found |
rahulkuruvilla/COVID-DistilBERTc | af771023a5bc00111d9411b4881517ab081b2cd5 | 2022-03-22T22:28:31.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | rahulkuruvilla | null | rahulkuruvilla/COVID-DistilBERTc | 0 | null | transformers | 36,563 | Entry not found |
mimicheng/codeparrot-ds-sample | f77824dd2887e11a555377a6a8606ebe37de68a1 | 2022-03-23T05:30:38.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | mimicheng | null | mimicheng/codeparrot-ds-sample | 0 | null | transformers | 36,564 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds-sample
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds-sample
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6003
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
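A minimal sketch of this configuration as `TrainingArguments` (illustrative only: the output path is an assumption and the tokenizer/dataset setup is omitted):
```python
from transformers import TrainingArguments
training_args = TrainingArguments(
    output_dir="codeparrot-ds-sample",  # assumed output path
    learning_rate=5e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=8,      # 32 x 8 = 256 effective train batch size
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=1,
    fp16=True,                          # "Native AMP" mixed-precision training
)
```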
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5057 | 0.93 | 5000 | 1.6003 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
voidful/metaICL_audio_hr_to_lr | f6324c30b4e0961424075e7facb161e6b75cbc0d | 2022-03-23T08:01:59.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | voidful | null | voidful/metaICL_audio_hr_to_lr | 0 | null | transformers | 36,565 | Entry not found |
huggan/dcgan-mnist | 3e84366820d1e21da74bdbb43ff2beb36163a9d4 | 2022-03-24T14:12:34.000Z | [
"pytorch",
"generic",
"text-to-image"
] | text-to-image | false | huggan | null | huggan/dcgan-mnist | 0 | 1 | generic | 36,566 | ---
tags:
- text-to-image
library_name: generic
---
# Digit generation using DCGAN |
tau/fewsion_single_mask_1024_0.3_epoch1 | 13f46ac084397cf98d1534036537d9b9a9ad4553 | 2022-03-23T12:14:01.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tau | null | tau/fewsion_single_mask_1024_0.3_epoch1 | 0 | null | transformers | 36,567 | Entry not found |
tau/t5_single_mask_1024_0.3_epoch1 | 7187956a6b8728358538e09ccd82115a195f2444 | 2022-03-23T12:22:45.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tau | null | tau/t5_single_mask_1024_0.3_epoch1 | 0 | null | transformers | 36,568 | Entry not found |
huggan/dcgan-test | 49b50762dad0d9f717c2885cabcf53adb2d2429d | 2022-03-23T15:06:10.000Z | [
"pytorch"
] | null | false | huggan | null | huggan/dcgan-test | 0 | null | null | 36,569 | Entry not found |
pere/test-t5-small-direct | 68f7d6dfdfcc3fd597858fd626e7cdf8d2158036 | 2022-03-23T15:45:31.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | pere | null | pere/test-t5-small-direct | 0 | null | transformers | 36,570 | This is a control model. Converted directly from the original TF dataset format.
```bash
gsutil cp -R gs://t5-data/pretrained_models/small/ .
wget https://huggingface.co/t5-small/raw/main/config.json
python3 convert_t5_original_tf_checkpoint_to_pytorch.py --tf_checkpoint_path "dump/small/" --config_file "config.json" --pytorch_dump_path "/home/perk/dirconv"
```
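A quick way to sanity-check the conversion is to load the dumped weights back with transformers; a minimal sketch (the paths are assumptions, point them at the conversion output directory above):
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer
tokenizer = T5Tokenizer.from_pretrained("t5-small")  # original t5-small vocabulary
model = T5ForConditionalGeneration.from_pretrained("/home/perk/dirconv")  # converted checkpoint
inputs = tokenizer("translate English to German: Hello world", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs)[0], skip_special_tokens=True))
```
|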
huggingtweets/pierreavdb | 6b9765b70cc5524b369bc92c651316feeec97617 | 2022-03-23T16:50:02.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/pierreavdb | 0 | null | transformers | 36,571 | ---
language: en
thumbnail: http://www.huggingtweets.com/pierreavdb/1648054135143/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1479780096483512323/LmKFSR3X_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Pierre</div>
<div style="text-align: center; font-size: 14px;">@pierreavdb</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Pierre.
| Data | Pierre |
| --- | --- |
| Tweets downloaded | 1064 |
| Retweets | 172 |
| Short tweets | 133 |
| Tweets kept | 759 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/21bimkjn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @pierreavdb's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ji40nkbv) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ji40nkbv/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/pierreavdb')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/stedmanhalliday | 27a4c69dac9c60bd6ac70d6835abb013dfecb6ef | 2022-03-23T17:16:45.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/stedmanhalliday | 0 | null | transformers | 36,572 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1500999718331199496/yhpq7J8H_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">SODI</div>
<div style="text-align: center; font-size: 14px;">@stedmanhalliday</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from SODI.
| Data | SODI |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 59 |
| Short tweets | 559 |
| Tweets kept | 2632 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/4ry6l5q3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @stedmanhalliday's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1lxo4zkg) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1lxo4zkg/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/stedmanhalliday')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/metakuna | 95443d72f2a1ddcbf57d6b1cebe8f1b227180d6c | 2022-03-23T17:48:52.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/metakuna | 0 | null | transformers | 36,573 | ---
language: en
thumbnail: http://www.huggingtweets.com/metakuna/1648057688512/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1493720826935398408/hB4ndxdj_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">metakuna (8/100 blog posts)</div>
<div style="text-align: center; font-size: 14px;">@metakuna</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from metakuna (8/100 blog posts).
| Data | metakuna (8/100 blog posts) |
| --- | --- |
| Tweets downloaded | 3235 |
| Retweets | 242 |
| Short tweets | 524 |
| Tweets kept | 2469 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/9uv1luph/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @metakuna's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1k1mb79h) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1k1mb79h/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/metakuna')
generator("My dream is", num_return_sequences=5)
```
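The pipeline above wraps the usual tokenizer/model pair; if you want direct control over sampling you can call `generate` yourself. A rough equivalent is sketched below (the sampling parameters are illustrative, not the pipeline defaults):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("huggingtweets/metakuna")
model = AutoModelForCausalLM.from_pretrained("huggingtweets/metakuna")

inputs = tokenizer("My dream is", return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(**inputs, do_sample=True, top_p=0.95,
                             temperature=0.9, max_length=50,
                             num_return_sequences=5,
                             pad_token_id=tokenizer.eos_token_id)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))
```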
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/rickyflows | 7e9df88c59361ab31e5ff679ef544339f1d99086 | 2022-03-23T18:12:17.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/rickyflows | 0 | null | transformers | 36,574 | ---
language: en
thumbnail: http://www.huggingtweets.com/rickyflows/1648058984275/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1385231541278855171/lgH-Kso5_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">β ricky flowstate β</div>
<div style="text-align: center; font-size: 14px;">@rickyflows</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from β ricky flowstate β.
| Data | β ricky flowstate β |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 86 |
| Short tweets | 506 |
| Tweets kept | 2657 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/gn0lyrdk/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @rickyflows's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2fkt1gts) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2fkt1gts/artifacts) is logged and versioned.
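Because the run logs the final checkpoint as an artifact, it can also be pulled back programmatically with the W&B client. A sketch is below; the artifact path and alias are assumptions, so check the linked run page for the actual names:

```python
import wandb

api = wandb.Api()
# Hypothetical artifact path; the real name is listed on the linked run page.
artifact = api.artifact("wandb/huggingtweets/model-2fkt1gts:latest")
local_dir = artifact.download()  # returns the directory the files landed in
print(local_dir)
```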
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/rickyflows')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/lucca_dev | 09f36b8ae9a36af23c736546d7eb53e5e77578e0 | 2022-03-23T18:20:26.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/lucca_dev | 0 | null | transformers | 36,575 | ---
language: en
thumbnail: http://www.huggingtweets.com/lucca_dev/1648059357338/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1475818681628246021/sf4z2j_9_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Lucca</div>
<div style="text-align: center; font-size: 14px;">@lucca_dev</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Lucca.
| Data | Lucca |
| --- | --- |
| Tweets downloaded | 2525 |
| Retweets | 17 |
| Short tweets | 100 |
| Tweets kept | 2408 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3bq4zgob/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @lucca_dev's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2kuasht1) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2kuasht1/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/lucca_dev')
generator("My dream is", num_return_sequences=5)
```
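The pipeline also accepts a list of prompts, which is handy for generating from several openers in one call; a brief example:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingtweets/lucca_dev')
prompts = ["My dream is", "Today I learned", "Hot take:"]
# With a list input, the pipeline returns one list of results per prompt.
for result in generator(prompts, max_length=40):
    print(result[0]['generated_text'])
```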
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/mattiasinspace | 4f2ec557999536e0ec2d59ae6f1f4b057026c30f | 2022-03-23T18:30:31.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/mattiasinspace | 0 | null | transformers | 36,576 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1434246328788398081/M7Httz0A_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Mattias in Deep</div>
<div style="text-align: center; font-size: 14px;">@mattiasinspace</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Mattias in Deep.
| Data | Mattias in Deep |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 26 |
| Short tweets | 196 |
| Tweets kept | 3027 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2r9u5eoz/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mattiasinspace's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1ua0ungm) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1ua0ungm/artifacts) is logged and versioned.
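One way to sanity-check a run like this, beyond the logged metrics, is to measure perplexity on a few held-out tweets. A minimal sketch (the held-out examples are placeholders):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("huggingtweets/mattiasinspace")
model = AutoModelForCausalLM.from_pretrained("huggingtweets/mattiasinspace")
model.eval()

heldout = ["a tweet the model never saw", "another held-out example"]
losses = []
for text in heldout:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # labels=input_ids makes the model return the LM cross-entropy loss
        losses.append(model(ids, labels=ids).loss)
print("perplexity:", torch.exp(torch.stack(losses).mean()).item())
```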
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mattiasinspace')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/eigenrobot-moridinamael | b1d19fc862520fe0e17991091e8421a38da57c95 | 2022-03-23T18:42:22.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/eigenrobot-moridinamael | 0 | null | transformers | 36,577 | ---
language: en
thumbnail: http://www.huggingtweets.com/eigenrobot-moridinamael/1648060937936/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/615582548010229761/0zg9awKn_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1492994204758278144/rDnqNReU_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI CYBORG π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Twisted Mentat Matt & eigenrobot</div>
<div style="text-align: center; font-size: 14px;">@eigenrobot-moridinamael</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Twisted Mentat Matt & eigenrobot.
| Data | Twisted Mentat Matt | eigenrobot |
| --- | --- | --- |
| Tweets downloaded | 3145 | 3247 |
| Retweets | 1670 | 119 |
| Short tweets | 230 | 651 |
| Tweets kept | 1245 | 2477 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3njfftkj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
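For two-account models like this one, the two timelines are merged into a single training corpus. A plausible sketch of that step is below; the shuffling and any per-user balancing are assumptions, not confirmed details of the pipeline:

```python
import random

eigenrobot_tweets = ["..."]    # placeholder; 2477 kept tweets in the real run
moridinamael_tweets = ["..."]  # placeholder; 1245 kept tweets in the real run

corpus = eigenrobot_tweets + moridinamael_tweets
random.shuffle(corpus)         # mix the two voices before fine-tuning
print(f"{len(corpus)} training examples")
```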
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @eigenrobot-moridinamael's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1nbxxa8l) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1nbxxa8l/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/eigenrobot-moridinamael')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/interrogami | a1f8046809b30fc31f3cc9fe11968bc02bb5dcad | 2022-03-23T19:41:31.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/interrogami | 0 | null | transformers | 36,578 | ---
language: en
thumbnail: http://www.huggingtweets.com/interrogami/1648064415193/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1502292592914046984/F1N4kjHh_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">interrobang</div>
<div style="text-align: center; font-size: 14px;">@interrogami</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from interrobang.
| Data | interrobang |
| --- | --- |
| Tweets downloaded | 1453 |
| Retweets | 20 |
| Short tweets | 139 |
| Tweets kept | 1294 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1awhdfgt/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @interrogami's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2ibo4fum) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2ibo4fum/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/interrogami')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/ryiacy | 9ad1c5e9bb4e5417f9c0509ea72e98330eacf171 | 2022-03-23T19:51:46.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/ryiacy | 0 | null | transformers | 36,579 | ---
language: en
thumbnail: http://www.huggingtweets.com/ryiacy/1648065062687/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1424813722011410434/73S-oYNT_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">cyriac</div>
<div style="text-align: center; font-size: 14px;">@ryiacy</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from cyriac.
| Data | cyriac |
| --- | --- |
| Tweets downloaded | 1050 |
| Retweets | 32 |
| Short tweets | 60 |
| Tweets kept | 958 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/26de85bt/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ryiacy's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2p7goxic) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2p7goxic/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ryiacy')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/thanksthoth | 772f89a0ef717ad4b0f463fdd1aab6cfec2be946 | 2022-03-23T20:22:11.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/thanksthoth | 0 | null | transformers | 36,580 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1477531697814011904/6OQ-pQZG_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Rod (ππ)</div>
<div style="text-align: center; font-size: 14px;">@thanksthoth</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Rod (ππ).
| Data | Rod (ππ) |
| --- | --- |
| Tweets downloaded | 3245 |
| Retweets | 154 |
| Short tweets | 693 |
| Tweets kept | 2398 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/pd014k0e/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @thanksthoth's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/tswc3hnf) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/tswc3hnf/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/thanksthoth')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
sparklyrainbows/DialoGPT-small-harrypotter | 94124b62699bd1ae38f66cbd38bf80f203992cac | 2022-03-23T21:43:29.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | sparklyrainbows | null | sparklyrainbows/DialoGPT-small-harrypotter | 0 | null | transformers | 36,581 | Entry not found |
negfir/bert_uncased_L-12_H-512_A-8 | 4e79fe73c8dc6679caa8352bae5725aba72d60ef | 2022-04-05T22:13:01.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/bert_uncased_L-12_H-512_A-8 | 0 | null | transformers | 36,582 | Entry not found |
huggingtweets/btohtoh-willitbetoomuch | 9bbaea2a6bd211d3363cf05a9eab5e20efe3bfc9 | 2022-03-24T02:06:47.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/btohtoh-willitbetoomuch | 0 | null | transformers | 36,583 | ---
language: en
thumbnail: http://www.huggingtweets.com/btohtoh-willitbetoomuch/1648087519902/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1506402743296020484/X79Yfcx5_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1488467916198539265/3pTy_Kr3_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI CYBORG π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">BToh & unloading</div>
<div style="text-align: center; font-size: 14px;">@btohtoh-willitbetoomuch</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from BToh & unloading.
| Data | BToh | unloading |
| --- | --- | --- |
| Tweets downloaded | 3241 | 85 |
| Retweets | 347 | 0 |
| Short tweets | 480 | 3 |
| Tweets kept | 2414 | 82 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2d3flykp/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @btohtoh-willitbetoomuch's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3lp51jew) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3lp51jew/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/btohtoh-willitbetoomuch')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
issue89/DialoGPT-small-house | e0e84860e909b99a5f3954e316a1fc57038a31ba | 2022-03-24T03:48:45.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | issue89 | null | issue89/DialoGPT-small-house | 0 | null | transformers | 36,584 | ---
tags:
- conversational
---
# House DialoGPT Model |
quincyqiang/chinese-roberta-wwm-ext | 54e43bd61d0885381fc266758278ef1a4fe89ed6 | 2022-03-24T04:58:07.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | quincyqiang | null | quincyqiang/chinese-roberta-wwm-ext | 0 | null | transformers | 36,585 | ---
license: apache-2.0
---
|
huggingtweets/iopred | 3b6d11c2b7ecc43854abf98f9f8426f5da997b2c | 2022-03-24T22:38:36.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/iopred | 0 | null | transformers | 36,586 | ---
language: en
thumbnail: http://www.huggingtweets.com/iopred/1648161500488/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/804464329202409472/_-74eUkS_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">diet dr. kit</div>
<div style="text-align: center; font-size: 14px;">@iopred</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from diet dr. kit.
| Data | diet dr. kit |
| --- | --- |
| Tweets downloaded | 3240 |
| Retweets | 177 |
| Short tweets | 258 |
| Tweets kept | 2805 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/52vmud4n/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @iopred's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2i464eff) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2i464eff/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/iopred')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/tariqnasheed | e673fd9cdcd8b60175aab3b284e9ac8e9ecd8c6f | 2022-03-24T08:54:50.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/tariqnasheed | 0 | null | transformers | 36,587 | ---
language: en
thumbnail: http://www.huggingtweets.com/tariqnasheed/1648112086220/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1506809010988539910/bBCRvJ4K_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Tariq Nasheed πΊπΈ</div>
<div style="text-align: center; font-size: 14px;">@tariqnasheed</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Tariq Nasheed 🇺🇸.
| Data | Tariq Nasheed 🇺🇸 |
| --- | --- |
| Tweets downloaded | 3235 |
| Retweets | 273 |
| Short tweets | 396 |
| Tweets kept | 2566 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/f1jq7tem/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tariqnasheed's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2dn7iubq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2dn7iubq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/tariqnasheed')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/kytalli-vi0linheart | bd0faba430abf54cd876e82f3835418ce4877891 | 2022-03-24T09:38:01.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/kytalli-vi0linheart | 0 | null | transformers | 36,588 | ---
language: en
thumbnail: http://www.huggingtweets.com/kytalli-vi0linheart/1648114676311/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1500859213622300673/izXwf0KK_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1376749372831002627/2B9FZTnI_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI CYBORG π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">sal & G</div>
<div style="text-align: center; font-size: 14px;">@kytalli-vi0linheart</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from sal & G.
| Data | sal | G |
| --- | --- | --- |
| Tweets downloaded | 3114 | 3249 |
| Retweets | 421 | 55 |
| Short tweets | 541 | 226 |
| Tweets kept | 2152 | 2968 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1tj76wad/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @kytalli-vi0linheart's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1a1bludi) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1a1bludi/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/kytalli-vi0linheart')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/madeleine | 7586f4090ee9c321c375970b419d4c10703ac135 | 2022-03-24T09:38:39.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/madeleine | 0 | null | transformers | 36,589 | ---
language: en
thumbnail: http://www.huggingtweets.com/madeleine/1648114714373/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1227670393453936642/6rdB_DqU_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Madeleine Albright</div>
<div style="text-align: center; font-size: 14px;">@madeleine</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Madeleine Albright.
| Data | Madeleine Albright |
| --- | --- |
| Tweets downloaded | 1111 |
| Retweets | 249 |
| Short tweets | 3 |
| Tweets kept | 859 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2a3z3e8y/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @madeleine's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2q01k6dh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2q01k6dh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/madeleine')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/vi0linheart | a405f60b1b4f15025ad4f25f2b610463ded90208 | 2022-03-24T10:11:28.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/vi0linheart | 0 | null | transformers | 36,590 | ---
language: en
thumbnail: http://www.huggingtweets.com/vi0linheart/1648116634962/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1500859213622300673/izXwf0KK_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">sal</div>
<div style="text-align: center; font-size: 14px;">@vi0linheart</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from sal.
| Data | sal |
| --- | --- |
| Tweets downloaded | 3114 |
| Retweets | 421 |
| Short tweets | 541 |
| Tweets kept | 2152 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/21y9qo98/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @vi0linheart's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3t019c6m) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3t019c6m/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/vi0linheart')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/rronigj | bfee78bd061fce8f33e65629f3e9459ef26dbd1c | 2022-03-24T12:47:01.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/rronigj | 0 | null | transformers | 36,591 | ---
language: en
thumbnail: http://www.huggingtweets.com/rronigj/1648126016294/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1251916496307175424/rFilH506_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Rron Gjinovci</div>
<div style="text-align: center; font-size: 14px;">@rronigj</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Rron Gjinovci.
| Data | Rron Gjinovci |
| --- | --- |
| Tweets downloaded | 173 |
| Retweets | 45 |
| Short tweets | 24 |
| Tweets kept | 104 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/33ceg6s6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @rronigj's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3nokbt1r) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3nokbt1r/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/rronigj')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
negfir/bert_uncased_L-10_H-768_A-12 | 2ca221427dbe1605765307e3fb44eebf9d1fe247 | 2022-04-05T23:33:07.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/bert_uncased_L-10_H-768_A-12 | 0 | null | transformers | 36,592 | Entry not found |
huggingtweets/untiltrees | 345f74628fdda66d019e784199b235edb8db07f8 | 2022-03-24T16:08:51.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/untiltrees | 0 | null | transformers | 36,593 | ---
language: en
thumbnail: http://www.huggingtweets.com/untiltrees/1648138126631/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1350186722596974593/lANAV_Xj_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Dancing Box</div>
<div style="text-align: center; font-size: 14px;">@untiltrees</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Dancing Box.
| Data | Dancing Box |
| --- | --- |
| Tweets downloaded | 994 |
| Retweets | 41 |
| Short tweets | 91 |
| Tweets kept | 862 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/36kia24g/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @untiltrees's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/8md8jogv) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/8md8jogv/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/untiltrees')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/janieclone-wretched_worm | 40a44774f610da6c3bfd701071a75ebc0b018a8e | 2022-03-24T16:50:55.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/janieclone-wretched_worm | 0 | null | transformers | 36,594 | ---
language: en
thumbnail: http://www.huggingtweets.com/janieclone-wretched_worm/1648140650284/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1478043369578266624/vWL3TXE0_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1504460028270501895/uqbdF11C_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">wretched worm & Columbine Janie</div>
<div style="text-align: center; font-size: 14px;">@janieclone-wretched_worm</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from wretched worm & Columbine Janie.
| Data | wretched worm | Columbine Janie |
| --- | --- | --- |
| Tweets downloaded | 3226 | 544 |
| Retweets | 313 | 197 |
| Short tweets | 572 | 60 |
| Tweets kept | 2341 | 287 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3jmx6vuf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @janieclone-wretched_worm's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/kpqts6sn) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/kpqts6sn/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/janieclone-wretched_worm')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
pere/tt5-base | a63a43f839e6e4449541329ec960e1bc819119e9 | 2022-03-24T20:53:03.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | pere | null | pere/tt5-base | 0 | null | transformers | 36,595 | Entry not found |
pere/tt5-3B | 544a050ba01cb72bfb70efb3b5dc05811ad9ab27 | 2022-03-24T20:55:28.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | pere | null | pere/tt5-3B | 0 | null | transformers | 36,596 | Entry not found |
vumichien/albert-base-v2 | 30da5ca6ce61f6ddc66e33b979ed5935bbe7cda0 | 2022-03-25T00:30:34.000Z | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | vumichien | null | vumichien/albert-base-v2 | 0 | null | transformers | 36,597 | Entry not found |
Jezia/pytorch-pretrained-BigGAN | d2036299cae6f42dec12156892e38480d62af49b | 2022-03-25T10:53:53.000Z | [
"dataset:ImageNet",
"pytorch",
"biggan",
"license:apache-2.0"
] | null | false | Jezia | null | Jezia/pytorch-pretrained-BigGAN | 0 | null | pytorch | 36,598 | ---
license: apache-2.0
library_name: pytorch
tags:
- biggan
datasets:
- ImageNet
---
## Model description
This is an op-for-op PyTorch reimplementation of DeepMind's BigGAN model with the pre-trained weights from DeepMind [biggan-deep-128](https://tfhub.dev/deepmind/biggan-deep-128/1).
## Training and evaluation data
The model is trained on the [ImageNet dataset](https://tfhub.dev/s?dataset=imagenet-ilsvrc-2012-cls), which consists of 1,000 classes. All images are resized to 64 × 64 for the sake of convenience. The model takes noise as input and uses Conv2DTranspose layers for upsampling. The output resolution is 128, 256, or 512 pixels depending on the model variant.
## How to use this model
You can use this model to generate new images.
```python
import torch
from pytorch_pretrained_biggan import (BigGAN, one_hot_from_names,
                                       truncated_noise_sample,
                                       save_as_images, display_in_terminal)

model = BigGAN.from_pretrained('biggan-deep-256')
```
You can then generate images by sampling a truncated noise vector and a one-hot class vector with the helpers imported above:
```python
truncation = 0.4
noise_vector = torch.from_numpy(truncated_noise_sample(truncation=truncation, batch_size=1))
class_vector = torch.from_numpy(one_hot_from_names(['golden retriever'], batch_size=1))

with torch.no_grad():
    output = model(noise_vector, class_vector, truncation)
```
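The helpers imported above can also persist the generated batch. A minimal sketch; the exact output file names follow the helper's default `output` prefix, which is an assumption worth checking against the installed package version:
```python
output = output.to('cpu')  # move off the GPU if one was used
save_as_images(output)     # writes the batch as PNG files using the default 'output' prefix
```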
## Intended use and biases
This model is not intended for production.
### Generated images

### Credits
@thomwolf
Thomas Wolf
@vfdev-5
vfdev |
scasutt/wav2vec2-base_toy_train_data_augment_0.1.csv | c6dfdd82117b962619af359e515c6a8395f34813 | 2022-03-25T11:45:10.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | scasutt | null | scasutt/wav2vec2-base_toy_train_data_augment_0.1.csv | 0 | null | transformers | 36,599 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base_toy_train_data_augment_0.1.csv
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base_toy_train_data_augment_0.1.csv
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset (recorded as `None` by the automatic card generation).
It achieves the following results on the evaluation set:
- Loss: 2.3933
- Wer: 0.9997
## Model description
More information needed
## Intended uses & limitations
More information needed
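Until the card is completed, here is a minimal usage sketch with the standard ASR pipeline; the audio path is a placeholder, and given the evaluation WER near 1.0, transcriptions are unlikely to be usable:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="scasutt/wav2vec2-base_toy_train_data_augment_0.1.csv")
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```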
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 4
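For reproduction purposes, these settings map onto `transformers.TrainingArguments` roughly as sketched below; this is not the exact training script, and the output directory name is an assumption:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-base_toy_train",  # illustrative name
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # effective batch size of 16
    warmup_steps=1000,
    lr_scheduler_type="linear",
    num_train_epochs=4,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the optimizer default
)
```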
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.2787 | 0.84 | 200 | 3.5920 | 1.0 |
| 3.0613 | 1.68 | 400 | 3.4069 | 1.0 |
| 3.0481 | 2.52 | 600 | 3.4811 | 1.0 |
| 2.896 | 3.36 | 800 | 2.3933 | 0.9997 |
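The metric above is word error rate (WER). With the Datasets version listed under framework versions, it can be computed as in this sketch; the hypothesis/reference strings are illustrative:
```python
from datasets import load_metric  # the "wer" metric also requires the jiwer package

wer_metric = load_metric("wer")
# one substitution across two reference words -> WER of 0.5
print(wer_metric.compute(predictions=["hello world"], references=["hello word"]))
```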
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|