modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
erickfm/t5-small-finetuned-bias-v8 | 7abddfbe4f6bf8bbf0d7615361d775192ec15980 | 2022-06-07T21:38:20.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | erickfm | null | erickfm/t5-small-finetuned-bias-v8 | 1 | null | transformers | 32,700 | Entry not found |
erickfm/t5-small-finetuned-bias-sweep-08544cdb | 0bd29a22b264930c1c923f2430219d021f792235 | 2022-06-07T21:45:21.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | erickfm | null | erickfm/t5-small-finetuned-bias-sweep-08544cdb | 1 | null | transformers | 32,701 | Entry not found |
simonnedved/bert-seg-with-cf | a877c7cf2674cd7fcf7cd74f762871fed0c69cb5 | 2022-06-08T00:34:12.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | simonnedved | null | simonnedved/bert-seg-with-cf | 1 | null | transformers | 32,702 | ---
license: apache-2.0
---
|
twieland/VN_ja-en_byt5 | 9af0576322d906731d1446bec13b3758d77a6451 | 2022-06-08T01:42:43.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | twieland | null | twieland/VN_ja-en_byt5 | 1 | null | transformers | 32,703 | Entry not found |
Lekshmiprabha/opus-mt-en-ro-finetuned-en-to-ro | 7a1a290f4b192e7e3949fc546ffafd16fdc37af8 | 2022-06-08T03:36:47.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Lekshmiprabha | null | Lekshmiprabha/opus-mt-en-ro-finetuned-en-to-ro | 1 | null | transformers | 32,704 | Entry not found |
erickfm/t5-small-finetuned-bias-sweep-cb55d551 | 74e3a07861d187a086937026aa857d9a4a30f40d | 2022-06-08T01:30:00.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | erickfm | null | erickfm/t5-small-finetuned-bias-sweep-cb55d551 | 1 | null | transformers | 32,705 | Entry not found |
twieland/VN_ja-en_byt5_small | aa3faf6eefff530eea94e6d6447e2280f0b8627b | 2022-06-08T14:53:19.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | twieland | null | twieland/VN_ja-en_byt5_small | 1 | null | transformers | 32,706 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: VN_ja-en_byt5_small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# VN_ja-en_byt5_small
This model is a fine-tuned version of [google/byt5-small](https://huggingface.co/google/byt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0552
## Model description
More information needed
## Intended uses & limitations
More information needed
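Usage is not documented in this card; as a rough illustration (not the author's own example), a ByT5 seq2seq checkpoint like this one can usually be loaded with the standard classes. The sketch below assumes the model takes raw Japanese input with no task prefix:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "twieland/VN_ja-en_byt5_small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# ByT5 works directly on bytes, so no language-specific pre-tokenization is needed.
inputs = tokenizer("こんにちは、元気ですか?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```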
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
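As a rough guide for reproduction, the listed values map onto `Seq2SeqTrainingArguments` roughly as in the sketch below (the output directory is hypothetical, and the Adam betas/epsilon shown are the Trainer defaults):
```python
from transformers import Seq2SeqTrainingArguments

# Approximate reconstruction of the hyperparameters listed above.
args = Seq2SeqTrainingArguments(
    output_dir="VN_ja-en_byt5_small",   # hypothetical
    learning_rate=3e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
# These args would then be passed to a Seq2SeqTrainer together with the model,
# tokenizer, and the (unpublished) training and evaluation datasets.
print(args.learning_rate, args.num_train_epochs)
```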
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.1687 | 0.1 | 2000 | 1.1805 |
| 0.9685 | 0.19 | 4000 | 1.1384 |
| 0.8989 | 0.29 | 6000 | 1.1207 |
| 0.8583 | 0.39 | 8000 | 1.1046 |
| 0.833 | 0.49 | 10000 | 1.1290 |
| 0.8102 | 0.58 | 12000 | 1.1225 |
| 0.7932 | 0.68 | 14000 | 1.0956 |
| 0.7776 | 0.78 | 16000 | 1.0970 |
| 0.762 | 0.88 | 18000 | 1.0992 |
| 0.7522 | 0.97 | 20000 | 1.0760 |
| 0.7318 | 1.07 | 22000 | 1.0579 |
| 0.7197 | 1.17 | 24000 | 1.0780 |
| 0.7142 | 1.27 | 26000 | 1.0748 |
| 0.7093 | 1.36 | 28000 | 1.0781 |
| 0.7005 | 1.46 | 30000 | 1.0756 |
| 0.6938 | 1.56 | 32000 | 1.0702 |
| 0.6896 | 1.65 | 34000 | 1.0563 |
| 0.6846 | 1.75 | 36000 | 1.0603 |
| 0.6807 | 1.85 | 38000 | 1.0626 |
| 0.6766 | 1.95 | 40000 | 1.0666 |
| 0.6649 | 2.04 | 42000 | 1.0694 |
| 0.6532 | 2.14 | 44000 | 1.0564 |
| 0.6501 | 2.24 | 46000 | 1.0715 |
| 0.6476 | 2.34 | 48000 | 1.0551 |
| 0.646 | 2.43 | 50000 | 1.0601 |
| 0.6445 | 2.53 | 52000 | 1.0595 |
| 0.6404 | 2.63 | 54000 | 1.0494 |
| 0.6378 | 2.72 | 56000 | 1.0584 |
| 0.636 | 2.82 | 58000 | 1.0531 |
| 0.6345 | 2.92 | 60000 | 1.0552 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
huggingtweets/_pancagkes | 7a55e82fbe3ad9e9ae72a665c25294dc7b5a7367 | 2022-06-08T02:40:03.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/_pancagkes | 1 | null | transformers | 32,707 | ---
language: en
thumbnail: http://www.huggingtweets.com/_pancagkes/1654655985301/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1525194520970899457/uqCAbAl__400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">carlala</div>
<div style="text-align: center; font-size: 14px;">@_pancagkes</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from carlala.
| Data | carlala |
| --- | --- |
| Tweets downloaded | 3096 |
| Retweets | 2299 |
| Short tweets | 253 |
| Tweets kept | 544 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/w3ejvw24/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @_pancagkes's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1e8xcsmm) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1e8xcsmm/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/_pancagkes')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
erickfm/t5-small-finetuned-bias-sweep-c649f8e9 | 9cb389d652ab22b62b7f4b5c47a2d08a1c2824a6 | 2022-06-08T03:19:37.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | erickfm | null | erickfm/t5-small-finetuned-bias-sweep-c649f8e9 | 1 | null | transformers | 32,708 | Entry not found |
erickfm/t5-small-finetuned-bias-sweep-85ba4637 | 17c81f2826c3cd3f30aa0ec6916122101aad8dff | 2022-06-08T03:42:03.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | erickfm | null | erickfm/t5-small-finetuned-bias-sweep-85ba4637 | 1 | null | transformers | 32,709 | Entry not found |
nloc2578/3rd | c68e7be87d8cadbf07f49f5f8af6d4a32af706fe | 2022-06-08T09:03:38.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | nloc2578 | null | nloc2578/3rd | 1 | null | transformers | 32,710 | ---
tags:
- generated_from_trainer
model-index:
- name: 3rd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 3rd
This model is a fine-tuned version of [google/pegasus-xsum](https://huggingface.co/google/pegasus-xsum) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8129
## Model description
More information needed
## Intended uses & limitations
More information needed
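Usage is not documented here; as a rough illustration, the checkpoint can typically be called through the summarization pipeline. A minimal sketch with a placeholder input:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="nloc2578/3rd")
text = "Replace this with the kind of document the model was fine-tuned to summarize."
print(summarizer(text, max_length=64, min_length=8, do_sample=False))
```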
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0015
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.1114 | 0.18 | 1500 | 3.0346 |
| 3.0808 | 0.36 | 3000 | 2.9687 |
| 2.9443 | 0.54 | 4500 | 2.9548 |
| 2.9606 | 0.72 | 6000 | 2.8818 |
| 2.9475 | 0.9 | 7500 | 2.8668 |
| 2.4882 | 1.08 | 9000 | 2.8979 |
| 2.5669 | 1.26 | 10500 | 2.8673 |
| 2.5047 | 1.44 | 12000 | 2.8176 |
| 2.5524 | 1.62 | 13500 | 2.8458 |
| 2.5275 | 1.8 | 15000 | 2.7372 |
| 2.4982 | 1.98 | 16500 | 2.7297 |
| 1.9936 | 2.16 | 18000 | 2.7922 |
| 2.0063 | 2.34 | 19500 | 2.7160 |
| 1.9143 | 2.52 | 21000 | 2.7135 |
| 1.9644 | 2.7 | 22500 | 2.6860 |
| 1.9235 | 2.88 | 24000 | 2.6462 |
| 1.381 | 3.06 | 25500 | 2.8203 |
| 1.3569 | 3.24 | 27000 | 2.8321 |
| 1.4043 | 3.42 | 28500 | 2.8262 |
| 1.365 | 3.6 | 30000 | 2.8376 |
| 1.3719 | 3.78 | 31500 | 2.8236 |
| 1.3408 | 3.96 | 33000 | 2.8129 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1
|
PSW/samsum_reverse_train_distilbart_xsum_9-6_min10max2000_topp0.9_topk50_epoch3 | 0258aebb167f27bc4b46f4aa5cd521831ec3c879 | 2022-06-08T06:18:28.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/samsum_reverse_train_distilbart_xsum_9-6_min10max2000_topp0.9_topk50_epoch3 | 1 | null | transformers | 32,711 | Entry not found |
PSW/samsum_reverse_train_distilbart_xsum_9-6_min10max2000_topp0.9_topk40_epoch3 | 1c9bf0b42e118a9bfb69186639dc29f590ae06df | 2022-06-08T07:46:29.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/samsum_reverse_train_distilbart_xsum_9-6_min10max2000_topp0.9_topk40_epoch3 | 1 | null | transformers | 32,712 | Entry not found |
PSW/samsum_reverse_train_distilbart_xsum_9-6_min10max2000_topp0.9_topk30_epoch3 | 8fe35f1b571210d57c7b61bd7974b653e80211a7 | 2022-06-08T09:15:39.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/samsum_reverse_train_distilbart_xsum_9-6_min10max2000_topp0.9_topk30_epoch3 | 1 | null | transformers | 32,713 | Entry not found |
Jawaher/Covid19-fake-news-bert-uncased | d1ee887678274c7ce5315856fbe3fe384b958aee | 2022-06-08T11:02:09.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Jawaher | null | Jawaher/Covid19-fake-news-bert-uncased | 1 | null | transformers | 32,714 | Domain adaptation is the process of fine-tuning pre-trained language models (PLMs) on domain-specific datasets to produce predictions that are better suited to the new datasets. Here, we re-train the BERT-base-uncased model on an unlabelled COVID-19 fake news dataset (Constraint@AAAI2021) using the masked language modeling (MLM) objective, where 15% of input text is masked, and the model is expected to predict the masked tokens.
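A minimal sketch of this kind of MLM re-training with the Hugging Face Trainer is shown below; the corpus file name is hypothetical and the exact preprocessing used for this model is not documented here:
```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Hypothetical unlabelled COVID-19 fake-news corpus, one document per line.
dataset = load_dataset("text", data_files={"train": "covid_fake_news.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"])

# 15% of input tokens are masked dynamically at each training step.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="covid19-fake-news-bert-mlm"),
    train_dataset=tokenized,
    data_collator=collator,
)
# trainer.train()
```
|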
huggingtweets/conspiracymill | 298d9834b2948c961c7b91d33da0047899709855 | 2022-06-08T10:46:08.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/conspiracymill | 1 | null | transformers | 32,715 | ---
language: en
thumbnail: http://www.huggingtweets.com/conspiracymill/1654685163989/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1447765226376638469/EuvZlKan_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Conspiracy Mill</div>
<div style="text-align: center; font-size: 14px;">@conspiracymill</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Conspiracy Mill.
| Data | Conspiracy Mill |
| --- | --- |
| Tweets downloaded | 3196 |
| Retweets | 626 |
| Short tweets | 869 |
| Tweets kept | 1701 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2yowpn7j/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @conspiracymill's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/39srf3ca) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/39srf3ca/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/conspiracymill')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
roscazo/covid-model | b6abd1c83ad9653db3800bf9b35f5392c1c0de98 | 2022-06-08T11:11:58.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | roscazo | null | roscazo/covid-model | 1 | null | transformers | 32,716 | Entry not found |
PSW/samsum_reverse_train_distilbart_xsum_9-6_min10max2000_topp0.8_topk50_epoch3 | 55657bffc8024c86eebbcb4aafffa6e2013bbd5d | 2022-06-08T11:52:19.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/samsum_reverse_train_distilbart_xsum_9-6_min10max2000_topp0.8_topk50_epoch3 | 1 | null | transformers | 32,717 | Entry not found |
oftshsl/t5_ua_gec | 72896eff252e0b91b0503fd60e2635716d2e2a59 | 2022-06-08T13:37:47.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:other",
"autotrain_compatible"
] | text2text-generation | false | oftshsl | null | oftshsl/t5_ua_gec | 1 | null | transformers | 32,718 | ---
license: other
---
|
ctoraman/RoBERTweetTurkCovid | f1b27a1cea91de913cd8ff10225d50151d6538a8 | 2022-06-19T14:25:58.000Z | [
"pytorch",
"roberta",
"fill-mask",
"tr",
"transformers",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | ctoraman | null | ctoraman/RoBERTweetTurkCovid | 1 | null | transformers | 32,719 | ---
language:
- tr
tags:
- roberta
license: cc-by-nc-sa-4.0
---
# RoBERTweetTurkCovid (uncased)
Pretrained model on Turkish language using a masked language modeling (MLM) objective. The model is uncased.
The pretrained corpus is a Turkish tweets collection related to COVID-19.
Model architecture is similar to RoBERTa-base (12 layers, 12 heads, and 768 hidden size). Tokenization algorithm is WordPiece. Vocabulary size is 30k.
The details of pretraining can be found in the following paper:
```bibtex
@InProceedings{clef-checkthat:2022:task1:oguzhan,
author = {Cagri Toraman and Oguzhan Ozcelik and Furkan Şahinuç and Umitcan Sahin},
title = "{ARC-NLP at CheckThat! 2022:} Contradiction for Harmful Tweet Detection",
year = {2022},
booktitle = "Working Notes of {CLEF} 2022 - Conference and Labs of the Evaluation Forum",
editor = {Faggioli, Guglielmo and Ferro, Nicola and Hanbury, Allan and Potthast, Martin},
series = {CLEF~'2022},
address = {Bologna, Italy},
}
```
The following code can be used for model loading and tokenization; the example max length (768) can be changed:
```python
from transformers import AutoModel, AutoModelForSequenceClassification, PreTrainedTokenizerFast

model = AutoModel.from_pretrained([model_path])
# for sequence classification:
# model = AutoModelForSequenceClassification.from_pretrained([model_path], num_labels=[num_classes])
tokenizer = PreTrainedTokenizerFast(tokenizer_file=[file_path])
tokenizer.mask_token = "[MASK]"
tokenizer.cls_token = "[CLS]"
tokenizer.sep_token = "[SEP]"
tokenizer.pad_token = "[PAD]"
tokenizer.unk_token = "[UNK]"
tokenizer.bos_token = "[CLS]"
tokenizer.eos_token = "[SEP]"
tokenizer.model_max_length = 768
```
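As a quick usage illustration, masked-token prediction can also be exercised through the fill-mask pipeline; this sketch assumes the repository's tokenizer resolves automatically (otherwise pass the manually constructed tokenizer from the snippet above), and the Turkish example sentence is purely illustrative:
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="ctoraman/RoBERTweetTurkCovid")
# Print the top candidate tokens for the masked position.
for candidate in fill("Maske takmak virüse karşı [MASK] sağlar."):
    print(candidate["token_str"], round(candidate["score"], 3))
```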
### BibTeX entry and citation info
```bibtex
@InProceedings{clef-checkthat:2022:task1:oguzhan,
author = {Cagri Toraman and Oguzhan Ozcelik and Furkan Şahinuç and Umitcan Sahin},
title = "{ARC-NLP at CheckThat! 2022:} Contradiction for Harmful Tweet Detection",
year = {2022},
booktitle = "Working Notes of {CLEF} 2022 - Conference and Labs of the Evaluation Forum",
editor = {Faggioli, Guglielmo and Ferro, Nicola and Hanbury, Allan and Potthast, Martin},
series = {CLEF~'2022},
address = {Bologna, Italy},
}
```
|
PSW/samsum_reverse_train_distilbart_xsum_9-6_min10max2000_topp0.8_topk40_epoch3 | b55ee0308a895e49de7b10e5826136bdcf2f47a8 | 2022-06-08T13:21:35.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/samsum_reverse_train_distilbart_xsum_9-6_min10max2000_topp0.8_topk40_epoch3 | 1 | null | transformers | 32,720 | Entry not found |
FabianWillner/distilbert-base-uncased-finetuned-triviaqa-finetuned-squad | ef03bbbe3920c502559a8c3e4b8749fc9eac824d | 2022-06-08T15:46:03.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | FabianWillner | null | FabianWillner/distilbert-base-uncased-finetuned-triviaqa-finetuned-squad | 1 | null | transformers | 32,721 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-triviaqa-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-triviaqa-finetuned-squad
This model is a fine-tuned version of [FabianWillner/distilbert-base-uncased-finetuned-triviaqa](https://huggingface.co/FabianWillner/distilbert-base-uncased-finetuned-triviaqa) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1417
## Model description
More information needed
## Intended uses & limitations
More information needed
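Usage is not documented here; as a rough illustration, the model can be queried with the question-answering pipeline. A minimal sketch with a made-up context:
```python
from transformers import pipeline

qa = pipeline("question-answering",
              model="FabianWillner/distilbert-base-uncased-finetuned-triviaqa-finetuned-squad")
result = qa(question="Which dataset was the model fine-tuned on last?",
            context="The checkpoint was first trained on TriviaQA and then fine-tuned on SQuAD.")
print(result["answer"], result["score"])
```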
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2153 | 1.0 | 5533 | 1.1555 |
| 0.9614 | 2.0 | 11066 | 1.1417 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
cutten/wav2vec2-large-multilang-cv-ru-night | 4ae74601571b5fd85b938486fb4e05509ac8846a | 2022-06-08T19:58:05.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | cutten | null | cutten/wav2vec2-large-multilang-cv-ru-night | 1 | null | transformers | 32,722 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-multilang-cv-ru-night
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-multilang-cv-ru-night
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6617
- Wer: 0.5097
## Model description
More information needed
## Intended uses & limitations
More information needed
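Inference is not documented in this card; the sketch below shows plain CTC greedy decoding with this checkpoint, assuming a hypothetical 16 kHz mono recording `sample.wav`:
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "cutten/wav2vec2-large-multilang-cv-ru-night"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Hypothetical input file; the model expects 16 kHz mono audio.
speech, sample_rate = torchaudio.load("sample.wav")
speech = torchaudio.functional.resample(speech, sample_rate, 16_000).squeeze()

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```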
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 8.725 | 1.58 | 500 | 3.2788 | 1.0 |
| 3.1184 | 3.15 | 1000 | 2.4018 | 1.0015 |
| 1.2393 | 4.73 | 1500 | 0.6213 | 0.7655 |
| 0.6899 | 6.31 | 2000 | 0.5518 | 0.6811 |
| 0.5532 | 7.89 | 2500 | 0.5102 | 0.6467 |
| 0.4604 | 9.46 | 3000 | 0.4887 | 0.6213 |
| 0.4095 | 11.04 | 3500 | 0.4874 | 0.6042 |
| 0.3565 | 12.62 | 4000 | 0.4810 | 0.5893 |
| 0.3238 | 14.2 | 4500 | 0.5028 | 0.5890 |
| 0.3011 | 15.77 | 5000 | 0.5475 | 0.5808 |
| 0.2827 | 17.35 | 5500 | 0.5289 | 0.5720 |
| 0.2659 | 18.93 | 6000 | 0.5496 | 0.5733 |
| 0.2445 | 20.5 | 6500 | 0.5354 | 0.5737 |
| 0.2366 | 22.08 | 7000 | 0.5357 | 0.5686 |
| 0.2181 | 23.66 | 7500 | 0.5491 | 0.5611 |
| 0.2146 | 25.24 | 8000 | 0.5591 | 0.5597 |
| 0.2006 | 26.81 | 8500 | 0.5625 | 0.5631 |
| 0.1912 | 28.39 | 9000 | 0.5577 | 0.5647 |
| 0.1821 | 29.97 | 9500 | 0.5684 | 0.5519 |
| 0.1744 | 31.55 | 10000 | 0.5639 | 0.5551 |
| 0.1691 | 33.12 | 10500 | 0.5596 | 0.5425 |
| 0.1577 | 34.7 | 11000 | 0.5770 | 0.5551 |
| 0.1522 | 36.28 | 11500 | 0.5634 | 0.5560 |
| 0.1468 | 37.85 | 12000 | 0.5815 | 0.5453 |
| 0.1508 | 39.43 | 12500 | 0.6053 | 0.5490 |
| 0.1394 | 41.01 | 13000 | 0.6193 | 0.5504 |
| 0.1291 | 42.59 | 13500 | 0.5930 | 0.5424 |
| 0.1345 | 44.16 | 14000 | 0.6283 | 0.5442 |
| 0.1296 | 45.74 | 14500 | 0.6063 | 0.5560 |
| 0.1286 | 47.32 | 15000 | 0.6248 | 0.5378 |
| 0.1231 | 48.9 | 15500 | 0.6106 | 0.5405 |
| 0.1189 | 50.47 | 16000 | 0.6164 | 0.5342 |
| 0.1127 | 52.05 | 16500 | 0.6269 | 0.5359 |
| 0.112 | 53.63 | 17000 | 0.6170 | 0.5390 |
| 0.1113 | 55.21 | 17500 | 0.6489 | 0.5385 |
| 0.1023 | 56.78 | 18000 | 0.6826 | 0.5490 |
| 0.1069 | 58.36 | 18500 | 0.6147 | 0.5296 |
| 0.1008 | 59.94 | 19000 | 0.6414 | 0.5332 |
| 0.1018 | 61.51 | 19500 | 0.6454 | 0.5288 |
| 0.0989 | 63.09 | 20000 | 0.6603 | 0.5303 |
| 0.0944 | 64.67 | 20500 | 0.6350 | 0.5288 |
| 0.0905 | 66.25 | 21000 | 0.6386 | 0.5247 |
| 0.0837 | 67.82 | 21500 | 0.6563 | 0.5298 |
| 0.0868 | 69.4 | 22000 | 0.6375 | 0.5208 |
| 0.0827 | 70.98 | 22500 | 0.6401 | 0.5271 |
| 0.0797 | 72.56 | 23000 | 0.6723 | 0.5191 |
| 0.0847 | 74.13 | 23500 | 0.6610 | 0.5213 |
| 0.0818 | 75.71 | 24000 | 0.6774 | 0.5254 |
| 0.0793 | 77.29 | 24500 | 0.6543 | 0.5250 |
| 0.0758 | 78.86 | 25000 | 0.6607 | 0.5218 |
| 0.0755 | 80.44 | 25500 | 0.6599 | 0.5160 |
| 0.0722 | 82.02 | 26000 | 0.6683 | 0.5196 |
| 0.0714 | 83.6 | 26500 | 0.6941 | 0.5180 |
| 0.0684 | 85.17 | 27000 | 0.6581 | 0.5167 |
| 0.0686 | 86.75 | 27500 | 0.6651 | 0.5172 |
| 0.0712 | 88.33 | 28000 | 0.6547 | 0.5208 |
| 0.0697 | 89.91 | 28500 | 0.6555 | 0.5162 |
| 0.0696 | 91.48 | 29000 | 0.6678 | 0.5107 |
| 0.0686 | 93.06 | 29500 | 0.6630 | 0.5124 |
| 0.0671 | 94.64 | 30000 | 0.6675 | 0.5143 |
| 0.0668 | 96.21 | 30500 | 0.6602 | 0.5107 |
| 0.0666 | 97.79 | 31000 | 0.6611 | 0.5097 |
| 0.0664 | 99.37 | 31500 | 0.6617 | 0.5097 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
PSW/samsum_reverse_train_distilbart_xsum_9-6_min10max2000_topp0.8_topk30_epoch3 | 8994790a59f71d5b53511c7cb0c9fef4dcf74b2d | 2022-06-08T14:52:01.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/samsum_reverse_train_distilbart_xsum_9-6_min10max2000_topp0.8_topk30_epoch3 | 1 | null | transformers | 32,723 | Entry not found |
erickfm/t5-base-finetuned-bias-sweep-82cfb803 | 2b5ad37f21b7d5a0d292f20a77a2e270a2eaadfc | 2022-06-08T15:43:13.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | erickfm | null | erickfm/t5-base-finetuned-bias-sweep-82cfb803 | 1 | null | transformers | 32,724 | Entry not found |
PSW/samsum_reverse_train_distilbart_xsum_9-6_min10max2000_topp0.8_topk20_epoch3 | 1772f1783c999e7c0f486c74353ff46339549051 | 2022-06-08T16:21:13.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/samsum_reverse_train_distilbart_xsum_9-6_min10max2000_topp0.8_topk20_epoch3 | 1 | null | transformers | 32,725 | Entry not found |
Vkt/model-960hfacebook-2022.06.08 | fe20f0ff3050b6afa618508c3bb90aa148fe8e0c | 2022-06-15T18:17:56.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Vkt | null | Vkt/model-960hfacebook-2022.06.08 | 1 | null | transformers | 32,726 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: model-960hfacebook-2022.06.08
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model-960hfacebook-2022.06.08
This model is a fine-tuned version of [facebook/wav2vec2-large-960h-lv60-self](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2907
- Wer: 0.1804
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 25
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 6.7634 | 0.21 | 300 | 2.9743 | 0.9998 |
| 1.6536 | 0.43 | 600 | 0.8605 | 0.7529 |
| 0.9823 | 0.64 | 900 | 0.6600 | 0.6286 |
| 0.8708 | 0.86 | 1200 | 0.5780 | 0.5736 |
| 0.7878 | 1.07 | 1500 | 0.5386 | 0.5326 |
| 0.7033 | 1.29 | 1800 | 0.4986 | 0.4992 |
| 0.681 | 1.5 | 2100 | 0.4575 | 0.4778 |
| 0.6537 | 1.72 | 2400 | 0.4591 | 0.4482 |
| 0.6263 | 1.93 | 2700 | 0.4317 | 0.4353 |
| 0.5811 | 2.14 | 3000 | 0.4149 | 0.4159 |
| 0.5565 | 2.36 | 3300 | 0.4170 | 0.3956 |
| 0.5501 | 2.57 | 3600 | 0.4007 | 0.3929 |
| 0.5444 | 2.79 | 3900 | 0.3930 | 0.3851 |
| 0.5177 | 3.0 | 4200 | 0.4006 | 0.3630 |
| 0.4682 | 3.22 | 4500 | 0.3707 | 0.3713 |
| 0.4805 | 3.43 | 4800 | 0.3564 | 0.3583 |
| 0.4715 | 3.65 | 5100 | 0.3596 | 0.3434 |
| 0.4482 | 3.86 | 5400 | 0.3555 | 0.3394 |
| 0.4407 | 4.07 | 5700 | 0.3680 | 0.3312 |
| 0.4134 | 4.29 | 6000 | 0.3534 | 0.3328 |
| 0.4165 | 4.5 | 6300 | 0.3294 | 0.3259 |
| 0.4196 | 4.72 | 6600 | 0.3353 | 0.3214 |
| 0.4117 | 4.93 | 6900 | 0.3266 | 0.3211 |
| 0.3847 | 5.15 | 7200 | 0.3365 | 0.3156 |
| 0.3687 | 5.36 | 7500 | 0.3233 | 0.3014 |
| 0.376 | 5.58 | 7800 | 0.3345 | 0.2979 |
| 0.3732 | 5.79 | 8100 | 0.3105 | 0.2882 |
| 0.3705 | 6.0 | 8400 | 0.3252 | 0.2935 |
| 0.3311 | 6.22 | 8700 | 0.3266 | 0.2911 |
| 0.3386 | 6.43 | 9000 | 0.2975 | 0.2765 |
| 0.337 | 6.65 | 9300 | 0.3070 | 0.2826 |
| 0.3458 | 6.86 | 9600 | 0.3090 | 0.2766 |
| 0.3218 | 7.08 | 9900 | 0.3117 | 0.2748 |
| 0.3041 | 7.29 | 10200 | 0.2989 | 0.2651 |
| 0.3031 | 7.51 | 10500 | 0.3210 | 0.2672 |
| 0.3037 | 7.72 | 10800 | 0.3040 | 0.2667 |
| 0.3126 | 7.93 | 11100 | 0.2867 | 0.2613 |
| 0.3005 | 8.15 | 11400 | 0.3075 | 0.2610 |
| 0.2802 | 8.36 | 11700 | 0.3129 | 0.2608 |
| 0.2785 | 8.58 | 12000 | 0.3002 | 0.2579 |
| 0.2788 | 8.79 | 12300 | 0.3063 | 0.2476 |
| 0.286 | 9.01 | 12600 | 0.2971 | 0.2495 |
| 0.2534 | 9.22 | 12900 | 0.2766 | 0.2452 |
| 0.2542 | 9.44 | 13200 | 0.2893 | 0.2405 |
| 0.2576 | 9.65 | 13500 | 0.3038 | 0.2518 |
| 0.2552 | 9.86 | 13800 | 0.2851 | 0.2429 |
| 0.2487 | 10.08 | 14100 | 0.2858 | 0.2356 |
| 0.2441 | 10.29 | 14400 | 0.2999 | 0.2364 |
| 0.2345 | 10.51 | 14700 | 0.2907 | 0.2373 |
| 0.2352 | 10.72 | 15000 | 0.2885 | 0.2402 |
| 0.2464 | 10.94 | 15300 | 0.2896 | 0.2339 |
| 0.2219 | 11.15 | 15600 | 0.2999 | 0.2351 |
| 0.2257 | 11.37 | 15900 | 0.2930 | 0.2326 |
| 0.2184 | 11.58 | 16200 | 0.2980 | 0.2353 |
| 0.2182 | 11.79 | 16500 | 0.2832 | 0.2296 |
| 0.2224 | 12.01 | 16800 | 0.2797 | 0.2285 |
| 0.1991 | 12.22 | 17100 | 0.2810 | 0.2296 |
| 0.1993 | 12.44 | 17400 | 0.2949 | 0.2253 |
| 0.2042 | 12.65 | 17700 | 0.2864 | 0.2207 |
| 0.2083 | 12.87 | 18000 | 0.2860 | 0.2278 |
| 0.1998 | 13.08 | 18300 | 0.2872 | 0.2232 |
| 0.1919 | 13.3 | 18600 | 0.2894 | 0.2247 |
| 0.1925 | 13.51 | 18900 | 0.3007 | 0.2234 |
| 0.1966 | 13.72 | 19200 | 0.2831 | 0.2176 |
| 0.1942 | 13.94 | 19500 | 0.2811 | 0.2161 |
| 0.1778 | 14.15 | 19800 | 0.2901 | 0.2196 |
| 0.1755 | 14.37 | 20100 | 0.2864 | 0.2188 |
| 0.1795 | 14.58 | 20400 | 0.2927 | 0.2170 |
| 0.1817 | 14.8 | 20700 | 0.2846 | 0.2156 |
| 0.1754 | 15.01 | 21000 | 0.3036 | 0.2137 |
| 0.1674 | 15.23 | 21300 | 0.2876 | 0.2156 |
| 0.171 | 15.44 | 21600 | 0.2812 | 0.2106 |
| 0.1603 | 15.65 | 21900 | 0.2692 | 0.2093 |
| 0.1663 | 15.87 | 22200 | 0.2745 | 0.2094 |
| 0.1608 | 16.08 | 22500 | 0.2807 | 0.2043 |
| 0.1555 | 16.3 | 22800 | 0.2872 | 0.2036 |
| 0.1546 | 16.51 | 23100 | 0.2837 | 0.2049 |
| 0.1515 | 16.73 | 23400 | 0.2746 | 0.2031 |
| 0.1571 | 16.94 | 23700 | 0.2767 | 0.2047 |
| 0.1498 | 17.16 | 24000 | 0.2837 | 0.2050 |
| 0.143 | 17.37 | 24300 | 0.2745 | 0.2038 |
| 0.1471 | 17.58 | 24600 | 0.2787 | 0.2004 |
| 0.1442 | 17.8 | 24900 | 0.2779 | 0.2005 |
| 0.1481 | 18.01 | 25200 | 0.2906 | 0.2021 |
| 0.1318 | 18.23 | 25500 | 0.2936 | 0.1991 |
| 0.1396 | 18.44 | 25800 | 0.2913 | 0.1984 |
| 0.144 | 18.66 | 26100 | 0.2806 | 0.1953 |
| 0.1341 | 18.87 | 26400 | 0.2896 | 0.1972 |
| 0.1375 | 19.09 | 26700 | 0.2937 | 0.2002 |
| 0.1286 | 19.3 | 27000 | 0.2929 | 0.1954 |
| 0.1242 | 19.51 | 27300 | 0.2968 | 0.1962 |
| 0.1305 | 19.73 | 27600 | 0.2879 | 0.1944 |
| 0.1287 | 19.94 | 27900 | 0.2850 | 0.1937 |
| 0.1286 | 20.16 | 28200 | 0.2910 | 0.1961 |
| 0.121 | 20.37 | 28500 | 0.2908 | 0.1912 |
| 0.1264 | 20.59 | 28800 | 0.2853 | 0.1904 |
| 0.1238 | 20.8 | 29100 | 0.2913 | 0.1926 |
| 0.117 | 21.02 | 29400 | 0.2907 | 0.1922 |
| 0.1154 | 21.23 | 29700 | 0.2902 | 0.1888 |
| 0.1142 | 21.44 | 30000 | 0.2854 | 0.1907 |
| 0.1168 | 21.66 | 30300 | 0.2918 | 0.1873 |
| 0.1168 | 21.87 | 30600 | 0.2897 | 0.1873 |
| 0.1105 | 22.09 | 30900 | 0.2951 | 0.1856 |
| 0.1134 | 22.3 | 31200 | 0.2842 | 0.1847 |
| 0.1111 | 22.52 | 31500 | 0.2884 | 0.1829 |
| 0.1088 | 22.73 | 31800 | 0.2991 | 0.1840 |
| 0.1139 | 22.94 | 32100 | 0.2876 | 0.1839 |
| 0.1078 | 23.16 | 32400 | 0.2899 | 0.1830 |
| 0.1087 | 23.37 | 32700 | 0.2927 | 0.1803 |
| 0.1076 | 23.59 | 33000 | 0.2924 | 0.1801 |
| 0.11 | 23.8 | 33300 | 0.2877 | 0.1804 |
| 0.1067 | 24.02 | 33600 | 0.2918 | 0.1799 |
| 0.1104 | 24.23 | 33900 | 0.2908 | 0.1809 |
| 0.1023 | 24.45 | 34200 | 0.2939 | 0.1807 |
| 0.0993 | 24.66 | 34500 | 0.2925 | 0.1802 |
| 0.1053 | 24.87 | 34800 | 0.2907 | 0.1804 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.8.1+cu111
- Datasets 2.2.1
- Tokenizers 0.12.1
|
mischi001/bert-base-uncased-gu-128 | 141785b90561462e9f6649a797386f35e8986619 | 2022-06-08T16:28:30.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | mischi001 | null | mischi001/bert-base-uncased-gu-128 | 1 | null | transformers | 32,727 | ---
license: apache-2.0
---
|
PSW/samsum_reverse_train_distilbart_xsum_9-6_min10max2000_topp0.7_topk50_epoch3 | 9521367a7fcce07c370330ee0a2b037f9b0ca010 | 2022-06-08T17:51:23.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/samsum_reverse_train_distilbart_xsum_9-6_min10max2000_topp0.7_topk50_epoch3 | 1 | null | transformers | 32,728 | Entry not found |
victorlee071200/distilbert-base-cased-finetuned-squad_v2 | 8ab917ff6ffac95b40b4c4ee78824129ddf1ba6b | 2022-06-09T07:51:00.000Z | [
"pytorch",
"distilbert",
"question-answering",
"dataset:squad_v2",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | victorlee071200 | null | victorlee071200/distilbert-base-cased-finetuned-squad_v2 | 1 | null | transformers | 32,729 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distilbert-base-cased-finetuned-squad_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-cased-finetuned-squad_v2
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4225
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2416 | 1.0 | 8255 | 1.2973 |
| 0.9689 | 2.0 | 16510 | 1.3242 |
| 0.7803 | 3.0 | 24765 | 1.4225 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
erickfm/t5-base-finetuned-bias-sweep-240a1767 | 0dfa0f370ea46fd175ca27d7dbac6e0fcdfaf9c7 | 2022-06-08T18:16:43.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | erickfm | null | erickfm/t5-base-finetuned-bias-sweep-240a1767 | 1 | null | transformers | 32,730 | Entry not found |
CataME/tp_nlp_Robertuito | 7ff115c815dcb201510eb5753c76122e324217a4 | 2022-06-08T18:56:13.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | CataME | null | CataME/tp_nlp_Robertuito | 1 | null | transformers | 32,731 | Entry not found |
simecek/DNADebertaSmall | 4cbb4b0e1f70771daf4b5e0486a91a552f7b1ea6 | 2022-06-09T17:44:29.000Z | [
"pytorch",
"tensorboard",
"deberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | simecek | null | simecek/DNADebertaSmall | 1 | null | transformers | 32,732 | Entry not found |
CataME/tp_nlp_Ruperta | f4aaaea53c966af4c6b70e1a8183cccc7637c504 | 2022-06-08T21:36:00.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | CataME | null | CataME/tp_nlp_Ruperta | 1 | null | transformers | 32,733 | Entry not found |
Vlasta/humandna_deberta_default_empty_stud_8442 | e0947fd2730d41b0b030d315e6bafbfd8f8b1355 | 2022-06-08T21:39:20.000Z | [
"pytorch",
"deberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Vlasta | null | Vlasta/humandna_deberta_default_empty_stud_8442 | 1 | null | transformers | 32,734 | Entry not found |
meghazisofiane/opus-mt-en-ar-finetuned-en-to-ar-test2-instances | 15f0afbe7de199aebcdda8adf730e2d5527c17ca | 2022-06-08T23:32:45.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"dataset:un_multi",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | meghazisofiane | null | meghazisofiane/opus-mt-en-ar-finetuned-en-to-ar-test2-instances | 1 | null | transformers | 32,735 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- un_multi
model-index:
- name: opus-mt-en-ar-finetuned-en-to-ar-test2-instances
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ar-finetuned-en-to-ar-test2-instances
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ar](https://huggingface.co/Helsinki-NLP/opus-mt-en-ar) on the un_multi dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 1 | 0.8295 | 66.2993 | 37.0 |
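The BLEU value above was presumably produced by the training script's metric function (not shown in this card); a comparable corpus BLEU can be computed offline with the `evaluate` library. A minimal sketch with placeholder predictions and references:
```python
import evaluate

bleu = evaluate.load("sacrebleu")
# Placeholder data; in practice these come from model.generate on the un_multi eval split.
predictions = ["the general assembly adopted the resolution"]
references = [["the general assembly adopted the resolution"]]
print(bleu.compute(predictions=predictions, references=references)["score"])
```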
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
CataME/tp_nlp_Bertin | 965c6f48d0fb0c3d9d245c8430b06849afad4a07 | 2022-06-09T00:16:55.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | CataME | null | CataME/tp_nlp_Bertin | 1 | null | transformers | 32,736 | Entry not found |
erickfm/t5-base-finetuned-bias-sweep-21d27db3 | 07feac6da6c57cbbafac1f91ad317aa16e5ef1f5 | 2022-06-09T00:16:41.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | erickfm | null | erickfm/t5-base-finetuned-bias-sweep-21d27db3 | 1 | null | transformers | 32,737 | Entry not found |
valhalla/ldm-bert | afbadf8b80ed8e51a9eacd27ffadedf68b23f294 | 2022-06-09T02:01:28.000Z | [
"pytorch",
"ldmbert",
"transformers"
] | null | false | valhalla | null | valhalla/ldm-bert | 1 | null | transformers | 32,738 | Entry not found |
Vlasta/humandna_bert_default | abb5a36ac4d0e5f5464588782419fb71fc9bdb2e | 2022-06-09T02:31:30.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Vlasta | null | Vlasta/humandna_bert_default | 1 | null | transformers | 32,739 | Entry not found |
crystina-z/mdpr-tied-mmarco-ru | be7688c4e202a4d7ec7c3b055278cf502cdfc3ec | 2022-06-09T05:56:09.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | crystina-z | null | crystina-z/mdpr-tied-mmarco-ru | 1 | null | transformers | 32,740 | Entry not found |
twieland/SUBTITLE_ja-en_helsinki | 4ff1d591ac2ec1e312c4ab1632462c51e4f4a2e1 | 2022-06-09T10:23:09.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | twieland | null | twieland/SUBTITLE_ja-en_helsinki | 1 | null | transformers | 32,741 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: SUBTITLE_ja-en_helsinki
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SUBTITLE_ja-en_helsinki
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ja-en](https://huggingface.co/Helsinki-NLP/opus-mt-ja-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.4097
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.025 | 0.05 | 2000 | 5.1692 |
| 2.9548 | 0.09 | 4000 | 5.7128 |
| 2.8762 | 0.14 | 6000 | 5.9297 |
| 2.821 | 0.18 | 8000 | 6.0415 |
| 2.7826 | 0.23 | 10000 | 6.0416 |
| 2.7386 | 0.27 | 12000 | 6.0069 |
| 2.7036 | 0.32 | 14000 | 6.0192 |
| 2.678 | 0.37 | 16000 | 5.9286 |
| 2.6499 | 0.41 | 18000 | 5.9587 |
| 2.6261 | 0.46 | 20000 | 5.9044 |
| 2.6032 | 0.5 | 22000 | 5.8482 |
| 2.5708 | 0.55 | 24000 | 5.7760 |
| 2.5517 | 0.59 | 26000 | 5.7546 |
| 2.5336 | 0.64 | 28000 | 5.7447 |
| 2.5196 | 0.69 | 30000 | 5.7373 |
| 2.4957 | 0.73 | 32000 | 5.6429 |
| 2.483 | 0.78 | 34000 | 5.6874 |
| 2.4599 | 0.82 | 36000 | 5.6482 |
| 2.4468 | 0.87 | 38000 | 5.5951 |
| 2.4344 | 0.92 | 40000 | 5.6355 |
| 2.4223 | 0.96 | 42000 | 5.6135 |
| 2.3878 | 1.01 | 44000 | 5.6164 |
| 2.294 | 1.05 | 46000 | 5.5802 |
| 2.2896 | 1.1 | 48000 | 5.5924 |
| 2.2815 | 1.14 | 50000 | 5.5296 |
| 2.2702 | 1.19 | 52000 | 5.5119 |
| 2.2741 | 1.24 | 54000 | 5.4775 |
| 2.2586 | 1.28 | 56000 | 5.4663 |
| 2.2492 | 1.33 | 58000 | 5.4764 |
| 2.2411 | 1.37 | 60000 | 5.4444 |
| 2.2275 | 1.42 | 62000 | 5.4566 |
| 2.218 | 1.46 | 64000 | 5.4845 |
| 2.2086 | 1.51 | 66000 | 5.4681 |
| 2.1976 | 1.56 | 68000 | 5.4775 |
| 2.1877 | 1.6 | 70000 | 5.4619 |
| 2.177 | 1.65 | 72000 | 5.4621 |
| 2.1722 | 1.69 | 74000 | 5.4322 |
| 2.1599 | 1.74 | 76000 | 5.4348 |
| 2.1475 | 1.78 | 78000 | 5.4432 |
| 2.1477 | 1.83 | 80000 | 5.4239 |
| 2.134 | 1.88 | 82000 | 5.4182 |
| 2.1302 | 1.92 | 84000 | 5.4089 |
| 2.125 | 1.97 | 86000 | 5.4097 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Vlasta/humandna_distillbert_default_ | 9f8e60fa5f32758d4144825b540577a2616ee840 | 2022-06-09T08:31:24.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Vlasta | null | Vlasta/humandna_distillbert_default_ | 1 | null | transformers | 32,742 | Entry not found |
Vlasta/humandna_distillbert_default_dual_liability_4383 | ec04244f907f16519910b70e372d2956a04f283b | 2022-06-09T08:31:49.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Vlasta | null | Vlasta/humandna_distillbert_default_dual_liability_4383 | 1 | null | transformers | 32,743 | Entry not found |
RuiqianLi/wav2vec2-xls-r-300m_Mrbrown_finetune1 | cfcf73152dd78b85c7a5ef2fa417625324c677d3 | 2022-06-10T03:17:06.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:uob_singlish",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | RuiqianLi | null | RuiqianLi/wav2vec2-xls-r-300m_Mrbrown_finetune1 | 1 | null | transformers | 32,744 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- uob_singlish
model-index:
- name: wav2vec2-xls-r-300m_Mrbrown_finetune1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m_Mrbrown_finetune1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the uob_singlish dataset.
This run uses a self-made dataset: the audio of "https://www.youtube.com/watch?v=a2ZOTD3R7JI" was cut into slices and transcribed by hand, about 4 minutes of speech in total. The word error rate stays at 1.0 and the exact cause is unclear, but it is most likely a problem with the dataset, because an earlier fine-tune of the same pre-trained model on a standard Singlish corpus gave good results (see RuiqianLi/wav2vec2-large-xls-r-300m-singlish-colab).
It achieves the following results on the evaluation set:
- Loss: 3.0927
- Wer: 1.0
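Since the WER never moves off 1.0, one quick check (besides fixing the dataset itself) is to print a few decoded predictions next to their references and score them directly; mismatched casing, punctuation, or a broken vocabulary can all pin WER at 1.0. A minimal sketch with placeholder strings, using the `jiwer` package (not part of this card's training setup):
```python
from jiwer import wer

# Placeholder examples; in practice these come from processor.batch_decode(...)
references = ["so today i want to talk about singlish"]
predictions = ["so so so so so"]  # a collapsed/garbled hypothesis is typical when WER sticks at 1.0

for ref, hyp in zip(references, predictions):
    print(f"REF: {ref!r}\nHYP: {hyp!r}\nWER: {wer(ref, hyp):.2f}")
```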
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 3.7943 | 20.0 | 200 | 3.0597 | 1.0 |
| 2.9902 | 40.0 | 400 | 3.1604 | 1.0 |
| 2.9696 | 60.0 | 600 | 3.1112 | 1.0 |
| 2.8885 | 80.0 | 800 | 3.0234 | 1.0 |
| 2.8154 | 100.0 | 1000 | 3.0927 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
ghadeermobasher/WLT-BlueBERT-BC5CDR-Disease | 8453f07ad180599df64e22bd436c085db41d3636 | 2022-06-09T11:18:51.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ghadeermobasher | null | ghadeermobasher/WLT-BlueBERT-BC5CDR-Disease | 1 | null | transformers | 32,745 | Entry not found |
Dewone/wav2vec2-base-timit-demo-google-colab | 0b3f9ead19550135f924c5057bada164a3644475 | 2022-06-09T12:37:08.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Dewone | null | Dewone/wav2vec2-base-timit-demo-google-colab | 1 | null | transformers | 32,746 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5182
- Wer: 0.3329
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5177 | 1.0 | 500 | 1.8932 | 0.9837 |
| 0.854 | 2.01 | 1000 | 0.5295 | 0.5266 |
| 0.4205 | 3.01 | 1500 | 0.4299 | 0.4453 |
| 0.2934 | 4.02 | 2000 | 0.3940 | 0.4180 |
| 0.2272 | 5.02 | 2500 | 0.4269 | 0.4149 |
| 0.1856 | 6.02 | 3000 | 0.4277 | 0.4335 |
| 0.1668 | 7.03 | 3500 | 0.4214 | 0.3852 |
| 0.1388 | 8.03 | 4000 | 0.4410 | 0.3805 |
| 0.1254 | 9.04 | 4500 | 0.4152 | 0.3716 |
| 0.1073 | 10.04 | 5000 | 0.4257 | 0.3726 |
| 0.1 | 11.04 | 5500 | 0.4405 | 0.3642 |
| 0.0928 | 12.05 | 6000 | 0.4823 | 0.3708 |
| 0.0829 | 13.05 | 6500 | 0.4636 | 0.3548 |
| 0.0682 | 14.06 | 7000 | 0.4718 | 0.3599 |
| 0.0643 | 15.06 | 7500 | 0.4965 | 0.3583 |
| 0.0609 | 16.06 | 8000 | 0.5279 | 0.3576 |
| 0.0586 | 17.07 | 8500 | 0.4869 | 0.3528 |
| 0.055 | 18.07 | 9000 | 0.4671 | 0.3567 |
| 0.0465 | 19.08 | 9500 | 0.5090 | 0.3508 |
| 0.0432 | 20.08 | 10000 | 0.5024 | 0.3543 |
| 0.0427 | 21.08 | 10500 | 0.4658 | 0.3417 |
| 0.033 | 22.09 | 11000 | 0.5276 | 0.3418 |
| 0.0297 | 23.09 | 11500 | 0.5095 | 0.3415 |
| 0.0317 | 24.1 | 12000 | 0.5061 | 0.3364 |
| 0.0262 | 25.1 | 12500 | 0.4910 | 0.3367 |
| 0.0257 | 26.1 | 13000 | 0.4869 | 0.3331 |
| 0.0237 | 27.11 | 13500 | 0.5023 | 0.3333 |
| 0.0228 | 28.11 | 14000 | 0.5131 | 0.3333 |
| 0.021 | 29.12 | 14500 | 0.5182 | 0.3329 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
huggingtweets/aylesim | 5bfa4b047729d385973edacb1549ee008092aceb | 2022-06-09T11:10:26.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/aylesim | 1 | null | transformers | 32,747 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1513156868612448256/2nXWRcn5_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">mira</div>
<div style="text-align: center; font-size: 14px;">@aylesim</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from mira.
| Data | mira |
| --- | --- |
| Tweets downloaded | 3215 |
| Retweets | 255 |
| Short tweets | 765 |
| Tweets kept | 2195 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3buhour0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @aylesim's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/c2a7aq5o) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/c2a7aq5o/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/aylesim')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ghadeermobasher/WLT-SciBERT-BC5CDR-Chemical | 3342380c5d728bc9a3326d27d942f04bfb4e08e0 | 2022-06-09T12:07:03.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ghadeermobasher | null | ghadeermobasher/WLT-SciBERT-BC5CDR-Chemical | 1 | null | transformers | 32,748 | Entry not found |
Vlasta/humandna_distillbert_random_systematic_walrus_56 | 4300e06b27effe51bf990abc34dbac899cb91564 | 2022-06-09T12:24:02.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Vlasta | null | Vlasta/humandna_distillbert_random_systematic_walrus_56 | 1 | null | transformers | 32,749 | Entry not found |
twieland/MIX1_ja-en_helsinki | 19e7728b31402b7e41a816f0ab69881448e4ff2b | 2022-06-10T05:49:30.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | twieland | null | twieland/MIX1_ja-en_helsinki | 1 | null | transformers | 32,750 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: MIX1_ja-en_helsinki
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MIX1_ja-en_helsinki
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ja-en](https://huggingface.co/Helsinki-NLP/opus-mt-ja-en) on a combination of Visual Novel, Light Novel, and Subtitle data. A total of ~10MM lines of training data were used.
It achieves the following results on the evaluation set:
- Loss: 1.7947
- Otaku Benchmark VN BLEU: 17.78
- Otaku Benchmark LN BLEU: 11.80
- Otaku Benchmark MANGA BLEU: 13.66
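The card does not include a usage snippet. As a minimal sketch (not part of the original card), the checkpoint can be loaded for Japanese-to-English translation through the standard `transformers` pipeline; the example sentence below is an arbitrary placeholder:

```python
from transformers import pipeline

# Marian-based ja->en translation with the fine-tuned checkpoint
translator = pipeline("translation", model="twieland/MIX1_ja-en_helsinki")

# Arbitrary placeholder sentence, not taken from the training data
result = translator("今日は天気がいいですね。", max_length=128)
print(result[0]["translation_text"])
```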
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 2.7495 | 0.01 | 2000 | 2.5989 |
| 2.5415 | 0.03 | 4000 | 2.4746 |
| 2.4409 | 0.04 | 6000 | 2.4731 |
| 2.3743 | 0.05 | 8000 | 2.4012 |
| 2.3254 | 0.06 | 10000 | 2.3904 |
| 2.2857 | 0.08 | 12000 | 2.3649 |
| 2.2448 | 0.09 | 14000 | 2.3188 |
| 2.2158 | 0.1 | 16000 | 2.2975 |
| 2.193 | 0.11 | 18000 | 2.2756 |
| 2.1669 | 0.13 | 20000 | 2.2852 |
| 2.144 | 0.14 | 22000 | 2.2689 |
| 2.1222 | 0.15 | 24000 | 2.2721 |
| 2.1045 | 0.16 | 26000 | 2.2489 |
| 2.0885 | 0.18 | 28000 | 2.2359 |
| 2.0732 | 0.19 | 30000 | 2.2771 |
| 2.0584 | 0.2 | 32000 | 2.2582 |
| 2.0471 | 0.21 | 34000 | 2.2093 |
| 2.0369 | 0.23 | 36000 | 2.1768 |
| 2.0241 | 0.24 | 38000 | 2.1884 |
| 2.0196 | 0.25 | 40000 | 2.2025 |
| 2.004 | 0.27 | 42000 | 2.1507 |
| 1.9936 | 0.28 | 44000 | 2.1668 |
| 1.9869 | 0.29 | 46000 | 2.1432 |
| 1.9735 | 0.3 | 48000 | 2.1662 |
| 1.9651 | 0.32 | 50000 | 2.1824 |
| 1.9551 | 0.33 | 52000 | 2.1608 |
| 1.9485 | 0.34 | 54000 | 2.1322 |
| 1.9421 | 0.35 | 56000 | 2.1476 |
| 1.9303 | 0.37 | 58000 | 2.0994 |
| 1.9236 | 0.38 | 60000 | 2.1182 |
| 1.9183 | 0.39 | 62000 | 2.1305 |
| 1.9108 | 0.4 | 64000 | 2.1469 |
| 1.9051 | 0.42 | 66000 | 2.1414 |
| 1.9018 | 0.43 | 68000 | 2.1089 |
| 1.8959 | 0.44 | 70000 | 2.0908 |
| 1.886 | 0.46 | 72000 | 2.0968 |
| 1.8802 | 0.47 | 74000 | 2.0503 |
| 1.8713 | 0.48 | 76000 | 2.0542 |
| 1.8648 | 0.49 | 78000 | 2.0990 |
| 1.8599 | 0.51 | 80000 | 2.1112 |
| 1.8563 | 0.52 | 82000 | 2.1007 |
| 1.8541 | 0.53 | 84000 | 2.0849 |
| 1.845 | 0.54 | 86000 | 2.0831 |
| 1.8448 | 0.56 | 88000 | 2.0560 |
| 1.8342 | 0.57 | 90000 | 2.0349 |
| 1.8344 | 0.58 | 92000 | 2.0301 |
| 1.8291 | 0.59 | 94000 | 2.0300 |
| 1.819 | 0.61 | 96000 | 2.0378 |
| 1.8154 | 0.62 | 98000 | 2.0197 |
| 1.82 | 0.63 | 100000 | 2.0463 |
| 1.8081 | 0.64 | 102000 | 2.0077 |
| 1.8046 | 0.66 | 104000 | 2.0101 |
| 1.7978 | 0.67 | 106000 | 2.0150 |
| 1.7934 | 0.68 | 108000 | 2.0215 |
| 1.7904 | 0.7 | 110000 | 2.0278 |
| 1.7871 | 0.71 | 112000 | 2.0588 |
| 1.779 | 0.72 | 114000 | 2.0062 |
| 1.7784 | 0.73 | 116000 | 2.0300 |
| 1.7749 | 0.75 | 118000 | 1.9664 |
| 1.7691 | 0.76 | 120000 | 2.0033 |
| 1.7622 | 0.77 | 122000 | 1.9983 |
| 1.7587 | 0.78 | 124000 | 2.0030 |
| 1.755 | 0.8 | 126000 | 1.9955 |
| 1.7531 | 0.81 | 128000 | 1.9764 |
| 1.7439 | 0.82 | 130000 | 1.9942 |
| 1.7406 | 0.83 | 132000 | 2.0221 |
| 1.7385 | 0.85 | 134000 | 1.9835 |
| 1.7332 | 0.86 | 136000 | 1.9967 |
| 1.7332 | 0.87 | 138000 | 2.0247 |
| 1.7309 | 0.88 | 140000 | 1.9817 |
| 1.7248 | 0.9 | 142000 | 2.0063 |
| 1.7209 | 0.91 | 144000 | 1.9583 |
| 1.7154 | 0.92 | 146000 | 1.9779 |
| 1.7153 | 0.94 | 148000 | 1.9478 |
| 1.7094 | 0.95 | 150000 | 1.9706 |
| 1.7061 | 0.96 | 152000 | 1.9605 |
| 1.7017 | 0.97 | 154000 | 1.9447 |
| 1.6965 | 0.99 | 156000 | 1.9419 |
| 1.6929 | 1.0 | 158000 | 1.9589 |
| 1.6628 | 1.01 | 160000 | 1.9383 |
| 1.6535 | 1.02 | 162000 | 1.9487 |
| 1.6495 | 1.04 | 164000 | 1.9400 |
| 1.6516 | 1.05 | 166000 | 1.9353 |
| 1.6513 | 1.06 | 168000 | 1.9253 |
| 1.6518 | 1.07 | 170000 | 1.9132 |
| 1.6491 | 1.09 | 172000 | 1.9076 |
| 1.6453 | 1.1 | 174000 | 1.9192 |
| 1.6426 | 1.11 | 176000 | 1.9191 |
| 1.6353 | 1.13 | 178000 | 1.9367 |
| 1.6352 | 1.14 | 180000 | 1.9218 |
| 1.6304 | 1.15 | 182000 | 1.9305 |
| 1.6299 | 1.16 | 184000 | 1.9072 |
| 1.6263 | 1.18 | 186000 | 1.9211 |
| 1.6284 | 1.19 | 188000 | 1.9037 |
| 1.6237 | 1.2 | 190000 | 1.8951 |
| 1.6231 | 1.21 | 192000 | 1.8998 |
| 1.6184 | 1.23 | 194000 | 1.8960 |
| 1.6153 | 1.24 | 196000 | 1.8776 |
| 1.6122 | 1.25 | 198000 | 1.8747 |
| 1.6109 | 1.26 | 200000 | 1.8951 |
| 1.6072 | 1.28 | 202000 | 1.8705 |
| 1.6094 | 1.29 | 204000 | 1.8903 |
| 1.6063 | 1.3 | 206000 | 1.8660 |
| 1.599 | 1.31 | 208000 | 1.8696 |
| 1.5931 | 1.33 | 210000 | 1.8598 |
| 1.5943 | 1.34 | 212000 | 1.8760 |
| 1.5906 | 1.35 | 214000 | 1.8833 |
| 1.5858 | 1.37 | 216000 | 1.8645 |
| 1.5873 | 1.38 | 218000 | 1.8620 |
| 1.5842 | 1.39 | 220000 | 1.8632 |
| 1.5808 | 1.4 | 222000 | 1.8782 |
| 1.5756 | 1.42 | 224000 | 1.8627 |
| 1.5728 | 1.43 | 226000 | 1.8649 |
| 1.5709 | 1.44 | 228000 | 1.8735 |
| 1.5704 | 1.45 | 230000 | 1.8630 |
| 1.5659 | 1.47 | 232000 | 1.8598 |
| 1.5637 | 1.48 | 234000 | 1.8519 |
| 1.5628 | 1.49 | 236000 | 1.8569 |
| 1.5559 | 1.5 | 238000 | 1.8401 |
| 1.5532 | 1.52 | 240000 | 1.8528 |
| 1.557 | 1.53 | 242000 | 1.8637 |
| 1.5499 | 1.54 | 244000 | 1.8701 |
| 1.5476 | 1.55 | 246000 | 1.8423 |
| 1.5502 | 1.57 | 248000 | 1.8320 |
| 1.5469 | 1.58 | 250000 | 1.8542 |
| 1.5382 | 1.59 | 252000 | 1.8526 |
| 1.5396 | 1.61 | 254000 | 1.8537 |
| 1.528 | 1.62 | 256000 | 1.8248 |
| 1.532 | 1.63 | 258000 | 1.8322 |
| 1.5269 | 1.64 | 260000 | 1.8381 |
| 1.5269 | 1.66 | 262000 | 1.8389 |
| 1.5269 | 1.67 | 264000 | 1.8445 |
| 1.525 | 1.68 | 266000 | 1.8232 |
| 1.5175 | 1.69 | 268000 | 1.8561 |
| 1.5172 | 1.71 | 270000 | 1.8342 |
| 1.5174 | 1.72 | 272000 | 1.8167 |
| 1.5114 | 1.73 | 274000 | 1.8281 |
| 1.5094 | 1.74 | 276000 | 1.8164 |
| 1.5083 | 1.76 | 278000 | 1.8317 |
| 1.5047 | 1.77 | 280000 | 1.8207 |
| 1.5045 | 1.78 | 282000 | 1.8155 |
| 1.497 | 1.8 | 284000 | 1.8275 |
| 1.4996 | 1.81 | 286000 | 1.8152 |
| 1.497 | 1.82 | 288000 | 1.8137 |
| 1.4967 | 1.83 | 290000 | 1.8109 |
| 1.4936 | 1.85 | 292000 | 1.8037 |
| 1.4867 | 1.86 | 294000 | 1.7955 |
| 1.4859 | 1.87 | 296000 | 1.8181 |
| 1.4869 | 1.88 | 298000 | 1.7999 |
| 1.4811 | 1.9 | 300000 | 1.8062 |
| 1.4831 | 1.91 | 302000 | 1.8042 |
| 1.4791 | 1.92 | 304000 | 1.8020 |
| 1.4797 | 1.93 | 306000 | 1.7972 |
| 1.483 | 1.95 | 308000 | 1.8044 |
| 1.4748 | 1.96 | 310000 | 1.8036 |
| 1.4772 | 1.97 | 312000 | 1.7958 |
| 1.4708 | 1.98 | 314000 | 1.7967 |
| 1.4743 | 2.0 | 316000 | 1.7947 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
vesteinn/icebert-xlmr-ic3-iec | 6fbdd3cb4c1aaf5ffd3c64182521b7f676ec26a4 | 2022-06-09T14:29:05.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"license:cc-by-4.0",
"autotrain_compatible"
] | token-classification | false | vesteinn | null | vesteinn/icebert-xlmr-ic3-iec | 1 | null | transformers | 32,751 | ---
license: cc-by-4.0
---
|
flood/xlm-roberta-base-finetuned-panx-en | 563b1b98cf1a6301975cddc87c00e2d750559925 | 2022-06-22T13:43:46.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | flood | null | flood/xlm-roberta-base-finetuned-panx-en | 1 | null | transformers | 32,752 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.6777777777777778
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4025
- F1: 0.6778
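PAN-X is a named-entity recognition benchmark, so a natural way to query the checkpoint is a token-classification pipeline. The snippet below is an assumed usage sketch, not part of the original card, and the input sentence is an arbitrary placeholder:

```python
from transformers import pipeline

# NER inference with the fine-tuned checkpoint; aggregation_strategy
# merges word-piece predictions into whole entity spans.
ner = pipeline(
    "token-classification",
    model="flood/xlm-roberta-base-finetuned-panx-en",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel visited the Hugging Face office in New York."))
```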
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1069 | 1.0 | 50 | 0.5201 | 0.5010 |
| 0.4975 | 2.0 | 100 | 0.4503 | 0.6198 |
| 0.3705 | 3.0 | 150 | 0.4025 | 0.6778 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
roshnir/mBert-finetuned-mlqa-dev-en-zh-hi | be00e7308a2936fc63e4e2b1f38f04d4ef4d8f4b | 2022-06-09T18:32:18.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | roshnir | null | roshnir/mBert-finetuned-mlqa-dev-en-zh-hi | 1 | null | transformers | 32,753 | Entry not found |
ajsmith201/t5-small-finetuned-bias-267d8789 | 8e9df5cd78d738b8d8581517f2f414be8d6a5726 | 2022-06-09T20:15:51.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ajsmith201 | null | ajsmith201/t5-small-finetuned-bias-267d8789 | 1 | null | transformers | 32,754 | Entry not found |
simecek/MouseDNADeberta | 243650849ec2f220c9aaa84378dd2024199c92b8 | 2022-06-09T23:58:58.000Z | [
"pytorch",
"tensorboard",
"deberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | simecek | null | simecek/MouseDNADeberta | 1 | null | transformers | 32,755 | Entry not found |
simecek/FruitflyDNADeberta | 81280fb594fa5d6aa9b88677280397564073d39d | 2022-06-10T00:39:45.000Z | [
"pytorch",
"tensorboard",
"deberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | simecek | null | simecek/FruitflyDNADeberta | 1 | null | transformers | 32,756 | Entry not found |
lak/poem_project_1 | b489192b188ced70249fe27d0450a3803f98c2de | 2022-06-09T20:41:30.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | lak | null | lak/poem_project_1 | 1 | null | transformers | 32,757 | Entry not found |
Vlasta/humandna_Electra_random | 88f4a945b27120720fdddb80fa2be6694f0797b6 | 2022-06-09T21:32:22.000Z | [
"pytorch",
"electra",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Vlasta | null | Vlasta/humandna_Electra_random | 1 | null | transformers | 32,758 | Entry not found |
nthakur/contriever-base-msmarco | 39068b4625fd866fc9f65a7689bfb4604e3ab5dd | 2022-06-09T22:01:51.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | nthakur | null | nthakur/contriever-base-msmarco | 1 | null | sentence-transformers | 32,759 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# nthakur/contriever-base-msmarco
This is a port of the [Contriever MSMARCO Model](https://huggingface.co/facebook/contriever-msmarco) to a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('nthakur/contriever-base-msmarco')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('nthakur/contriever-base-msmarco')
model = AutoModel.from_pretrained('nthakur/contriever-base-msmarco')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=nthakur/contriever-base-msmarco)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 509, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
Have a look at: [Contriever Model](https://github.com/facebookresearch/contriever).
<!--- Describe where people can find more information --> |
huggingtweets/wick_is_tired | 1e1663bac357edd13bf17c172c30524f6e13edfd | 2022-06-10T01:42:38.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/wick_is_tired | 1 | null | transformers | 32,760 | ---
language: en
thumbnail: http://www.huggingtweets.com/wick_is_tired/1654825353897/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1381121023567917058/JyYfOsKC_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">IntroWick</div>
<div style="text-align: center; font-size: 14px;">@wick_is_tired</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from IntroWick.
| Data | IntroWick |
| --- | --- |
| Tweets downloaded | 257 |
| Retweets | 29 |
| Short tweets | 77 |
| Tweets kept | 151 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/az5xmdyn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @wick_is_tired's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/lxj96tnp) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/lxj96tnp/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/wick_is_tired')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
NadiaSan/udesa-model-aah-es-20k | a067c6fffe5b0229dab336e53c2510a5291f291b | 2022-06-10T01:50:39.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | NadiaSan | null | NadiaSan/udesa-model-aah-es-20k | 1 | null | transformers | 32,761 | Entry not found |
enoriega/rule_learning_margin_1mm | b2ff12bcb27fbd494cf5eab74c8a182ea027ccf1 | 2022-06-11T02:04:28.000Z | [
"pytorch",
"tensorboard",
"bert",
"dataset:enoriega/odinsynth_dataset",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | null | false | enoriega | null | enoriega/rule_learning_margin_1mm | 1 | null | transformers | 32,762 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- enoriega/odinsynth_dataset
model-index:
- name: rule_learning_margin_1mm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rule_learning_margin_1mm
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the enoriega/odinsynth_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3806
- Margin Accuracy: 0.8239
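The card does not document an inference interface for the rule-learning task, so the snippet below is only an assumed sketch: it loads the fine-tuned encoder with the generic Auto classes and extracts features for a hypothetical rule/sentence pair; the margin-based scoring head itself is not described here.

```python
from transformers import AutoTokenizer, AutoModel
import torch

# Load the fine-tuned encoder (generic feature extraction only; how
# rule/sentence pairs are scored is not documented in this card).
tokenizer = AutoTokenizer.from_pretrained("enoriega/rule_learning_margin_1mm")
model = AutoModel.from_pretrained("enoriega/rule_learning_margin_1mm")

# The rule and sentence strings below are hypothetical placeholders
inputs = tokenizer("a hypothetical Odinsynth rule", "a hypothetical sentence",
                   return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```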
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2000
- total_train_batch_size: 8000
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Margin Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------------:|
| 0.6482 | 0.16 | 20 | 0.6494 | 0.7263 |
| 0.5151 | 0.32 | 40 | 0.5088 | 0.7792 |
| 0.4822 | 0.48 | 60 | 0.4429 | 0.8045 |
| 0.4472 | 0.64 | 80 | 0.4265 | 0.8107 |
| 0.4352 | 0.8 | 100 | 0.4155 | 0.8132 |
| 0.4335 | 0.96 | 120 | 0.4128 | 0.8116 |
| 0.4113 | 1.12 | 140 | 0.4119 | 0.8142 |
| 0.4186 | 1.28 | 160 | 0.4075 | 0.8120 |
| 0.42 | 1.44 | 180 | 0.4072 | 0.8123 |
| 0.4175 | 1.6 | 200 | 0.4080 | 0.8130 |
| 0.4097 | 1.76 | 220 | 0.4031 | 0.8128 |
| 0.397 | 1.92 | 240 | 0.4004 | 0.8130 |
| 0.4115 | 2.08 | 260 | 0.3979 | 0.8136 |
| 0.4108 | 2.24 | 280 | 0.3940 | 0.8167 |
| 0.4125 | 2.4 | 300 | 0.3879 | 0.8218 |
| 0.4117 | 2.56 | 320 | 0.3848 | 0.8217 |
| 0.3967 | 2.72 | 340 | 0.3818 | 0.8231 |
| 0.3947 | 2.88 | 360 | 0.3813 | 0.8240 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
huggingtweets/wickdedaccount | 1d92ae3987b04ae5ae5f8172b9b004f381d65c56 | 2022-06-10T02:20:32.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/wickdedaccount | 1 | null | transformers | 32,763 | ---
language: en
thumbnail: http://www.huggingtweets.com/wickdedaccount/1654827628283/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1353151127026597889/Yarj5Kfr_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">pp</div>
<div style="text-align: center; font-size: 14px;">@wickdedaccount</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from pp.
| Data | pp |
| --- | --- |
| Tweets downloaded | 1028 |
| Retweets | 822 |
| Short tweets | 119 |
| Tweets kept | 87 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1of8kmw1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @wickdedaccount's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2q4m95l8) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2q4m95l8/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/wickdedaccount')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/loganpaul | 037dd662c698e54be89720d7a9839420ecf488c2 | 2022-06-10T02:29:07.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/loganpaul | 1 | null | transformers | 32,764 | ---
language: en
thumbnail: http://www.huggingtweets.com/loganpaul/1654828143127/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1401837042934468611/okzqIoMb_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Logan Paul</div>
<div style="text-align: center; font-size: 14px;">@loganpaul</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Logan Paul.
| Data | Logan Paul |
| --- | --- |
| Tweets downloaded | 3245 |
| Retweets | 170 |
| Short tweets | 318 |
| Tweets kept | 2757 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/wj9pph5f/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @loganpaul's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1sqzuxgo) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1sqzuxgo/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/loganpaul')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
simecek/humandna_DEBERTASMALL_1epoch | 245df0039ab266f58fb30363eaee208cd7f6544d | 2022-06-10T02:45:42.000Z | [
"pytorch",
"deberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | simecek | null | simecek/humandna_DEBERTASMALL_1epoch | 1 | null | transformers | 32,765 | Entry not found |
ajsmith201/t5-large-finetuned-bias-2e10ce74 | 1971225d2ecb2d12e0c43eba6a6931a7d4266d15 | 2022-06-10T02:57:13.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ajsmith201 | null | ajsmith201/t5-large-finetuned-bias-2e10ce74 | 1 | null | transformers | 32,766 | Entry not found |
ajsmith201/t5-small-finetuned-bias-72bc782c | bab61c81605fe7d796593f31441fe237dce35747 | 2022-06-10T03:11:20.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ajsmith201 | null | ajsmith201/t5-small-finetuned-bias-72bc782c | 1 | null | transformers | 32,767 | Entry not found |
huggingtweets/ralee85 | 8df10ff33848a899102c79efa318a9e985f081d2 | 2022-06-10T06:27:59.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/ralee85 | 1 | null | transformers | 32,768 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/964497068424249345/Y6ce6atF_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Rob Lee</div>
<div style="text-align: center; font-size: 14px;">@ralee85</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Rob Lee.
| Data | Rob Lee |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 22 |
| Short tweets | 1590 |
| Tweets kept | 1638 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/164xyalb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ralee85's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3pc7ca11) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3pc7ca11/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ralee85')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
BettyFei/t5-small-finetuned-xsum | 2fcd66897b118ef8ef89e0fc80bd598b383edcb7 | 2022-06-10T08:48:52.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | BettyFei | null | BettyFei/t5-small-finetuned-xsum | 1 | null | transformers | 32,769 | Entry not found |
FabianWillner/distilbert-base-uncased-finetuned-squad-finetuned-triviaqa | b06ecc62caf11fb21d0eb8d2c9244f3034472cc3 | 2022-06-10T11:54:41.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | FabianWillner | null | FabianWillner/distilbert-base-uncased-finetuned-squad-finetuned-triviaqa | 1 | null | transformers | 32,770 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squad-finetuned-triviaqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad-finetuned-triviaqa
This model is a fine-tuned version of [FabianWillner/distilbert-base-uncased-finetuned-squad](https://huggingface.co/FabianWillner/distilbert-base-uncased-finetuned-squad) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9583
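As a minimal usage sketch (not part of the original card), the checkpoint can be queried through the question-answering pipeline; the question/context pair below is an arbitrary placeholder:

```python
from transformers import pipeline

# Extractive QA with the SQuAD- then TriviaQA-fine-tuned checkpoint
qa = pipeline(
    "question-answering",
    model="FabianWillner/distilbert-base-uncased-finetuned-squad-finetuned-triviaqa",
)
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
)
print(result["answer"], result["score"])
```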
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.9722 | 1.0 | 11195 | 0.9665 |
| 0.7558 | 2.0 | 22390 | 0.9583 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
simecek/humandna_ELECTRA_1epoch | 71c5a1de61a3f8638f7071cad2b32e07b0038bd5 | 2022-06-10T09:49:01.000Z | [
"pytorch",
"electra",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | simecek | null | simecek/humandna_ELECTRA_1epoch | 1 | null | transformers | 32,771 | Entry not found |
stig/distilbert-base-uncased-finetuned | ceb218c0f9a55e75a29df521c8c6f4efe128ed2b | 2022-06-10T10:59:39.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | stig | null | stig/distilbert-base-uncased-finetuned | 1 | null | transformers | 32,772 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8627
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0255 | 1.0 | 2312 | 1.9202 |
| 1.7483 | 2.0 | 4624 | 1.8437 |
| 1.5733 | 3.0 | 6936 | 1.8627 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
becher/t5-small-finetuned-arxiv | 575d0872a8bbc5be0e08f0b3faf697361f4b5347 | 2022-06-10T12:28:48.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | becher | null | becher/t5-small-finetuned-arxiv | 1 | null | transformers | 32,773 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetuned-arxiv
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-arxiv
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1559
- Rouge1: 37.854
- Rouge2: 20.4934
- Rougel: 33.9992
- Rougelsum: 33.9943
- Gen Len: 15.847
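The card does not state the exact task framing; judging from the ROUGE metrics and the ~16-token generation length, it appears to be short-summary (title-style) generation for arXiv abstracts. A usage sketch under that assumption, with a placeholder abstract:

```python
from transformers import pipeline

# Summarization with the fine-tuned T5 checkpoint; the short max_length
# mirrors the ~16-token outputs reported above.
summarizer = pipeline("summarization", model="becher/t5-small-finetuned-arxiv")

abstract = ("We study transfer learning for low-resource machine translation "
            "and show that multilingual pretraining improves BLEU scores.")
print(summarizer(abstract, max_length=32, min_length=5)[0]["summary_text"])
```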
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:-------:|:---------:|:-------:|
| 2.3848 | 1.0 | 3564 | 2.1559 | 37.854 | 20.4934 | 33.9992 | 33.9943 | 15.847 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
daedalus2003/HouseBot | b0fee41b5fbf36567a72cd62a6a1995efcc71fbc | 2022-06-10T12:37:49.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | daedalus2003 | null | daedalus2003/HouseBot | 1 | null | transformers | 32,774 | ---
tags:
- conversational
---
# House MD DialoGPT Model |
income/bpr-base-msmarco-contriever | 222ce4846c85226087d2655a3ac7f52b76fd7979 | 2022-06-10T17:16:00.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | income | null | income/bpr-base-msmarco-contriever | 1 | null | sentence-transformers | 32,775 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# income/bpr-base-msmarco-contriever
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('income/bpr-base-msmarco-contriever')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('income/bpr-base-msmarco-contriever')
model = AutoModel.from_pretrained('income/bpr-base-msmarco-contriever')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=income/bpr-base-msmarco-contriever)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 6653 with parameters:
```
{'batch_size': 75, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`bpr_loss.BPRLossFunction`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"correct_bias": false,
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
huggingtweets/ninjasexparty | 48bd29477e7096a44db9dddbadb181f89c009da3 | 2022-06-10T19:56:27.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/ninjasexparty | 1 | null | transformers | 32,776 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1446572046679302144/jF9HS_Yd_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Ninja Sex Party</div>
<div style="text-align: center; font-size: 14px;">@ninjasexparty</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Ninja Sex Party.
| Data | Ninja Sex Party |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 631 |
| Short tweets | 439 |
| Tweets kept | 2180 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1ik0ji2l/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ninjasexparty's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1jyhmzsa) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1jyhmzsa/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ninjasexparty')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
erickfm/t5-small-finetuned-bias-sweep-b7414781 | 8f796a0642745c9c65b263f2bc7cb995a6e8e1b9 | 2022-06-10T23:59:36.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | erickfm | null | erickfm/t5-small-finetuned-bias-sweep-b7414781 | 1 | null | transformers | 32,777 | Entry not found |
erickfm/t5-small-finetuned-bias-sweep-f15c71f5 | f6d8f23068107f640dbde8a51b5ec42fa6b0f022 | 2022-06-11T00:01:48.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | erickfm | null | erickfm/t5-small-finetuned-bias-sweep-f15c71f5 | 1 | null | transformers | 32,778 | Entry not found |
huggingtweets/froliki2108 | fbe1850a668d514850cfe88df9a4097e418fdee0 | 2022-06-11T00:04:16.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/froliki2108 | 1 | null | transformers | 32,779 | ---
language: en
thumbnail: http://www.huggingtweets.com/froliki2108/1654905851117/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1447692349493100549/1PV2c-PJ_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Frolikiπππ</div>
<div style="text-align: center; font-size: 14px;">@froliki2108</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Frolikiπππ.
| Data | Frolikiπππ |
| --- | --- |
| Tweets downloaded | 2223 |
| Retweets | 1133 |
| Short tweets | 229 |
| Tweets kept | 861 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2tug3miv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @froliki2108's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3otsf5pj) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3otsf5pj/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/froliki2108')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/yomancuso | 75be91fd11ca24c3475f8c001504b977460db93d | 2022-06-11T01:08:18.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/yomancuso | 1 | null | transformers | 32,780 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1490538004607385602/laSBwC6u_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Davey Wavey</div>
<div style="text-align: center; font-size: 14px;">@yomancuso</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Davey Wavey.
| Data | Davey Wavey |
| --- | --- |
| Tweets downloaded | 3176 |
| Retweets | 1207 |
| Short tweets | 485 |
| Tweets kept | 1484 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2i0ci708/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @yomancuso's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3mexojoq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3mexojoq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/yomancuso')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
gary109/ai-light-dance_singing_ft_pretrain_wav2vec2-large-lv60 | c6118014f47123277bf2ce91bea57de1bfe78ce6 | 2022-06-14T16:00:09.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"gary109/AI_Light_Dance",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | gary109 | null | gary109/ai-light-dance_singing_ft_pretrain_wav2vec2-large-lv60 | 1 | null | transformers | 32,781 | ---
tags:
- automatic-speech-recognition
- gary109/AI_Light_Dance
- generated_from_trainer
model-index:
- name: ai-light-dance_singing_ft_pretrain_wav2vec2-large-lv60
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_singing_ft_pretrain_wav2vec2-large-lv60
This model is a fine-tuned version of [gary109/ai-light-dance_pretrain_wav2vec2-large-lv60](https://huggingface.co/gary109/ai-light-dance_pretrain_wav2vec2-large-lv60) on the GARY109/AI_LIGHT_DANCE - ONSET-SINGING dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4961
- Wer: 0.9206
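A minimal transcription sketch (assumed usage, not part of the original card); `singing.wav` is a hypothetical path to a 16 kHz mono recording:

```python
from transformers import pipeline

# CTC-based speech recognition with the fine-tuned wav2vec2 checkpoint
asr = pipeline(
    "automatic-speech-recognition",
    model="gary109/ai-light-dance_singing_ft_pretrain_wav2vec2-large-lv60",
)
print(asr("singing.wav")["text"])  # "singing.wav" is a placeholder file
```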
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.6096 | 1.0 | 552 | 1.7650 | 1.0053 |
| 1.6294 | 2.0 | 1104 | 1.6735 | 0.9591 |
| 1.5509 | 3.0 | 1656 | 1.6170 | 0.9852 |
| 1.5175 | 4.0 | 2208 | 1.6312 | 0.9626 |
| 1.5267 | 5.0 | 2760 | 1.5032 | 0.9249 |
| 1.4055 | 6.0 | 3312 | 1.6107 | 0.9438 |
| 1.3267 | 7.0 | 3864 | 1.5386 | 0.9378 |
| 1.312 | 8.0 | 4416 | 1.4961 | 0.9206 |
| 1.3245 | 9.0 | 4968 | 1.5158 | 0.9182 |
| 1.2885 | 10.0 | 5520 | 1.5296 | 0.9230 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 2.2.3.dev0
- Tokenizers 0.12.1
|
erickfm/t5-base-finetuned-bias-sweep-41313d89 | c54fb3ac9a22597bc20475b8b7eca68cc44dc6ec | 2022-06-11T05:22:41.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | erickfm | null | erickfm/t5-base-finetuned-bias-sweep-41313d89 | 1 | null | transformers | 32,782 | Entry not found |
Jawaher/LIAR-fake-news-roberta-base | cb10690d29948434d3aae4c3926e987595adddb9 | 2022-06-11T11:12:24.000Z | [
"pytorch",
"tf",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Jawaher | null | Jawaher/LIAR-fake-news-roberta-base | 1 | null | transformers | 32,783 | A pre-trained RoBERTa masked language model (MLM) adapted on LIAR, a fake-news dataset of roughly 12K short statements. The perplexity of the original pre-trained RoBERTa model on the dataset is 5.957, while the perplexity of the adapted model is 3.918.
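As a minimal usage sketch (assumed, not part of the original card), the adapted checkpoint can be queried through the fill-mask pipeline; the example sentence is an arbitrary placeholder, not drawn from LIAR:

```python
from transformers import pipeline

# Fill-mask inference with the LIAR-adapted RoBERTa checkpoint;
# RoBERTa uses "<mask>" as its mask token.
fill_mask = pipeline("fill-mask", model="Jawaher/LIAR-fake-news-roberta-base")
print(fill_mask("The senator claimed the bill would <mask> taxes."))
```
|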
erickfm/t5-base-finetuned-bias-sweep-4ddf2050 | a2cf1d17b1183fa90733592ed7efa5b88757fe68 | 2022-06-11T09:12:28.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | erickfm | null | erickfm/t5-base-finetuned-bias-sweep-4ddf2050 | 1 | null | transformers | 32,784 | Entry not found |
aware-ai/robust-wav2vec2-xls-r-1b-german | 589c8e3179b472f44d7919c96930e1d4c38522f9 | 2022-06-12T12:34:59.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | aware-ai | null | aware-ai/robust-wav2vec2-xls-r-1b-german | 1 | null | transformers | 32,785 | Entry not found |
shivarama23/swin-tiny-patch4-window7-224-finetuned-image_quality | 2d14522cddcdb2e1b204a4ba59bc41207df01118 | 2022-06-11T11:54:49.000Z | [
"pytorch",
"tensorboard",
"swin",
"image-classification",
"dataset:image_folder",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | shivarama23 | null | shivarama23/swin-tiny-patch4-window7-224-finetuned-image_quality | 1 | null | transformers | 32,786 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-image_quality
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9090909090909091
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-image_quality
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5242
- Accuracy: 0.9091
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 0.6762 | 0.6364 |
| No log | 2.0 | 2 | 0.6309 | 0.7273 |
| No log | 3.0 | 3 | 0.6095 | 0.6364 |
| No log | 4.0 | 4 | 0.5775 | 0.6364 |
| No log | 5.0 | 5 | 0.5443 | 0.8182 |
| No log | 6.0 | 6 | 0.5242 | 0.9091 |
| No log | 7.0 | 7 | 0.5149 | 0.8182 |
| No log | 8.0 | 8 | 0.5094 | 0.8182 |
| No log | 9.0 | 9 | 0.5038 | 0.8182 |
| 0.4095 | 10.0 | 10 | 0.4992 | 0.8182 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
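## How to use
A minimal usage sketch (not from the original card), assuming the fine-tuned checkpoint works with the standard `transformers` image-classification pipeline; `photo.jpg` is a placeholder path.
```python
from transformers import pipeline

# Hypothetical example: load the fine-tuned Swin classifier and score a local image.
classifier = pipeline(
    "image-classification",
    model="shivarama23/swin-tiny-patch4-window7-224-finetuned-image_quality",
)

for prediction in classifier("photo.jpg"):
    print(prediction["label"], round(prediction["score"], 3))
```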
|
lllFaNToMlll/wac2vec-lllfantomlll | 3cc8e8a445d71f79568198e28961ded0ecd99b17 | 2022-06-11T18:07:44.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | lllFaNToMlll | null | lllFaNToMlll/wac2vec-lllfantomlll | 1 | null | transformers | 32,787 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wac2vec-lllfantomlll
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wac2vec-lllfantomlll
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5560
- Wer: 0.3417
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5768 | 1.0 | 500 | 2.0283 | 1.0238 |
| 0.9219 | 2.01 | 1000 | 0.5103 | 0.5022 |
| 0.4497 | 3.01 | 1500 | 0.4746 | 0.4669 |
| 0.3163 | 4.02 | 2000 | 0.4144 | 0.4229 |
| 0.2374 | 5.02 | 2500 | 0.4186 | 0.4161 |
| 0.2033 | 6.02 | 3000 | 0.4115 | 0.3975 |
| 0.1603 | 7.03 | 3500 | 0.4424 | 0.3817 |
| 0.1455 | 8.03 | 4000 | 0.4151 | 0.3918 |
| 0.1276 | 9.04 | 4500 | 0.4940 | 0.3798 |
| 0.108 | 10.04 | 5000 | 0.4580 | 0.3688 |
| 0.1053 | 11.04 | 5500 | 0.4243 | 0.3700 |
| 0.0929 | 12.05 | 6000 | 0.4999 | 0.3727 |
| 0.0896 | 13.05 | 6500 | 0.4991 | 0.3624 |
| 0.0748 | 14.06 | 7000 | 0.4924 | 0.3602 |
| 0.0681 | 15.06 | 7500 | 0.4908 | 0.3544 |
| 0.0619 | 16.06 | 8000 | 0.5021 | 0.3559 |
| 0.0569 | 17.07 | 8500 | 0.5448 | 0.3518 |
| 0.0549 | 18.07 | 9000 | 0.4919 | 0.3508 |
| 0.0478 | 19.08 | 9500 | 0.4704 | 0.3513 |
| 0.0437 | 20.08 | 10000 | 0.5058 | 0.3555 |
| 0.0421 | 21.08 | 10500 | 0.5127 | 0.3489 |
| 0.0362 | 22.09 | 11000 | 0.5439 | 0.3527 |
| 0.0322 | 23.09 | 11500 | 0.5418 | 0.3469 |
| 0.0327 | 24.1 | 12000 | 0.5298 | 0.3422 |
| 0.0292 | 25.1 | 12500 | 0.5511 | 0.3426 |
| 0.0246 | 26.1 | 13000 | 0.5349 | 0.3472 |
| 0.0251 | 27.11 | 13500 | 0.5646 | 0.3391 |
| 0.0214 | 28.11 | 14000 | 0.5821 | 0.3424 |
| 0.0217 | 29.12 | 14500 | 0.5560 | 0.3417 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
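## How to use
A minimal inference sketch (added for illustration, not part of the original card), assuming the checkpoint ships the usual Wav2Vec2 processor and CTC head; the audio array below is a silent placeholder you would replace with real 16 kHz speech.
```python
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Hypothetical example: manual CTC decoding with the fine-tuned checkpoint.
model_id = "lllFaNToMlll/wac2vec-lllfantomlll"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# `speech` should be a 1-D float array sampled at 16 kHz (placeholder shown here).
speech = [0.0] * 16000
inputs = processor(speech, sampling_rate=16000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```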
|
florver/modelo_NLI_kvd_1_1epoch | 4d2f596399b243e43c9204383e37816366224204 | 2022-06-11T11:59:18.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | florver | null | florver/modelo_NLI_kvd_1_1epoch | 1 | null | transformers | 32,788 | Entry not found |
huggingtweets/adrianramy | 96eecd8c5d40b9b1478206db75e3c42d3e846f31 | 2022-06-11T12:12:59.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/adrianramy | 1 | null | transformers | 32,789 | ---
language: en
thumbnail: http://www.huggingtweets.com/adrianramy/1654949574810/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1192394634305134593/kWwF0YSv_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Adri</div>
<div style="text-align: center; font-size: 14px;">@adrianramy</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Adri.
| Data | Adri |
| --- | --- |
| Tweets downloaded | 3050 |
| Retweets | 1585 |
| Short tweets | 275 |
| Tweets kept | 1190 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/30dqbz5d/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @adrianramy's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/16tp54yl) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/16tp54yl/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/adrianramy')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Akshat/xlm-roberta-base-finetuned-panx-de | 98e1fc14b50af5a161d74a34ce754a5e0c95875c | 2022-06-11T13:35:25.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | Akshat | null | Akshat/xlm-roberta-base-finetuned-panx-de | 1 | null | transformers | 32,790 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8611443210930829
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1405
- F1: 0.8611
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2542 | 1.0 | 787 | 0.1788 | 0.8083 |
| 0.1307 | 2.0 | 1574 | 0.1371 | 0.8488 |
| 0.0784 | 3.0 | 2361 | 0.1405 | 0.8611 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
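## How to use
A short usage sketch (not part of the original card), assuming the checkpoint loads with the standard token-classification pipeline; the German example sentence is an assumption.
```python
from transformers import pipeline

# Hypothetical example: run German NER with the fine-tuned XLM-R checkpoint.
ner = pipeline(
    "token-classification",
    model="Akshat/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)

for entity in ner("Angela Merkel besuchte Berlin im Mai."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```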
|
gary109/ai-light-dance_singing_pretrain_wav2vec2-large-lv60-5gram | ebaf7589260e33467575a3a1d6b08aba9733db0c | 2022-06-11T12:35:15.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | gary109 | null | gary109/ai-light-dance_singing_pretrain_wav2vec2-large-lv60-5gram | 1 | null | transformers | 32,791 | Entry not found |
finiteautomata/pepe-5k_nodiff | 27a1d2f9b0243a2492e1f32806d881fe32ece0c9 | 2022-06-11T15:17:59.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | finiteautomata | null | finiteautomata/pepe-5k_nodiff | 1 | null | transformers | 32,792 | Entry not found |
florver/modelo_NLI_kvd_2_8000 | 86cd359eb9924f36673ab7c35045c5b532c705b4 | 2022-06-11T17:35:50.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | florver | null | florver/modelo_NLI_kvd_2_8000 | 1 | null | transformers | 32,793 | Entry not found |
abdoutony207/m2m100_418M-evaluated-en-to-ar-2000instancesopus-leaningRate2e-05-batchSize16-20epoch-1 | 075155baf7d1825b9408e94c5aab18bfc4d71e93 | 2022-06-11T16:26:19.000Z | [
"pytorch",
"tensorboard",
"m2m_100",
"text2text-generation",
"dataset:opus100",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | abdoutony207 | null | abdoutony207/m2m100_418M-evaluated-en-to-ar-2000instancesopus-leaningRate2e-05-batchSize16-20epoch-1 | 1 | null | transformers | 32,794 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- opus100
metrics:
- bleu
model-index:
- name: m2m100_418M-evaluated-en-to-ar-2000instancesopus-leaningRate2e-05-batchSize16-20epoch-1
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: opus100
type: opus100
args: ar-en
metrics:
- name: Bleu
type: bleu
value: 13.1835
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# m2m100_418M-evaluated-en-to-ar-2000instancesopus-leaningRate2e-05-batchSize16-20epoch-1
This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on the opus100 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3640
- Bleu: 13.1835
- Meteor: 0.1189
- Gen Len: 17.72
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Meteor | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|
| 6.1776 | 1.0 | 100 | 3.8904 | 10.5866 | 0.0995 | 16.64 |
| 2.4531 | 2.0 | 200 | 1.0928 | 12.3452 | 0.1108 | 17.0575 |
| 0.512 | 3.0 | 300 | 0.3625 | 10.5224 | 0.0982 | 17.2575 |
| 0.1924 | 4.0 | 400 | 0.3342 | 12.4242 | 0.1098 | 16.6325 |
| 0.1227 | 5.0 | 500 | 0.3403 | 13.0526 | 0.1185 | 17.3475 |
| 0.0889 | 6.0 | 600 | 0.3481 | 13.1323 | 0.1133 | 17.815 |
| 0.0651 | 7.0 | 700 | 0.3601 | 12.6684 | 0.1133 | 17.3525 |
| 0.0533 | 8.0 | 800 | 0.3640 | 13.1835 | 0.1189 | 17.72 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
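## How to use
A minimal translation sketch (added for illustration, not from the original card), assuming the fine-tuned checkpoint keeps the standard M2M100 tokenizer so the target language can be forced to Arabic.
```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

# Hypothetical example: translate English to Arabic with the fine-tuned checkpoint.
model_id = "abdoutony207/m2m100_418M-evaluated-en-to-ar-2000instancesopus-leaningRate2e-05-batchSize16-20epoch-1"
tokenizer = M2M100Tokenizer.from_pretrained(model_id)
model = M2M100ForConditionalGeneration.from_pretrained(model_id)

tokenizer.src_lang = "en"
inputs = tokenizer("How are you today?", return_tensors="pt")
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.get_lang_id("ar"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```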
|
aprischa/bart-large-cnn-aprischa | 3ac58b6a029f4558cf6805f613dd028cd3ede75b | 2022-06-11T17:21:57.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | aprischa | null | aprischa/bart-large-cnn-aprischa | 1 | null | transformers | 32,795 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-aprischa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-aprischa
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3589
- Rouge1: 66.7098
- Rouge2: 57.7992
- Rougel: 63.2231
- Rougelsum: 65.9009
- Gen Len: 141.198
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 0.369 | 1.0 | 5403 | 0.3835 | 66.0604 | 56.9948 | 62.4967 | 65.265 | 141.1126 |
| 0.2985 | 2.0 | 10806 | 0.3589 | 66.7098 | 57.7992 | 63.2231 | 65.9009 | 141.198 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
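## How to use
A minimal usage sketch (not part of the original card), assuming the checkpoint works with the standard summarization pipeline; the input text is a placeholder.
```python
from transformers import pipeline

# Hypothetical example: summarize a long document with the fine-tuned BART checkpoint.
summarizer = pipeline("summarization", model="aprischa/bart-large-cnn-aprischa")

long_text = "Replace this placeholder with the document you want to summarize."
summary = summarizer(long_text, max_length=142, min_length=30, do_sample=False)
print(summary[0]["summary_text"])
```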
|
aprischa/bart-large-cnn-aprischa2 | 774332492a49b5c42047529f4b7dadb4b7707dcd | 2022-06-11T23:27:38.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | aprischa | null | aprischa/bart-large-cnn-aprischa2 | 1 | null | transformers | 32,796 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-aprischa2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-aprischa2
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3425
- Rouge1: 65.7088
- Rouge2: 56.6701
- Rougel: 62.1926
- Rougelsum: 64.7727
- Gen Len: 140.8469
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 0.3772 | 1.0 | 5403 | 0.3586 | 65.7702 | 56.7968 | 62.264 | 64.8605 | 140.268 |
| 0.316 | 2.0 | 10806 | 0.3421 | 64.8238 | 55.8837 | 61.3245 | 63.8894 | 140.7472 |
| 0.2397 | 3.0 | 16209 | 0.3425 | 65.7088 | 56.6701 | 62.1926 | 64.7727 | 140.8469 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
abdoutony207/m2m100_418M-evaluated-en-to-ar-4000instancesopus-leaningRate2e-05-batchSize8-11epoch-3 | b7fca3cb543639c16c368f80e8d2e8747ff01067 | 2022-06-11T19:20:41.000Z | [
"pytorch",
"tensorboard",
"m2m_100",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | abdoutony207 | null | abdoutony207/m2m100_418M-evaluated-en-to-ar-4000instancesopus-leaningRate2e-05-batchSize8-11epoch-3 | 1 | null | transformers | 32,797 | Entry not found |
huggingtweets/mdoukmas | 32e81d430e16ee21ed1cee6ee6aab89d886fa060 | 2022-06-11T19:35:54.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/mdoukmas | 1 | null | transformers | 32,798 | ---
language: en
thumbnail: http://www.huggingtweets.com/mdoukmas/1654976150184/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1098660288193269762/n5v9daol_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Maya Dukmasova</div>
<div style="text-align: center; font-size: 14px;">@mdoukmas</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Maya Dukmasova.
| Data | Maya Dukmasova |
| --- | --- |
| Tweets downloaded | 3241 |
| Retweets | 896 |
| Short tweets | 158 |
| Tweets kept | 2187 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2jwhv7l5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mdoukmas's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/25v3pmsy) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/25v3pmsy/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mdoukmas')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
meghazisofiane/opus-mt-en-ar-evaluated-en-to-ar-2000instancesopus-leaningRate2e-05-batchSize8-11epoch-3 | 3f957df6afd40bd4e30555f6e00c8c104d9dc8a7 | 2022-06-11T21:27:25.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"dataset:opus100",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | meghazisofiane | null | meghazisofiane/opus-mt-en-ar-evaluated-en-to-ar-2000instancesopus-leaningRate2e-05-batchSize8-11epoch-3 | 1 | null | transformers | 32,799 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- opus100
metrics:
- bleu
model-index:
- name: opus-mt-en-ar-evaluated-en-to-ar-2000instancesopus-leaningRate2e-05-batchSize8-11epoch-3
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: opus100
type: opus100
args: ar-en
metrics:
- name: Bleu
type: bleu
value: 26.2629
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ar-evaluated-en-to-ar-2000instancesopus-leaningRate2e-05-batchSize8-11epoch-3
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ar](https://huggingface.co/Helsinki-NLP/opus-mt-en-ar) on the opus100 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1959
- Bleu: 26.2629
- Meteor: 0.1703
- Gen Len: 11.0925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 11
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Meteor | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|
| 1.0519 | 0.5 | 100 | 0.1985 | 27.3525 | 0.1815 | 11.0725 |
| 0.1947 | 1.0 | 200 | 0.1902 | 26.9728 | 0.1789 | 10.82 |
| 0.1489 | 1.5 | 300 | 0.1910 | 27.7003 | 0.1811 | 10.975 |
| 0.1665 | 2.0 | 400 | 0.1905 | 26.3739 | 0.1772 | 11.1075 |
| 0.1321 | 2.5 | 500 | 0.1926 | 26.752 | 0.1772 | 10.975 |
| 0.1271 | 3.0 | 600 | 0.1927 | 27.3663 | 0.1751 | 10.9725 |
| 0.1105 | 3.5 | 700 | 0.1952 | 27.134 | 0.1738 | 10.9975 |
| 0.109 | 4.0 | 800 | 0.1959 | 26.2629 | 0.1703 | 11.0925 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
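## How to use
A short usage sketch (added for illustration, not from the original card), assuming the fine-tuned Marian checkpoint loads with the standard translation pipeline.
```python
from transformers import pipeline

# Hypothetical example: English-to-Arabic translation with the fine-tuned checkpoint.
translator = pipeline(
    "translation",
    model="meghazisofiane/opus-mt-en-ar-evaluated-en-to-ar-2000instancesopus-leaningRate2e-05-batchSize8-11epoch-3",
)

print(translator("The weather is nice today.")[0]["translation_text"])
```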
|