modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
nlokam99/ada_sample_2 | nlokam99 | 2022-06-12T17:40:42Z | 105 | 0 | transformers | [transformers, pytorch, gpt2, text-generation, conversational, license:mit, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us] | text-generation | 2022-06-12T17:38:56Z
---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
tags:
- conversational
license: mit
---

obokkkk/kc-bert_finetuned_unsmile | obokkkk | 2022-06-12T17:22:32Z | 105 | 0 | transformers | [transformers, pytorch, tensorboard, bert, text-classification, generated_from_trainer, autotrain_compatible, endpoints_compatible, region:us] | text-classification | 2022-06-12T14:39:40Z
---
tags:
- generated_from_trainer
model-index:
- name: kc-bert_finetuned_unsmile
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kc-bert_finetuned_unsmile
This model is a fine-tuned version of [beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1326
- Lrap: 0.8753
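Since LRAP is a multi-label ranking metric, the checkpoint is presumably a multi-label classifier; a minimal inference sketch (the Korean example sentence is illustrative, and `top_k=None` returns scores for every label so thresholds can be applied downstream):
```python
from transformers import pipeline

# Assumption: multi-label head, inferred from the LRAP metric above.
classifier = pipeline("text-classification", model="obokkkk/kc-bert_finetuned_unsmile", top_k=None)
print(classifier("이 문장은 예시입니다."))  # "This sentence is an example."
```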
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Lrap |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 235 | 0.1458 | 0.8612 |
| No log | 2.0 | 470 | 0.1280 | 0.8738 |
| 0.1685 | 3.0 | 705 | 0.1257 | 0.8791 |
| 0.1685 | 4.0 | 940 | 0.1281 | 0.8777 |
| 0.0774 | 5.0 | 1175 | 0.1326 | 0.8753 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 1.17.0
- Tokenizers 0.12.1

huggingtweets/warriors | huggingtweets | 2022-06-12T15:38:14Z | 105 | 0 | transformers | [transformers, pytorch, gpt2, text-generation, huggingtweets, en, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us] | text-generation | 2022-06-12T15:36:47Z
---
language: en
thumbnail: http://www.huggingtweets.com/warriors/1655048290751/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1533845175725719553/yvzbj8iG_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Golden State Warriors</div>
<div style="text-align: center; font-size: 14px;">@warriors</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Golden State Warriors.
| Data | Golden State Warriors |
| --- | --- |
| Tweets downloaded | 3251 |
| Retweets | 261 |
| Short tweets | 563 |
| Tweets kept | 2427 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/36p28s9n/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @warriors's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/17arirrx) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/17arirrx/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/warriors')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)

Doohae/msmarco-passage-encoder-v0 | Doohae | 2022-06-12T15:00:45Z | 0 | 0 | null | [region:us] | null | 2022-06-12T14:43:09Z
Passage encoder trained on the Tevatron small sample dataset (3 epochs).

kravchenko/uk-mt5-small | kravchenko | 2022-06-12T14:56:53Z | 21 | 0 | transformers | [transformers, pytorch, mt5, text2text-generation, uk, en, autotrain_compatible, endpoints_compatible, region:us] | text2text-generation | 2022-06-12T12:47:27Z
---
language:
- uk
- en
tags:
- mt5
---
The aim is to compress the mT5-small model so that it keeps only Ukrainian and some basic English.
This reproduces the result of [this](https://towardsdatascience.com/how-to-adapt-a-multilingual-t5-model-for-a-single-language-b9f94f3d9c90) Medium article, applied to a different language.
Results:
- 300M params -> 75M params (a 75% reduction)
- 250K tokens -> 8,900 tokens
- 1.1GB model -> 0.3GB model
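A minimal loading sketch (assuming the standard mT5 text-to-text interface; the Ukrainian prompt is illustrative, and as with any raw T5-style checkpoint, task-specific fine-tuning is expected before the output is useful):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("kravchenko/uk-mt5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("kravchenko/uk-mt5-small")

# The shrunken vocabulary keeps only Ukrainian plus basic English tokens.
inputs = tokenizer("Привіт, світ!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```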

Doohae/msmarco-query-encoder-v0 | Doohae | 2022-06-12T14:52:52Z | 0 | 0 | null | [region:us] | null | 2022-06-12T14:42:45Z
Query encoder trained on the Tevatron small sample dataset (3 epochs).
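A hedged retrieval sketch pairing this query encoder with the passage encoder above. It assumes both checkpoints are standard BERT-style encoders in Transformers format with CLS pooling and dot-product scoring (the usual Tevatron/DPR setup); verify against the repos before relying on it:
```python
import torch
from transformers import AutoTokenizer, AutoModel

# Assumed: CLS pooling and dot-product scoring, as in typical Tevatron/DPR setups.
tok = AutoTokenizer.from_pretrained("Doohae/msmarco-query-encoder-v0")
q_enc = AutoModel.from_pretrained("Doohae/msmarco-query-encoder-v0")
p_enc = AutoModel.from_pretrained("Doohae/msmarco-passage-encoder-v0")

q = tok("what is dense retrieval?", return_tensors="pt")
p = tok("Dense retrieval encodes queries and passages as vectors.", return_tensors="pt")
with torch.no_grad():
    q_vec = q_enc(**q).last_hidden_state[:, 0]  # CLS vector
    p_vec = p_enc(**p).last_hidden_state[:, 0]
print((q_vec @ p_vec.T).item())  # dot-product relevance score
```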

nestoralvaro/mt5-base-finetuned-xsum-RAW_data_prep_2021_12_26___t55_403.csv__google_mt5_base | nestoralvaro | 2022-06-12T12:25:16Z | 103 | 0 | transformers | [transformers, pytorch, tensorboard, mt5, text2text-generation, generated_from_trainer, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | text2text-generation | 2022-06-12T10:01:09Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-base-finetuned-xsum-RAW_data_prep_2021_12_26___t55_403.csv__google_mt5_base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-xsum-RAW_data_prep_2021_12_26___t55_403.csv__google_mt5_base
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 0.9712
- Rouge2: 0.1329
- Rougel: 0.9638
- Rougelsum: 0.9675
- Gen Len: 6.4489
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.0 | 1.0 | 36479 | nan | 0.9712 | 0.1329 | 0.9638 | 0.9675 | 6.4489 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1

ilhami/Tr_En-MbartFinetune | ilhami | 2022-06-12T12:01:16Z | 19 | 0 | transformers | [transformers, pytorch, mbart, text2text-generation, translation, tr, en, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | translation | 2022-06-12T10:02:23Z
---
language:
- tr
- en
tags:
- translation
license: apache-2.0
datasets:
- Parallel Corpora for Turkish-English Academic Translations
metrics:
- bleu
- sacrebleu
---
## Model Details
- **Developed by:** İlhami SEL
- **Model type:** mBART fine-tuned for machine translation
- **Language:** Turkish - English
- **Resources for more information:** Sel, İ., Üzen, H. & Hanbay, D. (2021). Creating a Parallel Corpora for Turkish-English Academic Translations. Computer Science, 5th International Artificial Intelligence and Data Processing Symposium, 335-340. DOI: 10.53070/bbd.990959
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "ilhami/Tr_En-MbartFinetune"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint).to("cuda")
tokenizer.src_lang = "tr_TR"
tr = ["Sohbet robotları son yıllarda yaygın bir şekilde kullanılmaya başlanmıştır. ",
"İnsanları taklit eden ve daha iyi müşteri memnuniyeti sağlayan sohbet robotları en gelişkin doğal dil işleme tekniklerine ihtiyaç duymaktadır. ",
"Bu çalışma sohbet robotu konuşmalarının niyet tahminini geliştirmeye odaklanmıştır." ,
"Kelime gösterimi için TF-IDF, Doc2vec ve BERT gibi geleneksel ve gelişmiş doğal dil işleme yöntemleri, çoklu sınıf ve çoklu etiket tahmini için ise lojistik regresyon, rastgele orman ve yapay sinir ağları kullanılmıştır." ,
"Sohbet robotu konuşma veri kümeleri, sinema bileti rezervasyonu, restoran rezervasyonu ve taksi çağırma olmak üzere üç farklı alandan alınmıştır. ",
"Bu çalışmanın sonunda, BERT ve BERT ile TF-IDF birleşimi modellerin diğer kombinasyonlardan daha iyi sonuç verdiği görülmüştür. ",
"BERT gibi ön eğitimli modellerden faydalanmanın daha iyi bağlamsal anlama sağladığı ortaya çıkmıştır. ",
"TF-IDF yerleştirmeleri, BERT gösterimi ile birleştirilerek niyet kategorisi tahmininin iyileştirilmesi amaçlanmıştır."]
encoded_tr = tokenizer(tr, return_tensors="pt", padding=True, truncation=True).to("cuda")
generated_tokens = model.generate(**encoded_tr, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"])
en = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
```

mgfrantz/dql-SpaceInvadersNoFrameskip-v4 | mgfrantz | 2022-06-12T11:13:41Z | 7 | 0 | stable-baselines3 | [stable-baselines3, SpaceInvadersNoFrameskip-v4, deep-reinforcement-learning, reinforcement-learning, model-index, region:us] | reinforcement-learning | 2022-06-12T11:12:58Z
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 1003.50 +/- 404.22
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mgfrantz -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mgfrantz
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```

eunbeee/ainize-kobart-news-eb-finetuned-meetings-papers | eunbeee | 2022-06-12T11:02:29Z | 105 | 1 | transformers | [transformers, pytorch, tensorboard, bart, text2text-generation, generated_from_trainer, license:mit, autotrain_compatible, endpoints_compatible, region:us] | text2text-generation | 2022-06-12T08:37:23Z
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: ainize-kobart-news-eb-finetuned-meetings-papers
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ainize-kobart-news-eb-finetuned-meetings-papers
This model is a fine-tuned version of [ainize/kobart-news](https://huggingface.co/ainize/kobart-news) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3289
- Rouge1: 17.3988
- Rouge2: 7.0454
- Rougel: 17.3877
- Rougelsum: 17.42
- Gen Len: 19.9473
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 0.1402 | 1.0 | 7588 | 0.2930 | 17.1421 | 7.0141 | 17.1211 | 17.1473 | 19.9374 |
| 0.0997 | 2.0 | 15176 | 0.2842 | 17.1692 | 6.8824 | 17.1557 | 17.1985 | 19.9435 |
| 0.0692 | 3.0 | 22764 | 0.3052 | 17.4241 | 7.1083 | 17.4028 | 17.4472 | 19.9453 |
| 0.0556 | 4.0 | 30352 | 0.3289 | 17.3988 | 7.0454 | 17.3877 | 17.42 | 19.9473 |
| 0.0533 | 5.0 | 37940 | 0.3289 | 17.3988 | 7.0454 | 17.3877 | 17.42 | 19.9473 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1

huggingtweets/manfightdragon | huggingtweets | 2022-06-12T10:26:35Z | 106 | 0 | transformers | [transformers, pytorch, gpt2, text-generation, huggingtweets, en, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us] | text-generation | 2022-06-12T10:23:38Z
---
language: en
thumbnail: http://www.huggingtweets.com/manfightdragon/1655029573001/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1184073162520031232/V6DOEeLp_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Lance McDonald</div>
<div style="text-align: center; font-size: 14px;">@manfightdragon</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Lance McDonald.
| Data | Lance McDonald |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 209 |
| Short tweets | 214 |
| Tweets kept | 2826 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3pc794z5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @manfightdragon's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2t8940p5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2t8940p5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/manfightdragon')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)

z-uo/vits-commonvoice9.0 | z-uo | 2022-06-12T09:46:23Z | 1 | 0 | transformers | [transformers, tensorboard, text-to-speech, it, dataset:mozilla-foundation/common_voice_9_0, endpoints_compatible, region:us] | text-to-speech | 2022-06-12T07:07:07Z
---
tags:
- text-to-speech
language:
- it
model-index:
- name: vits-commonvoice9.0
results: []
datasets:
- mozilla-foundation/common_voice_9_0
---
# Common Voice it Vits
Trained on [Mozilla Common Voice](https://commonvoice.mozilla.org/) v9.0 Italian with [Coqui VITS](https://github.com/coqui-ai/TTS).
```
# Coqui TTS commit: 0cf3265a4686d7e856bd472cdaf1572d61cab2b8
PYTORCH_CUDA_ALLOC_CONF="max_split_size_mb:25" CUDA_VISIBLE_DEVICES=1 python recipes/common_voice/vits/train_vits.py
CUDA_VISIBLE_DEVICES=0 tts-server --model_path "/run/media/opensuse/Barracuda/Models/TTS_new/trained_common_voice/vits_vctk-June-05-2022_03+45PM-0cf3265a/best_model.pth" --config_path "/run/media/opensuse/Barracuda/Models/TTS_new/trained_common_voice/vits_vctk-June-05-2022_03+45PM-0cf3265a/config.json"
```

huggingtweets/bosstjanz | huggingtweets | 2022-06-12T09:27:34Z | 105 | 0 | transformers | [transformers, pytorch, gpt2, text-generation, huggingtweets, en, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us] | text-generation | 2022-06-12T09:26:54Z
---
language: en
thumbnail: http://www.huggingtweets.com/bosstjanz/1655026050127/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1342130927737176064/SiNG_CxQ_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Zrimškow</div>
<div style="text-align: center; font-size: 14px;">@bosstjanz</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Zrimškow.
| Data | Zrimškow |
| --- | --- |
| Tweets downloaded | 3225 |
| Retweets | 368 |
| Short tweets | 279 |
| Tweets kept | 2578 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/23nemiqj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bosstjanz's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2pjrymzt) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2pjrymzt/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/bosstjanz')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)

ironbar/dqn-SpaceInvadersNoFrameskip-v4-1M-steps | ironbar | 2022-06-12T08:16:08Z | 11 | 0 | stable-baselines3 | [stable-baselines3, SpaceInvadersNoFrameskip-v4, deep-reinforcement-learning, reinforcement-learning, model-index, region:us] | reinforcement-learning | 2022-06-12T08:15:30Z
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 629.50 +/- 140.06
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ironbar -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga ironbar
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```

spuun/kekbot-mini | spuun | 2022-06-12T05:53:59Z | 106 | 0 | transformers | [transformers, pytorch, gpt2, text-generation, en, license:cc-by-nc-sa-4.0, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us] | text-generation | 2022-06-12T03:40:33Z
---
language:
- en
metrics:
- accuracy
co2_eq_emissions:
emissions: "10"
source: "mlco2.github.io"
training_type: "fine-tuning"
geographical_location: "West Java, Indonesia"
hardware_used: "1 T4"
license: cc-by-nc-sa-4.0
widget:
- text: 'You: "Hey kekbot! Whats up?"\nKekbot: "'
example_title: "Asking what's up"
- text: 'You: "Hey kekbot! How r u?"\nKekbot: "'
example_title: "Asking how he is"
---
> THIS MODEL IS INTENDED FOR RESEARCH PURPOSES ONLY
# Kekbot Mini
Based on a `distilgpt2` model, fine-tuned on a select subset (65k or more messages) of Art Union's general-chat channel history.
### Limits and biases
As the model is trained on chat history, it may output discriminatory or even offensive material.
The author maintains that ML models are merely statistical representations of the dataset used to train them,
and that, given the nature of the dataset, it is practically impossible to be certain of
the degree of "cleanliness" of the data it contains.
The author can confirm, however, that in heuristic testing the model produced nothing he found offensive;
hopefully that opinion holds true for everyone in the audience.
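A minimal generation sketch using the prompt format from the widget examples in the metadata (the sampling settings are illustrative, not the author's):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="spuun/kekbot-mini")
prompt = 'You: "Hey kekbot! Whats up?"\nKekbot: "'
# do_sample and max_new_tokens are illustrative choices.
print(generator(prompt, do_sample=True, max_new_tokens=40)[0]["generated_text"])
```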

ahmeddbahaa/arabert2arabert-finetuned-ar-wikilingua | ahmeddbahaa | 2022-06-12T05:51:47Z | 21 | 0 | transformers | [transformers, pytorch, tensorboard, encoder-decoder, text2text-generation, summarization, ar, arabert, arabert2arabert, Abstractive Summarization, generated_from_trainer, dataset:wiki_lingua, autotrain_compatible, endpoints_compatible, region:us] | summarization | 2022-06-12T01:03:07Z
---
tags:
- summarization
- ar
- encoder-decoder
- arabert
- arabert2arabert
- Abstractive Summarization
- generated_from_trainer
datasets:
- wiki_lingua
model-index:
- name: arabert2arabert-finetuned-ar-wikilingua
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# arabert2arabert-finetuned-ar-wikilingua
This model is an AraBERT-to-AraBERT encoder-decoder model fine-tuned on the wiki_lingua dataset.
It achieves the following results on the evaluation set:
- Loss: 4.6877
- Rouge-1: 13.2
- Rouge-2: 3.43
- Rouge-l: 12.45
- Gen Len: 20.0
- Bertscore: 64.88
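A minimal summarization sketch, assuming the checkpoint loads as a Transformers `EncoderDecoderModel` (per the `encoder-decoder` tag); the generation length mirrors the Gen Len of 20 reported above:
```python
from transformers import AutoTokenizer, EncoderDecoderModel

model_id = "ahmeddbahaa/arabert2arabert-finetuned-ar-wikilingua"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = EncoderDecoderModel.from_pretrained(model_id)

text = "..."  # an Arabic article to summarize (placeholder)
inputs = tokenizer(text, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=20)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```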
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 8
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-l | Gen Len | Bertscore |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:|
| 6.7667 | 1.0 | 156 | 5.3846 | 3.36 | 0.56 | 3.27 | 20.0 | 60.6 |
| 5.257 | 2.0 | 312 | 5.0424 | 5.44 | 0.88 | 5.35 | 20.0 | 60.56 |
| 4.743 | 3.0 | 468 | 4.8294 | 9.21 | 1.8 | 8.93 | 20.0 | 62.91 |
| 4.3832 | 4.0 | 624 | 4.7240 | 9.88 | 2.19 | 9.6 | 20.0 | 62.65 |
| 4.1166 | 5.0 | 780 | 4.6861 | 11.61 | 2.86 | 11.13 | 20.0 | 63.71 |
| 3.91 | 6.0 | 936 | 4.6692 | 12.27 | 3.11 | 11.76 | 20.0 | 64.07 |
| 3.7569 | 7.0 | 1092 | 4.6805 | 12.93 | 3.38 | 12.28 | 20.0 | 64.61 |
| 3.6454 | 8.0 | 1248 | 4.6877 | 13.2 | 3.43 | 12.45 | 20.0 | 64.88 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1

bguan/SpaceInvadersNoFrameskip-v4-2Msteps | bguan | 2022-06-12T05:15:59Z | 1 | 0 | stable-baselines3 | [stable-baselines3, SpaceInvadersNoFrameskip-v4, deep-reinforcement-learning, reinforcement-learning, model-index, region:us] | reinforcement-learning | 2022-06-12T05:15:25Z
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 550.00 +/- 150.17
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga bguan -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga bguan
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('buffer_size', 400000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 2000000),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```

huggingtweets/tayplaysgaymes | huggingtweets | 2022-06-12T03:56:41Z | 105 | 0 | transformers | [transformers, pytorch, gpt2, text-generation, huggingtweets, en, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us] | text-generation | 2022-06-12T03:55:39Z
---
language: en
thumbnail: http://www.huggingtweets.com/tayplaysgaymes/1655006196516/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1144053838459969536/lv3yBmoX_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Tay</div>
<div style="text-align: center; font-size: 14px;">@tayplaysgaymes</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Tay.
| Data | Tay |
| --- | --- |
| Tweets downloaded | 3212 |
| Retweets | 693 |
| Short tweets | 367 |
| Tweets kept | 2152 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1hmextiq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tayplaysgaymes's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3r0cse8x) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3r0cse8x/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/tayplaysgaymes')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)

meghazisofiane/opus-mt-en-ar-evaluated-en-to-ar-2000instances-un_multi-leaningRate2e-05-batchSize8-11-action-1 | meghazisofiane | 2022-06-12T00:44:37Z | 7 | 0 | transformers | [transformers, pytorch, tensorboard, marian, text2text-generation, generated_from_trainer, dataset:un_multi, license:apache-2.0, model-index, autotrain_compatible, endpoints_compatible, region:us] | text2text-generation | 2022-06-12T00:34:12Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- un_multi
metrics:
- bleu
model-index:
- name: opus-mt-en-ar-evaluated-en-to-ar-2000instances-un_multi-leaningRate2e-05-batchSize8-11-action-1
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: un_multi
type: un_multi
args: ar-en
metrics:
- name: Bleu
type: bleu
value: 53.0137
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ar-evaluated-en-to-ar-2000instances-un_multi-leaningRate2e-05-batchSize8-11-action-1
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ar](https://huggingface.co/Helsinki-NLP/opus-mt-en-ar) on the un_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1873
- Bleu: 53.0137
- Meteor: 0.5005
- Gen Len: 25.845
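A minimal translation sketch (the `translation` pipeline picks up Marian's task defaults from the config; the English sentence is illustrative):
```python
from transformers import pipeline

translator = pipeline(
    "translation",
    model="meghazisofiane/opus-mt-en-ar-evaluated-en-to-ar-2000instances-un_multi-leaningRate2e-05-batchSize8-11-action-1",
)
print(translator("The General Assembly adopted the resolution.")[0]["translation_text"])
```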
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 11
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Meteor | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|
| 0.6585 | 0.5 | 100 | 0.2085 | 52.5874 | 0.4969 | 25.485 |
| 0.1802 | 1.0 | 200 | 0.1788 | 52.9434 | 0.4982 | 25.1725 |
| 0.1501 | 1.5 | 300 | 0.1683 | 53.6994 | 0.5033 | 25.625 |
| 0.1454 | 2.0 | 400 | 0.1706 | 53.3946 | 0.5005 | 25.6675 |
| 0.1193 | 2.5 | 500 | 0.1774 | 53.2011 | 0.4982 | 25.58 |
| 0.1194 | 3.0 | 600 | 0.1741 | 53.8651 | 0.5026 | 25.5775 |
| 0.1002 | 3.5 | 700 | 0.1878 | 53.1332 | 0.5005 | 25.8975 |
| 0.0979 | 4.0 | 800 | 0.1881 | 52.5989 | 0.4974 | 25.485 |
| 0.0807 | 4.5 | 900 | 0.1873 | 53.0137 | 0.5005 | 25.845 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1

huggingtweets/laserboat999 | huggingtweets | 2022-06-11T23:53:52Z | 105 | 0 | transformers | [transformers, pytorch, gpt2, text-generation, huggingtweets, en, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us] | text-generation | 2022-06-11T23:49:07Z
---
language: en
thumbnail: http://www.huggingtweets.com/laserboat999/1654991516445/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1500274766195793921/bA4siut7_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">donald boat</div>
<div style="text-align: center; font-size: 14px;">@laserboat999</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from donald boat.
| Data | donald boat |
| --- | --- |
| Tweets downloaded | 3233 |
| Retweets | 75 |
| Short tweets | 516 |
| Tweets kept | 2642 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/38v40fpf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @laserboat999's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/pk1xum9h) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/pk1xum9h/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/laserboat999')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)

745H1N/LunarLander-v2-DQN-optuna | 745H1N | 2022-06-11T23:36:51Z | 0 | 0 | stable-baselines3 | [stable-baselines3, LunarLander-v2, deep-reinforcement-learning, reinforcement-learning, model-index, region:us] | reinforcement-learning | 2022-06-11T23:36:25Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: -140.18 +/- 41.67
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **DQN** Agent playing **LunarLander-v2**
This is a trained model of a **DQN** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is a guess; check the repo's file list for the actual name):
```python
from stable_baselines3 import DQN
from huggingface_sb3 import load_from_hub

# NOTE: the filename is hypothetical -- verify it against the repo's files.
checkpoint = load_from_hub(repo_id="745H1N/LunarLander-v2-DQN-optuna", filename="dqn-LunarLander-v2.zip")
model = DQN.load(checkpoint)
```

aprischa/bart-large-cnn-aprischa2 | aprischa | 2022-06-11T23:27:38Z | 105 | 0 | transformers | [transformers, pytorch, tensorboard, bart, text2text-generation, generated_from_trainer, license:mit, autotrain_compatible, endpoints_compatible, region:us] | text2text-generation | 2022-06-11T17:40:18Z
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-aprischa2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-aprischa2
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3425
- Rouge1: 65.7088
- Rouge2: 56.6701
- Rougel: 62.1926
- Rougelsum: 64.7727
- Gen Len: 140.8469
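A minimal usage sketch (the length bounds are illustrative; the Gen Len of ~140 above suggests the model produces long summaries):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="aprischa/bart-large-cnn-aprischa2")
article = "..."  # a long input document (placeholder)
print(summarizer(article, max_length=142, min_length=56)[0]["summary_text"])
```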
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 0.3772 | 1.0 | 5403 | 0.3586 | 65.7702 | 56.7968 | 62.264 | 64.8605 | 140.268 |
| 0.316 | 2.0 | 10806 | 0.3421 | 64.8238 | 55.8837 | 61.3245 | 63.8894 | 140.7472 |
| 0.2397 | 3.0 | 16209 | 0.3425 | 65.7088 | 56.6701 | 62.1926 | 64.7727 | 140.8469 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1

tjscollins/q-Taxi-v3 | tjscollins | 2022-06-11T21:37:49Z | 0 | 0 | null | [Taxi-v3, q-learning, reinforcement-learning, custom-implementation, model-index, region:us] | reinforcement-learning | 2022-06-11T21:00:50Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 12.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="tjscollins/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```

meghazisofiane/opus-mt-en-ar-evaluated-en-to-ar-2000instancesopus-leaningRate2e-05-batchSize8-11epoch-3 | meghazisofiane | 2022-06-11T21:27:25Z | 107 | 0 | transformers | [transformers, pytorch, tensorboard, marian, text2text-generation, generated_from_trainer, dataset:opus100, license:apache-2.0, model-index, autotrain_compatible, endpoints_compatible, region:us] | text2text-generation | 2022-06-11T19:41:31Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- opus100
metrics:
- bleu
model-index:
- name: opus-mt-en-ar-evaluated-en-to-ar-2000instancesopus-leaningRate2e-05-batchSize8-11epoch-3
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: opus100
type: opus100
args: ar-en
metrics:
- name: Bleu
type: bleu
value: 26.2629
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ar-evaluated-en-to-ar-2000instancesopus-leaningRate2e-05-batchSize8-11epoch-3
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ar](https://huggingface.co/Helsinki-NLP/opus-mt-en-ar) on the opus100 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1959
- Bleu: 26.2629
- Meteor: 0.1703
- Gen Len: 11.0925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 11
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Meteor | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|
| 1.0519 | 0.5 | 100 | 0.1985 | 27.3525 | 0.1815 | 11.0725 |
| 0.1947 | 1.0 | 200 | 0.1902 | 26.9728 | 0.1789 | 10.82 |
| 0.1489 | 1.5 | 300 | 0.1910 | 27.7003 | 0.1811 | 10.975 |
| 0.1665 | 2.0 | 400 | 0.1905 | 26.3739 | 0.1772 | 11.1075 |
| 0.1321 | 2.5 | 500 | 0.1926 | 26.752 | 0.1772 | 10.975 |
| 0.1271 | 3.0 | 600 | 0.1927 | 27.3663 | 0.1751 | 10.9725 |
| 0.1105 | 3.5 | 700 | 0.1952 | 27.134 | 0.1738 | 10.9975 |
| 0.109 | 4.0 | 800 | 0.1959 | 26.2629 | 0.1703 | 11.0925 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1

lindeberg/distilbert-base-uncased-finetuned-cola | lindeberg | 2022-06-11T21:10:06Z | 106 | 0 | transformers | [transformers, pytorch, tensorboard, distilbert, text-classification, generated_from_trainer, dataset:glue, license:apache-2.0, model-index, autotrain_compatible, endpoints_compatible, region:us] | text-classification | 2022-06-11T18:50:58Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.4496664370323995
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4949
- Matthews Correlation: 0.4497
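A minimal inference sketch (label names depend on the exported config; for CoLA fine-tunes they are often `LABEL_0` = unacceptable and `LABEL_1` = acceptable):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="lindeberg/distilbert-base-uncased-finetuned-cola")
print(classifier("The book was written by Mary."))  # grammatical-acceptability score
```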
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5231 | 1.0 | 535 | 0.4949 | 0.4497 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1

tjscollins/q-FrozenLake-v1-4x4-noSlippery | tjscollins | 2022-06-11T20:25:47Z | 0 | 1 | null | [FrozenLake-v1-4x4, q-learning, reinforcement-learning, custom-implementation, model-index, region:us] | reinforcement-learning | 2022-06-11T20:24:19Z
---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 0.78 +/- 0.41
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="tjscollins/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```

huggingtweets/elonmusk-rshowerthoughts-stephenking | huggingtweets | 2022-06-11T20:15:51Z | 105 | 0 | transformers | [transformers, pytorch, gpt2, text-generation, huggingtweets, en, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us] | text-generation | 2022-06-11T20:04:06Z
---
language: en
thumbnail: http://www.huggingtweets.com/elonmusk-rshowerthoughts-stephenking/1654978546952/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1529956155937759233/Nyn1HZWF_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/378800000836981162/b683f7509ec792c3e481ead332940cdc_400x400.jpeg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/641699738224455680/L_ji6ClT_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & Stephen King & Showerthoughts</div>
<div style="text-align: center; font-size: 14px;">@elonmusk-rshowerthoughts-stephenking</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Elon Musk & Stephen King & Showerthoughts.
| Data | Elon Musk | Stephen King | Showerthoughts |
| --- | --- | --- | --- |
| Tweets downloaded | 3200 | 3230 | 3200 |
| Retweets | 147 | 780 | 0 |
| Short tweets | 954 | 202 | 0 |
| Tweets kept | 2099 | 2248 | 3200 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1fvudd5c/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @elonmusk-rshowerthoughts-stephenking's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/39f9xftz) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/39f9xftz/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/elonmusk-rshowerthoughts-stephenking')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)

JClementC/test | JClementC | 2022-06-11T19:58:42Z | 0 | 0 | null | [region:us] | null | 2022-06-11T19:19:48Z
git lfs install
git clone https://github.com/nneonneo/2048-ai.git

Galeros/dqn-mountaincar-v0-zoo-mimick | Galeros | 2022-06-11T19:55:08Z | 1 | 0 | stable-baselines3 | [stable-baselines3, MountainCar-v0, deep-reinforcement-learning, reinforcement-learning, model-index, region:us] | reinforcement-learning | 2022-06-11T19:55:00Z
---
library_name: stable-baselines3
tags:
- MountainCar-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: -104.90 +/- 6.80
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MountainCar-v0
type: MountainCar-v0
---
# **DQN** Agent playing **MountainCar-v0**
This is a trained model of a **DQN** agent playing **MountainCar-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is a guess; check the repo's file list for the actual name):
```python
from stable_baselines3 import DQN
from huggingface_sb3 import load_from_hub

# NOTE: the filename is hypothetical -- verify it against the repo's files.
checkpoint = load_from_hub(repo_id="Galeros/dqn-mountaincar-v0-zoo-mimick", filename="dqn-MountainCar-v0.zip")
model = DQN.load(checkpoint)
```

huggingtweets/conanobrien-mikemancini-wendymolyneux | huggingtweets | 2022-06-11T19:50:54Z | 105 | 0 | transformers | [transformers, pytorch, gpt2, text-generation, huggingtweets, en, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us] | text-generation | 2022-06-11T19:46:43Z
---
language: en
thumbnail: http://www.huggingtweets.com/conanobrien-mikemancini-wendymolyneux/1654977049172/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1271404115042676736/PAIbmN-p_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/730612231021322240/Rl0_QYhL_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1044085580651528193/DR7QvrwG_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">mike mancini & Conan O'Brien & Wendy Molyneux</div>
<div style="text-align: center; font-size: 14px;">@conanobrien-mikemancini-wendymolyneux</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from mike mancini & Conan O'Brien & Wendy Molyneux.
| Data | mike mancini | Conan O'Brien | Wendy Molyneux |
| --- | --- | --- | --- |
| Tweets downloaded | 3150 | 3250 | 836 |
| Retweets | 286 | 40 | 251 |
| Short tweets | 290 | 24 | 69 |
| Tweets kept | 2574 | 3186 | 516 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/25wtfzk4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @conanobrien-mikemancini-wendymolyneux's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1hjizcue) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1hjizcue/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/conanobrien-mikemancini-wendymolyneux')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/mdoukmas
|
huggingtweets
| 2022-06-11T19:35:54Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-11T19:34:24Z |
---
language: en
thumbnail: http://www.huggingtweets.com/mdoukmas/1654976150184/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1098660288193269762/n5v9daol_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Maya Dukmasova</div>
<div style="text-align: center; font-size: 14px;">@mdoukmas</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Maya Dukmasova.
| Data | Maya Dukmasova |
| --- | --- |
| Tweets downloaded | 3241 |
| Retweets | 896 |
| Short tweets | 158 |
| Tweets kept | 2187 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2jwhv7l5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mdoukmas's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/25v3pmsy) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/25v3pmsy/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mdoukmas')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
meghazisofiane/opus-mt-en-ar-evaluated-en-to-ar-1000instancesopus-leaningRate2e-05-batchSize8-11epoch-3
|
meghazisofiane
| 2022-06-11T19:25:04Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:opus100",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-11T19:16:47Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- opus100
metrics:
- bleu
model-index:
- name: opus-mt-en-ar-evaluated-en-to-ar-1000instancesopus-leaningRate2e-05-batchSize8-11epoch-3
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: opus100
type: opus100
args: ar-en
metrics:
- name: Bleu
type: bleu
value: 21.3028
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ar-evaluated-en-to-ar-1000instancesopus-leaningRate2e-05-batchSize8-11epoch-3
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ar](https://huggingface.co/Helsinki-NLP/opus-mt-en-ar) on the opus100 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1421
- Bleu: 21.3028
- Meteor: 0.1285
- Gen Len: 9.975
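A minimal usage sketch, assuming the standard `transformers` translation pipeline applies to this Marian checkpoint (the example sentence is illustrative only):
```python
from transformers import pipeline

# English-to-Arabic translation with the fine-tuned checkpoint.
translator = pipeline(
    "translation",
    model="meghazisofiane/opus-mt-en-ar-evaluated-en-to-ar-1000instancesopus-leaningRate2e-05-batchSize8-11epoch-3",
)
print(translator("How are you today?")[0]["translation_text"])
```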
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 11
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Meteor | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|
| 1.0508 | 1.0 | 100 | 0.1413 | 27.9009 | 0.1416 | 8.85 |
| 0.1253 | 2.0 | 200 | 0.1372 | 23.11 | 0.1345 | 9.855 |
| 0.1017 | 3.0 | 300 | 0.1390 | 21.7885 | 0.1364 | 9.97 |
| 0.0868 | 4.0 | 400 | 0.1378 | 21.3889 | 0.1314 | 9.835 |
| 0.0754 | 5.0 | 500 | 0.1398 | 22.198 | 0.132 | 9.675 |
| 0.0667 | 6.0 | 600 | 0.1396 | 20.8645 | 0.1308 | 10.055 |
| 0.0604 | 7.0 | 700 | 0.1408 | 20.289 | 0.1303 | 10.53 |
| 0.0553 | 8.0 | 800 | 0.1414 | 21.7023 | 0.1293 | 10.005 |
| 0.0518 | 9.0 | 900 | 0.1421 | 21.3028 | 0.1285 | 9.975 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Galeros/dqn-mountaincar-v0-zoo
|
Galeros
| 2022-06-11T19:21:02Z | 6 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"MountainCar-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-11T18:55:20Z |
---
library_name: stable-baselines3
tags:
- MountainCar-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: -105.00 +/- 3.46
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MountainCar-v0
type: MountainCar-v0
---
# **DQN** Agent playing **MountainCar-v0**
This is a trained model of a **DQN** agent playing **MountainCar-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption — check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Download the checkpoint from the Hub and load it; the filename is assumed.
checkpoint = load_from_hub(repo_id="Galeros/dqn-mountaincar-v0-zoo",
                           filename="dqn-MountainCar-v0.zip")
model = DQN.load(checkpoint)
```
|
ahmeddbahaa/t5-arabic-base-finetuned-xlsum-ar
|
ahmeddbahaa
| 2022-06-11T19:13:08Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"summarization",
"ar",
"abstractive summarization",
"xlsum",
"generated_from_trainer",
"dataset:xlsum",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-06-11T01:21:55Z |
---
license: apache-2.0
tags:
- summarization
- t5
- ar
- abstractive summarization
- xlsum
- generated_from_trainer
datasets:
- xlsum
model-index:
- name: t5-arabic-base-finetuned-xlsum-ar
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-arabic-base-finetuned-xlsum-ar
This model is a fine-tuned version of [bakrianoo/t5-arabic-base](https://huggingface.co/bakrianoo/t5-arabic-base) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0328
- Rouge-1: 23.72
- Rouge-2: 10.95
- Rouge-l: 21.59
- Gen Len: 19.0
- Bertscore: 71.81
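A minimal usage sketch, assuming the standard `transformers` summarization pipeline applies to this checkpoint (replace the placeholder with real Arabic article text):
```python
from transformers import pipeline

# Arabic abstractive summarization with the fine-tuned checkpoint.
summarizer = pipeline("summarization", model="ahmeddbahaa/t5-arabic-base-finetuned-xlsum-ar")
print(summarizer("ضع نص المقال هنا ...", max_length=64)[0]["summary_text"])
```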
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 10
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
huggingtweets/elonmusk-iamjohnoliver-neiltyson
|
huggingtweets
| 2022-06-11T19:00:50Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-11T18:54:15Z |
---
language: en
thumbnail: http://www.huggingtweets.com/elonmusk-iamjohnoliver-neiltyson/1654974044761/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1529956155937759233/Nyn1HZWF_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1393958859/main_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/74188698/NeilTysonOriginsA-Crop_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & John Oliver & Neil deGrasse Tyson</div>
<div style="text-align: center; font-size: 14px;">@elonmusk-iamjohnoliver-neiltyson</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Elon Musk & John Oliver & Neil deGrasse Tyson.
| Data | Elon Musk | John Oliver | Neil deGrasse Tyson |
| --- | --- | --- | --- |
| Tweets downloaded | 3200 | 636 | 3237 |
| Retweets | 147 | 122 | 10 |
| Short tweets | 954 | 9 | 87 |
| Tweets kept | 2099 | 505 | 3140 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/14h905cr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @elonmusk-iamjohnoliver-neiltyson's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3gcc5ko3) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3gcc5ko3/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/elonmusk-iamjohnoliver-neiltyson')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Galeros/dqn-mountaincar-v0-local
|
Galeros
| 2022-06-11T18:38:27Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"MountainCar-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-11T18:38:19Z |
---
library_name: stable-baselines3
tags:
- MountainCar-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: -98.80 +/- 21.88
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MountainCar-v0
type: MountainCar-v0
---
# **DQN** Agent playing **MountainCar-v0**
This is a trained model of a **DQN** agent playing **MountainCar-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption — check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Download the checkpoint from the Hub and load it; the filename is assumed.
checkpoint = load_from_hub(repo_id="Galeros/dqn-mountaincar-v0-local",
                           filename="dqn-MountainCar-v0.zip")
model = DQN.load(checkpoint)
```
|
lllFaNToMlll/wac2vec-lllfantomlll
|
lllFaNToMlll
| 2022-06-11T18:07:44Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-11T11:42:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wac2vec-lllfantomlll
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wac2vec-lllfantomlll
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5560
- Wer: 0.3417
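A minimal usage sketch, assuming the standard `transformers` ASR pipeline applies (`sample.wav` is a placeholder path; decoding audio this way requires ffmpeg):
```python
from transformers import pipeline

# Transcribe a local audio file with the fine-tuned checkpoint.
asr = pipeline("automatic-speech-recognition", model="lllFaNToMlll/wac2vec-lllfantomlll")
print(asr("sample.wav")["text"])
```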
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5768 | 1.0 | 500 | 2.0283 | 1.0238 |
| 0.9219 | 2.01 | 1000 | 0.5103 | 0.5022 |
| 0.4497 | 3.01 | 1500 | 0.4746 | 0.4669 |
| 0.3163 | 4.02 | 2000 | 0.4144 | 0.4229 |
| 0.2374 | 5.02 | 2500 | 0.4186 | 0.4161 |
| 0.2033 | 6.02 | 3000 | 0.4115 | 0.3975 |
| 0.1603 | 7.03 | 3500 | 0.4424 | 0.3817 |
| 0.1455 | 8.03 | 4000 | 0.4151 | 0.3918 |
| 0.1276 | 9.04 | 4500 | 0.4940 | 0.3798 |
| 0.108 | 10.04 | 5000 | 0.4580 | 0.3688 |
| 0.1053 | 11.04 | 5500 | 0.4243 | 0.3700 |
| 0.0929 | 12.05 | 6000 | 0.4999 | 0.3727 |
| 0.0896 | 13.05 | 6500 | 0.4991 | 0.3624 |
| 0.0748 | 14.06 | 7000 | 0.4924 | 0.3602 |
| 0.0681 | 15.06 | 7500 | 0.4908 | 0.3544 |
| 0.0619 | 16.06 | 8000 | 0.5021 | 0.3559 |
| 0.0569 | 17.07 | 8500 | 0.5448 | 0.3518 |
| 0.0549 | 18.07 | 9000 | 0.4919 | 0.3508 |
| 0.0478 | 19.08 | 9500 | 0.4704 | 0.3513 |
| 0.0437 | 20.08 | 10000 | 0.5058 | 0.3555 |
| 0.0421 | 21.08 | 10500 | 0.5127 | 0.3489 |
| 0.0362 | 22.09 | 11000 | 0.5439 | 0.3527 |
| 0.0322 | 23.09 | 11500 | 0.5418 | 0.3469 |
| 0.0327 | 24.1 | 12000 | 0.5298 | 0.3422 |
| 0.0292 | 25.1 | 12500 | 0.5511 | 0.3426 |
| 0.0246 | 26.1 | 13000 | 0.5349 | 0.3472 |
| 0.0251 | 27.11 | 13500 | 0.5646 | 0.3391 |
| 0.0214 | 28.11 | 14000 | 0.5821 | 0.3424 |
| 0.0217 | 29.12 | 14500 | 0.5560 | 0.3417 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
huggingnft/frames
|
huggingnft
| 2022-06-11T17:38:03Z | 5 | 0 |
transformers
|
[
"transformers",
"huggingnft",
"nft",
"huggan",
"gan",
"image",
"images",
"unconditional-image-generation",
"dataset:huggingnft/frames",
"license:mit",
"endpoints_compatible",
"region:us"
] |
unconditional-image-generation
| 2022-06-11T14:58:47Z |
---
tags:
- huggingnft
- nft
- huggan
- gan
- image
- images
- unconditional-image-generation
datasets:
- huggingnft/frames
license: mit
---
# Hugging NFT: frames
## Disclaimer
All rights belong to their owners. Models and datasets can be removed from the site at the request of the copyright
holder.
## Model description
LightWeight GAN model for unconditional generation.
NFT collection available [here](https://opensea.io/collection/frames).
Dataset is available [here](https://huggingface.co/datasets/huggingnft/frames).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
Project repository: [link](https://github.com/AlekseyKorshuk/huggingnft).
[](https://github.com/AlekseyKorshuk/huggingnft)
## Intended uses & limitations
#### How to use
Check project repository: [link](https://github.com/AlekseyKorshuk/huggingnft).
#### Limitations and bias
Check project repository: [link](https://github.com/AlekseyKorshuk/huggingnft).
## Training data
Dataset is available [here](https://huggingface.co/datasets/huggingnft/frames).
## Training procedure
Training script is available [here](https://github.com/AlekseyKorshuk/huggingnft).
## Generated Images
Check results with Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
### BibTeX entry and citation info
```bibtex
@InProceedings{huggingnft,
    author={Aleksey Korshuk},
    year={2022}
}
```
|
DancingIguana/codeparrot-ds
|
DancingIguana
| 2022-06-11T16:58:04Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-08T21:56:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
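A minimal usage sketch, assuming the standard `transformers` text-generation pipeline applies to this checkpoint (the prompt is illustrative only):
```python
from transformers import pipeline

# Generate a code continuation from a short prompt.
generator = pipeline("text-generation", model="DancingIguana/codeparrot-ds")
print(generator("def fibonacci(n):", max_length=64)[0]["generated_text"])
```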
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
bubblecookie/t5-small-finetuned-cnndm_trained
|
bubblecookie
| 2022-06-11T16:48:45Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-10T06:21:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: t5-small-finetuned-cnndm_trained
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnndm_trained
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset.
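A minimal usage sketch, assuming the standard `transformers` summarization pipeline applies (whether this fine-tune expects the usual `summarize:` T5 prefix is an assumption):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="bubblecookie/t5-small-finetuned-cnndm_trained")
print(summarizer("summarize: Your article text here ...", max_length=60)[0]["summary_text"])
```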
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
robingeibel/longformer-base-finetuned-big_patent
|
robingeibel
| 2022-06-11T16:33:49Z | 62 | 1 |
transformers
|
[
"transformers",
"tf",
"longformer",
"fill-mask",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-05T17:24:27Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: robingeibel/longformer-base-finetuned-big_patent
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# robingeibel/longformer-base-finetuned-big_patent
This model is a fine-tuned version of [robingeibel/longformer-base-finetuned-big_patent](https://huggingface.co/robingeibel/longformer-base-finetuned-big_patent) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.1860
- Validation Loss: 1.0692
- Epoch: 0
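A minimal usage sketch, assuming the standard `transformers` fill-mask pipeline applies; `framework="tf"` is used because this repo ships TensorFlow weights, and the example sentence is illustrative only:
```python
from transformers import pipeline

# Longformer uses the RoBERTa-style <mask> token.
unmasker = pipeline(
    "fill-mask",
    model="robingeibel/longformer-base-finetuned-big_patent",
    framework="tf",
)
print(unmasker("The invention relates to a <mask> for storing data."))
```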
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 152946, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.1860 | 1.0692 | 0 |
### Framework versions
- Transformers 4.19.4
- TensorFlow 2.8.2
- Datasets 2.2.2
- Tokenizers 0.12.1
|
IshanKumar/molecular_generation
|
IshanKumar
| 2022-06-11T14:27:39Z | 0 | 0 |
keras
|
[
"keras",
"tensorboard",
"tf-keras",
"mol_gen",
"region:us"
] | null | 2022-06-02T19:30:33Z |
---
library_name: keras
tags:
- mol_gen
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 0.0005, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
## Training Metrics
| Epochs | Train Loss |
|--- |--- |
| 1| 68866.578|
| 2| 68818.219|
| 3| 68850.844|
| 4| 68829.688|
| 5| 68840.258|
| 6| 68813.281|
| 7| 68809.414|
| 8| 68815.312|
| 9| 68805.641|
| 10| 68803.672|
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
|
YeRyeongLee/albert-base-v2-finetuned-filtered-0609
|
YeRyeongLee
| 2022-06-11T13:33:02Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-11T11:46:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: albert-base-v2-finetuned-filtered-0609
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-finetuned-filtered-0609
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2062
- Accuracy: 0.9723
- Precision: 0.9724
- Recall: 0.9723
- F1: 0.9723
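A minimal usage sketch, assuming the standard `transformers` text-classification pipeline applies (the label set of this fine-tune is not documented in the card):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="YeRyeongLee/albert-base-v2-finetuned-filtered-0609")
print(classifier("Example sentence to classify."))
```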
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.2688 | 1.0 | 3180 | 0.2282 | 0.9560 | 0.9577 | 0.9560 | 0.9562 |
| 0.2268 | 2.0 | 6360 | 0.1909 | 0.9638 | 0.9640 | 0.9638 | 0.9638 |
| 0.1831 | 3.0 | 9540 | 0.2590 | 0.9572 | 0.9584 | 0.9572 | 0.9572 |
| 0.1588 | 4.0 | 12720 | 0.1752 | 0.9673 | 0.9678 | 0.9673 | 0.9673 |
| 0.0972 | 5.0 | 15900 | 0.1868 | 0.9695 | 0.9696 | 0.9695 | 0.9695 |
| 0.0854 | 6.0 | 19080 | 0.2042 | 0.9701 | 0.9707 | 0.9701 | 0.9702 |
| 0.0599 | 7.0 | 22260 | 0.1793 | 0.9748 | 0.9749 | 0.9748 | 0.9749 |
| 0.0389 | 8.0 | 25440 | 0.1996 | 0.9742 | 0.9743 | 0.9742 | 0.9742 |
| 0.0202 | 9.0 | 28620 | 0.2188 | 0.9723 | 0.9726 | 0.9723 | 0.9724 |
| 0.0152 | 10.0 | 31800 | 0.2062 | 0.9723 | 0.9724 | 0.9723 | 0.9723 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.9.1+cu111
- Datasets 1.16.1
- Tokenizers 0.12.1
|
tclong/wav2vec2-base-vios-google-colab
|
tclong
| 2022-06-11T13:26:15Z | 78 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-29T03:45:53Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-vios-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-vios-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5647
- Wer: 0.4970
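A minimal usage sketch with the processor/model API; the silent dummy waveform is a stand-in for real 16 kHz speech:
```python
import numpy as np
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("tclong/wav2vec2-base-vios-google-colab")
model = Wav2Vec2ForCTC.from_pretrained("tclong/wav2vec2-base-vios-google-colab")

speech = np.zeros(16_000, dtype=np.float32)  # one second of silence as placeholder input
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```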
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 7.7292 | 2.0 | 500 | 3.4159 | 1.0 |
| 3.0762 | 4.0 | 1000 | 1.3005 | 0.9615 |
| 0.8812 | 6.0 | 1500 | 0.4664 | 0.4740 |
| 0.5076 | 8.0 | 2000 | 0.4101 | 0.4180 |
| 0.4075 | 10.0 | 2500 | 0.3815 | 0.3802 |
| 0.3724 | 12.0 | 3000 | 0.3785 | 0.3741 |
| 0.3762 | 14.0 | 3500 | 0.4404 | 0.3766 |
| 0.4541 | 16.0 | 4000 | 0.4671 | 0.3822 |
| 0.6391 | 18.0 | 4500 | 0.5643 | 0.4200 |
| 0.7681 | 20.0 | 5000 | 0.6564 | 0.5214 |
| 0.8131 | 22.0 | 5500 | 0.5786 | 0.4934 |
| 0.7448 | 24.0 | 6000 | 0.5561 | 0.4920 |
| 0.7337 | 26.0 | 6500 | 0.5631 | 0.4964 |
| 0.7359 | 28.0 | 7000 | 0.5647 | 0.4968 |
| 0.7397 | 30.0 | 7500 | 0.5647 | 0.4970 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
titi7242229/roberta-base-bne-finetuned_personality_multi_3
|
titi7242229
| 2022-06-11T13:13:47Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-11T07:10:27Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-bne-finetuned_personality_multi_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned_personality_multi_3
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1145
- Accuracy: 0.4847
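A minimal usage sketch, assuming the standard `transformers` text-classification pipeline applies (the personality label set is not documented in the card; the base model is Spanish, so a Spanish example is used):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="titi7242229/roberta-base-bne-finetuned_personality_multi_3")
print(clf("Me encanta planificar cada detalle con antelación."))
```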
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2498 | 1.0 | 63 | 2.2799 | 0.2236 |
| 2.3044 | 2.0 | 126 | 2.1644 | 0.2980 |
| 1.9017 | 3.0 | 189 | 1.9934 | 0.4127 |
| 2.2281 | 4.0 | 252 | 1.8517 | 0.4501 |
| 1.2955 | 5.0 | 315 | 1.7588 | 0.4870 |
| 1.221 | 6.0 | 378 | 1.7269 | 0.4888 |
| 1.1381 | 7.0 | 441 | 1.7617 | 0.4888 |
| 0.8415 | 8.0 | 504 | 1.8101 | 0.4853 |
| 0.6696 | 9.0 | 567 | 1.8325 | 0.4928 |
| 0.6646 | 10.0 | 630 | 1.8707 | 0.4841 |
| 0.3758 | 11.0 | 693 | 1.8766 | 0.4876 |
| 0.3477 | 12.0 | 756 | 1.9171 | 0.4905 |
| 0.2854 | 13.0 | 819 | 1.9203 | 0.4980 |
| 0.2713 | 14.0 | 882 | 2.0089 | 0.4813 |
| 0.3434 | 15.0 | 945 | 2.0130 | 0.4905 |
| 0.0758 | 16.0 | 1008 | 2.0230 | 0.4922 |
| 0.2518 | 17.0 | 1071 | 2.0793 | 0.4824 |
| 0.0783 | 18.0 | 1134 | 2.0920 | 0.4830 |
| 0.0933 | 19.0 | 1197 | 2.1067 | 0.4836 |
| 0.184 | 20.0 | 1260 | 2.1145 | 0.4847 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
louisdeco/camembert-base-finetuned-RankLineCause
|
louisdeco
| 2022-06-11T12:50:01Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"camembert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-11T09:02:07Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- recall
model-index:
- name: camembert-base-finetuned-RankLineCause
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# camembert-base-finetuned-RankLineCause
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3138
- Accuracy: 0.8152
- F1: 0.8297
- Recall: 0.8152
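A minimal usage sketch, assuming the standard `transformers` text-classification pipeline applies (label meanings are not documented in the card; the base model is French, so a French example is used):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="louisdeco/camembert-base-finetuned-RankLineCause")
print(clf("La fuite d'eau est à l'origine du sinistre."))
```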
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 50
- eval_batch_size: 50
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:------:|
| 0.3471 | 1.0 | 10019 | 0.3191 | 0.8156 | 0.8137 | 0.8156 |
| 0.317 | 2.0 | 20038 | 0.3138 | 0.8152 | 0.8297 | 0.8152 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
DavidCollier/SpaceInvader
|
DavidCollier
| 2022-06-11T12:40:06Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-11T12:39:28Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 15.50 +/- 12.54
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga DavidCollier -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga DavidCollier
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
Sebabrata/lmv2ubiai-pan8doc-06-11
|
Sebabrata
| 2022-06-11T12:25:03Z | 78 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-11T11:46:22Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: lmv2ubiai-pan8doc-06-11
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lmv2ubiai-pan8doc-06-11
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9633
- Dob Precision: 1.0
- Dob Recall: 1.0
- Dob F1: 1.0
- Dob Number: 2
- Fname Precision: 0.6667
- Fname Recall: 1.0
- Fname F1: 0.8
- Fname Number: 2
- Name Precision: 1.0
- Name Recall: 1.0
- Name F1: 1.0
- Name Number: 2
- Pan Precision: 1.0
- Pan Recall: 1.0
- Pan F1: 1.0
- Pan Number: 2
- Overall Precision: 0.8889
- Overall Recall: 1.0
- Overall F1: 0.9412
- Overall Accuracy: 0.9821
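A minimal loading sketch; pairing with the base LayoutLMv2 processor is an assumption, and LayoutLMv2 inference additionally requires detectron2 plus an OCR backend:
```python
from transformers import LayoutLMv2Processor, LayoutLMv2ForTokenClassification

processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")
model = LayoutLMv2ForTokenClassification.from_pretrained("Sebabrata/lmv2ubiai-pan8doc-06-11")
```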
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Dob Precision | Dob Recall | Dob F1 | Dob Number | Fname Precision | Fname Recall | Fname F1 | Fname Number | Name Precision | Name Recall | Name F1 | Name Number | Pan Precision | Pan Recall | Pan F1 | Pan Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------:|:----------:|:------:|:----------:|:---------------:|:------------:|:--------:|:------------:|:--------------:|:-----------:|:-------:|:-----------:|:-------------:|:----------:|:------:|:----------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 2.1195 | 1.0 | 6 | 1.7519 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 0.7857 |
| 1.6994 | 2.0 | 12 | 1.5117 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 0.7857 |
| 1.5521 | 3.0 | 18 | 1.4130 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 0.7857 |
| 1.4726 | 4.0 | 24 | 1.3410 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 0.7857 |
| 1.395 | 5.0 | 30 | 1.2693 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 0.7857 |
| 1.3131 | 6.0 | 36 | 1.2079 | 1.0 | 1.0 | 1.0 | 2 | 0.1667 | 0.5 | 0.25 | 2 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 2 | 0.3 | 0.375 | 0.3333 | 0.8929 |
| 1.2474 | 7.0 | 42 | 1.1495 | 1.0 | 1.0 | 1.0 | 2 | 0.2 | 0.5 | 0.2857 | 2 | 0.0 | 0.0 | 0.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.4167 | 0.625 | 0.5 | 0.9286 |
| 1.1869 | 8.0 | 48 | 1.0942 | 1.0 | 1.0 | 1.0 | 2 | 0.2 | 0.5 | 0.2857 | 2 | 0.0 | 0.0 | 0.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.4167 | 0.625 | 0.5 | 0.9286 |
| 1.1369 | 9.0 | 54 | 1.0453 | 1.0 | 1.0 | 1.0 | 2 | 0.4 | 1.0 | 0.5714 | 2 | 0.0 | 0.0 | 0.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.5455 | 0.75 | 0.6316 | 0.9464 |
| 1.0882 | 10.0 | 60 | 1.0054 | 1.0 | 1.0 | 1.0 | 2 | 0.5 | 1.0 | 0.6667 | 2 | 0.5 | 0.5 | 0.5 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.7 | 0.875 | 0.7778 | 0.9643 |
| 1.0482 | 11.0 | 66 | 0.9633 | 1.0 | 1.0 | 1.0 | 2 | 0.6667 | 1.0 | 0.8 | 2 | 1.0 | 1.0 | 1.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.8889 | 1.0 | 0.9412 | 0.9821 |
| 1.017 | 12.0 | 72 | 0.9368 | 1.0 | 1.0 | 1.0 | 2 | 0.6667 | 1.0 | 0.8 | 2 | 1.0 | 1.0 | 1.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.8889 | 1.0 | 0.9412 | 0.9643 |
| 0.9825 | 13.0 | 78 | 0.9139 | 1.0 | 1.0 | 1.0 | 2 | 0.6667 | 1.0 | 0.8 | 2 | 1.0 | 1.0 | 1.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.8889 | 1.0 | 0.9412 | 0.9821 |
| 0.9459 | 14.0 | 84 | 0.8837 | 1.0 | 1.0 | 1.0 | 2 | 0.6667 | 1.0 | 0.8 | 2 | 1.0 | 1.0 | 1.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.8889 | 1.0 | 0.9412 | 0.9643 |
| 0.9155 | 15.0 | 90 | 0.8472 | 1.0 | 1.0 | 1.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.5 | 0.5 | 0.5 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.875 | 0.875 | 0.875 | 0.9643 |
| 0.8819 | 16.0 | 96 | 0.8231 | 1.0 | 1.0 | 1.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.5 | 0.5 | 0.5 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.875 | 0.875 | 0.875 | 0.9643 |
| 0.8523 | 17.0 | 102 | 0.7957 | 1.0 | 1.0 | 1.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.6667 | 1.0 | 0.8 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.8889 | 1.0 | 0.9412 | 0.9821 |
| 0.8251 | 18.0 | 108 | 0.7681 | 1.0 | 1.0 | 1.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.5 | 0.5 | 0.5 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.875 | 0.875 | 0.875 | 0.9643 |
| 0.7982 | 19.0 | 114 | 0.7533 | 1.0 | 1.0 | 1.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.5 | 0.5 | 0.5 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.875 | 0.875 | 0.875 | 0.9643 |
| 0.7762 | 20.0 | 120 | 0.7283 | 1.0 | 1.0 | 1.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.5 | 0.5 | 0.5 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.875 | 0.875 | 0.875 | 0.9643 |
| 0.7558 | 21.0 | 126 | 0.7114 | 1.0 | 1.0 | 1.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.5 | 0.5 | 0.5 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.875 | 0.875 | 0.875 | 0.9643 |
| 0.7346 | 22.0 | 132 | 0.6889 | 1.0 | 1.0 | 1.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.5 | 0.5 | 0.5 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.875 | 0.875 | 0.875 | 0.9643 |
| 0.7116 | 23.0 | 138 | 0.6697 | 1.0 | 1.0 | 1.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.5 | 0.5 | 0.5 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.875 | 0.875 | 0.875 | 0.9643 |
| 0.6898 | 24.0 | 144 | 0.6593 | 1.0 | 1.0 | 1.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.5 | 0.5 | 0.5 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.875 | 0.875 | 0.875 | 0.9643 |
| 0.6748 | 25.0 | 150 | 0.6356 | 1.0 | 1.0 | 1.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.5 | 0.5 | 0.5 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.875 | 0.875 | 0.875 | 0.9643 |
| 0.6487 | 26.0 | 156 | 0.6142 | 1.0 | 1.0 | 1.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.5 | 0.5 | 0.5 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.875 | 0.875 | 0.875 | 0.9643 |
| 0.6312 | 27.0 | 162 | 0.6008 | 1.0 | 1.0 | 1.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.5 | 0.5 | 0.5 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.875 | 0.875 | 0.875 | 0.9643 |
| 0.6156 | 28.0 | 168 | 0.5855 | 1.0 | 1.0 | 1.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.5 | 0.5 | 0.5 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.875 | 0.875 | 0.875 | 0.9643 |
| 0.5961 | 29.0 | 174 | 0.5625 | 1.0 | 1.0 | 1.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.5 | 0.5 | 0.5 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.875 | 0.875 | 0.875 | 0.9643 |
| 0.5781 | 30.0 | 180 | 0.5553 | 1.0 | 1.0 | 1.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.5 | 0.5 | 0.5 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.875 | 0.875 | 0.875 | 0.9643 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Theivaprakasham/layoutlmv3-finetuned-wildreceipt
|
Theivaprakasham
| 2022-06-11T09:14:40Z | 28 | 3 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"dataset:wild_receipt",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-11T07:21:14Z |
---
tags:
- generated_from_trainer
datasets:
- wild_receipt
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-finetuned-wildreceipt
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wild_receipt
type: wild_receipt
args: WildReceipt
metrics:
- name: Precision
type: precision
value: 0.877212237618329
- name: Recall
type: recall
value: 0.8798678959680749
- name: F1
type: f1
value: 0.8785380599065679
- name: Accuracy
type: accuracy
value: 0.9249204782274871
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-finetuned-wildreceipt
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the wild_receipt dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3108
- Precision: 0.8772
- Recall: 0.8799
- F1: 0.8785
- Accuracy: 0.9249
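A minimal loading sketch; pairing with the base LayoutLMv3 processor (with built-in OCR) is an assumption — see the training notebook linked below for the full inference setup:
```python
from transformers import AutoProcessor, LayoutLMv3ForTokenClassification

processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=True)
model = LayoutLMv3ForTokenClassification.from_pretrained("Theivaprakasham/layoutlmv3-finetuned-wildreceipt")
```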
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
The WildReceipt dataset consists of 1740 receipt images annotated with 25 key information categories and a total of about 69,000 text boxes. 1268 images are used for training and 472 for testing the LayoutLMv3 model for Key Information Extraction.
## Training procedure
The training code: https://github.com/Theivaprakasham/layoutlmv3/blob/main/training_codes/LayoutLMv3_training_WildReceipts_dataset.ipynb
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 0.32 | 100 | 1.3143 | 0.6709 | 0.2679 | 0.3829 | 0.6700 |
| No log | 0.63 | 200 | 0.8814 | 0.6478 | 0.5195 | 0.5766 | 0.7786 |
| No log | 0.95 | 300 | 0.6568 | 0.7205 | 0.6491 | 0.6829 | 0.8303 |
| No log | 1.26 | 400 | 0.5618 | 0.7544 | 0.7072 | 0.7300 | 0.8519 |
| 1.0284 | 1.58 | 500 | 0.5003 | 0.7802 | 0.7566 | 0.7682 | 0.8687 |
| 1.0284 | 1.89 | 600 | 0.4454 | 0.7941 | 0.7679 | 0.7807 | 0.8748 |
| 1.0284 | 2.21 | 700 | 0.4314 | 0.8142 | 0.7928 | 0.8033 | 0.8852 |
| 1.0284 | 2.52 | 800 | 0.3870 | 0.8172 | 0.8200 | 0.8186 | 0.8953 |
| 1.0284 | 2.84 | 900 | 0.3629 | 0.8288 | 0.8369 | 0.8329 | 0.9025 |
| 0.4167 | 3.15 | 1000 | 0.3537 | 0.8540 | 0.8200 | 0.8366 | 0.9052 |
| 0.4167 | 3.47 | 1100 | 0.3383 | 0.8438 | 0.8285 | 0.8361 | 0.9063 |
| 0.4167 | 3.79 | 1200 | 0.3403 | 0.8297 | 0.8493 | 0.8394 | 0.9062 |
| 0.4167 | 4.1 | 1300 | 0.3271 | 0.8428 | 0.8545 | 0.8487 | 0.9110 |
| 0.4167 | 4.42 | 1400 | 0.3182 | 0.8491 | 0.8518 | 0.8504 | 0.9131 |
| 0.2766 | 4.73 | 1500 | 0.3111 | 0.8491 | 0.8539 | 0.8515 | 0.9129 |
| 0.2766 | 5.05 | 1600 | 0.3177 | 0.8397 | 0.8620 | 0.8507 | 0.9124 |
| 0.2766 | 5.36 | 1700 | 0.3091 | 0.8676 | 0.8548 | 0.8612 | 0.9191 |
| 0.2766 | 5.68 | 1800 | 0.3080 | 0.8508 | 0.8645 | 0.8576 | 0.9162 |
| 0.2766 | 5.99 | 1900 | 0.3059 | 0.8492 | 0.8662 | 0.8576 | 0.9163 |
| 0.2114 | 6.31 | 2000 | 0.3184 | 0.8536 | 0.8657 | 0.8596 | 0.9147 |
| 0.2114 | 6.62 | 2100 | 0.3161 | 0.8583 | 0.8713 | 0.8648 | 0.9184 |
| 0.2114 | 6.94 | 2200 | 0.3055 | 0.8707 | 0.8682 | 0.8694 | 0.9220 |
| 0.2114 | 7.26 | 2300 | 0.3004 | 0.8689 | 0.8745 | 0.8717 | 0.9219 |
| 0.2114 | 7.57 | 2400 | 0.3111 | 0.8701 | 0.8720 | 0.8711 | 0.9211 |
| 0.174 | 7.89 | 2500 | 0.3130 | 0.8599 | 0.8741 | 0.8669 | 0.9198 |
| 0.174 | 8.2 | 2600 | 0.3034 | 0.8661 | 0.8748 | 0.8704 | 0.9219 |
| 0.174 | 8.52 | 2700 | 0.3005 | 0.8799 | 0.8673 | 0.8736 | 0.9225 |
| 0.174 | 8.83 | 2800 | 0.3043 | 0.8687 | 0.8804 | 0.8745 | 0.9240 |
| 0.174 | 9.15 | 2900 | 0.3121 | 0.8776 | 0.8704 | 0.8740 | 0.9242 |
| 0.1412 | 9.46 | 3000 | 0.3131 | 0.8631 | 0.8755 | 0.8692 | 0.9204 |
| 0.1412 | 9.78 | 3100 | 0.3067 | 0.8715 | 0.8773 | 0.8744 | 0.9233 |
| 0.1412 | 10.09 | 3200 | 0.3021 | 0.8751 | 0.8812 | 0.8782 | 0.9248 |
| 0.1412 | 10.41 | 3300 | 0.3092 | 0.8651 | 0.8808 | 0.8729 | 0.9228 |
| 0.1412 | 10.73 | 3400 | 0.3084 | 0.8776 | 0.8749 | 0.8762 | 0.9237 |
| 0.1254 | 11.04 | 3500 | 0.3156 | 0.8738 | 0.8785 | 0.8761 | 0.9237 |
| 0.1254 | 11.36 | 3600 | 0.3131 | 0.8723 | 0.8818 | 0.8770 | 0.9244 |
| 0.1254 | 11.67 | 3700 | 0.3108 | 0.8778 | 0.8781 | 0.8780 | 0.9250 |
| 0.1254 | 11.99 | 3800 | 0.3097 | 0.8778 | 0.8771 | 0.8775 | 0.9239 |
| 0.1254 | 12.3 | 3900 | 0.3115 | 0.8785 | 0.8801 | 0.8793 | 0.9251 |
| 0.111 | 12.62 | 4000 | 0.3108 | 0.8772 | 0.8799 | 0.8785 | 0.9249 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Gbartee/Gbartee2
|
Gbartee
| 2022-06-11T08:57:03Z | 0 | 0 | null |
[
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2022-06-11T08:57:03Z |
---
license: bigscience-bloom-rail-1.0
---
|
orzhan/t5-long-extract
|
orzhan
| 2022-06-11T07:20:59Z | 105 | 1 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"feature-extraction",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
T5-small model fine-tuned for extractive summarization on long documents.
Repository: [GitHub](https://github.com/orzhan/t5-long-extract)
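A minimal loading sketch; the intended extractive-summarization inference loop lives in the GitHub repository, so only generic loading is shown here:
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("orzhan/t5-long-extract")
model = AutoModel.from_pretrained("orzhan/t5-long-extract")
```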
|
titi7242229/roberta-base-bne-finetuned_personality_multi_2
|
titi7242229
| 2022-06-11T06:21:27Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-11T05:27:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-bne-finetuned_personality_multi_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned_personality_multi_2
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2983
- Accuracy: 0.5429
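A minimal usage sketch, assuming the standard `transformers` text-classification pipeline applies (labels are not documented in the card; the base model is Spanish):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="titi7242229/roberta-base-bne-finetuned_personality_multi_2")
print(clf("Prefiero trabajar en equipo antes que solo."))
```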
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.3256 | 1.0 | 125 | 2.2642 | 0.2161 |
| 1.815 | 2.0 | 250 | 1.9569 | 0.3919 |
| 1.614 | 3.0 | 375 | 1.7264 | 0.5014 |
| 1.1718 | 4.0 | 500 | 1.6387 | 0.5239 |
| 1.135 | 5.0 | 625 | 1.6259 | 0.5245 |
| 0.5637 | 6.0 | 750 | 1.6443 | 0.5372 |
| 0.3672 | 7.0 | 875 | 1.7146 | 0.5326 |
| 0.3249 | 8.0 | 1000 | 1.8099 | 0.5297 |
| 0.1791 | 9.0 | 1125 | 1.8888 | 0.5285 |
| 0.2175 | 10.0 | 1250 | 1.9228 | 0.5326 |
| 0.0465 | 11.0 | 1375 | 1.9753 | 0.5435 |
| 0.1154 | 12.0 | 1500 | 2.1102 | 0.5256 |
| 0.0745 | 13.0 | 1625 | 2.1319 | 0.5429 |
| 0.0281 | 14.0 | 1750 | 2.1743 | 0.5360 |
| 0.0173 | 15.0 | 1875 | 2.2087 | 0.5441 |
| 0.0269 | 16.0 | 2000 | 2.2456 | 0.5424 |
| 0.0107 | 17.0 | 2125 | 2.2685 | 0.5458 |
| 0.0268 | 18.0 | 2250 | 2.2893 | 0.5383 |
| 0.0245 | 19.0 | 2375 | 2.2943 | 0.5418 |
| 0.0156 | 20.0 | 2500 | 2.2983 | 0.5429 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
AryaSuprana/BRATA_RoBERTaBali
|
AryaSuprana
| 2022-06-11T05:01:40Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"ban",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-11T04:51:40Z |
---
language: "ban"
datasets:
- WikiBali
- Suara Saking Bali
widget:
- text: "Kalsium silih <mask> datu kimia antuk simbol Ca miwah wilangan atom 20."
example_title: "Conto 1"
- text: "Tabuan inggih <mask> silih tunggil soroh beburon sane madue kampid."
example_title: "Conto 2"
---
BRATA (Basa Bali Used for Pretraining RoBERTa) is a pretrained language model for Basa Bali (Balinese) trained with the RoBERTa-base-uncased configuration. The pretraining datasets were collected by extracting WikiBali (the Balinese Wikipedia) and several sources from the Suara Saking Bali website. The model was pretrained on Google Colab Pro with a Tesla P100-PCIE-16GB GPU, for 200 epochs with a batch size of 2. The smallest training loss can be seen in the Training metrics (Metrics) tab.
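A minimal usage sketch built from the card's own widget example:
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="AryaSuprana/BRATA_RoBERTaBali")
print(unmasker("Kalsium silih <mask> datu kimia antuk simbol Ca miwah wilangan atom 20."))
```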
|
ablam/distilgpt2_fine_tuned_gcode
|
ablam
| 2022-06-11T03:52:00Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-05-11T01:09:05Z |
---
tags:
- generated_from_trainer
model-index:
- name: distilgpt2_fine_tuned_gcode
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2_fine_tuned_gcode
This model is a fine-tuned version of [congcongwang/distilgpt2_fine_tuned_coder](https://huggingface.co/congcongwang/distilgpt2_fine_tuned_coder) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1670
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.1
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 4.1754 | 1.0 | 52144 | 4.1670 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1
- Datasets 2.1.0
- Tokenizers 0.10.3
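## How to use
A minimal usage sketch (not part of the original card); the prompt below is only an illustration, since the card does not describe the training corpus format:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="ablam/distilgpt2_fine_tuned_gcode")

# Sample prompt; adjust max_length to taste.
print(generator("def main():", max_length=64, num_return_sequences=1)[0]["generated_text"])
```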
|
huggingtweets/froliki2108
|
huggingtweets
| 2022-06-11T00:04:16Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-11T00:02:55Z |
---
language: en
thumbnail: http://www.huggingtweets.com/froliki2108/1654905851117/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1447692349493100549/1PV2c-PJ_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Froliki💉💉💉</div>
<div style="text-align: center; font-size: 14px;">@froliki2108</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Froliki💉💉💉.
| Data | Froliki💉💉💉 |
| --- | --- |
| Tweets downloaded | 2223 |
| Retweets | 1133 |
| Short tweets | 229 |
| Tweets kept | 861 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2tug3miv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @froliki2108's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3otsf5pj) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3otsf5pj/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/froliki2108')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/theanything_bot
|
huggingtweets
| 2022-06-10T23:19:47Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-10T23:19:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/theanything_bot/1654903166604/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1532874424776437760/vSP1qWyF_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Anything Bot</div>
<div style="text-align: center; font-size: 14px;">@theanything_bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Anything Bot.
| Data | Anything Bot |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 0 |
| Short tweets | 0 |
| Tweets kept | 3250 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/oy5g644b/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @theanything_bot's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2rui0vn2) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2rui0vn2/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/theanything_bot')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/jedwill1999
|
huggingtweets
| 2022-06-10T23:10:10Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-10T23:09:22Z |
---
language: en
thumbnail: http://www.huggingtweets.com/jedwill1999/1654902604867/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1510152678919135250/lfEmlEGJ_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">a local</div>
<div style="text-align: center; font-size: 14px;">@jedwill1999</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from a local.
| Data | a local |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 1080 |
| Short tweets | 525 |
| Tweets kept | 1641 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1qsnsp6t/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jedwill1999's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/mjjc73pu) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/mjjc73pu/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jedwill1999')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
public-data/MangaLineExtraction_PyTorch
|
public-data
| 2022-06-10T23:01:13Z | 0 | 1 | null |
[
"region:us"
] | null | 2022-06-10T22:58:25Z |
# MangaLineExtraction_PyTorch
- https://github.com/ljsabc/MangaLineExtraction_PyTorch
|
facebook/roberta-hate-speech-dynabench-r2-target
|
facebook
| 2022-06-10T22:36:17Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"en",
"arxiv:2012.15761",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-10T21:52:46Z |
---
language: en
---
# LFTW R2 Target
The R2 Target model from [Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection](https://arxiv.org/abs/2012.15761)
## Citation Information
```bibtex
@inproceedings{vidgen2021lftw,
title={Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection},
author={Bertie Vidgen and Tristan Thrush and Zeerak Waseem and Douwe Kiela},
booktitle={ACL},
year={2021}
}
```
Thanks to Kushal Tirumala and Adina Williams for helping the authors put the model on the hub!
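## How to use
A minimal usage sketch (not part of the original card); the label names come from the model's own config and are not documented here:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="facebook/roberta-hate-speech-dynabench-r2-target",
)
print(classifier("You are a wonderful person!"))  # [{'label': ..., 'score': ...}]
```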
|
mmillet/distilrubert-tiny-2ndfinetune-epru
|
mmillet
| 2022-06-10T20:46:22Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-10T20:41:13Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: distilrubert-tiny-2ndfinetune-epru
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilrubert-tiny-2ndfinetune-epru
This model is a fine-tuned version of [mmillet/distilrubert-tiny-cased-conversational-v1_single_finetuned_on_cedr_augmented](https://huggingface.co/mmillet/distilrubert-tiny-cased-conversational-v1_single_finetuned_on_cedr_augmented) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2085
- Accuracy: 0.9333
- F1: 0.9319
- Precision: 0.9336
- Recall: 0.9333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.4825 | 1.0 | 13 | 0.2988 | 0.8848 | 0.8827 | 0.9056 | 0.8848 |
| 0.2652 | 2.0 | 26 | 0.2435 | 0.9212 | 0.9216 | 0.9282 | 0.9212 |
| 0.168 | 3.0 | 39 | 0.2120 | 0.9515 | 0.9501 | 0.9524 | 0.9515 |
| 0.1593 | 4.0 | 52 | 0.1962 | 0.9333 | 0.9330 | 0.9366 | 0.9333 |
| 0.1294 | 5.0 | 65 | 0.1855 | 0.9333 | 0.9334 | 0.9355 | 0.9333 |
| 0.1065 | 6.0 | 78 | 0.1780 | 0.9394 | 0.9393 | 0.9399 | 0.9394 |
| 0.0908 | 7.0 | 91 | 0.1967 | 0.9394 | 0.9388 | 0.9388 | 0.9394 |
| 0.0432 | 8.0 | 104 | 0.2085 | 0.9333 | 0.9319 | 0.9336 | 0.9333 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
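## How to use
A minimal usage sketch (not part of the original card), using the raw model classes; the label set is not documented above, so labels are read from the checkpoint's config:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "mmillet/distilrubert-tiny-2ndfinetune-epru"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

# The base model is a Russian conversational DistilRuBERT, so a Russian input is used.
inputs = tokenizer("Сегодня отличный день!", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
for idx, p in enumerate(probs):
    print(model.config.id2label[idx], float(p))
```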
|
torli/trijki
|
torli
| 2022-06-10T20:45:14Z | 0 | 1 | null |
[
"license:artistic-2.0",
"region:us"
] | null | 2022-06-10T20:43:32Z |
---
license: artistic-2.0
---
```
git lfs install
git clone https://huggingface.co/torli/trijki
```
|
huggingtweets/ninjasexparty
|
huggingtweets
| 2022-06-10T19:56:27Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-10T19:56:18Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1446572046679302144/jF9HS_Yd_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Ninja Sex Party</div>
<div style="text-align: center; font-size: 14px;">@ninjasexparty</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Ninja Sex Party.
| Data | Ninja Sex Party |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 631 |
| Short tweets | 439 |
| Tweets kept | 2180 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1ik0ji2l/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ninjasexparty's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1jyhmzsa) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1jyhmzsa/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ninjasexparty')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
FritzOS/TEdetection_distilBERT_mLM_V5
|
FritzOS
| 2022-06-10T19:43:24Z | 63 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-10T19:43:11Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: TEdetection_distilBERT_mLM_V5
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# TEdetection_distilBERT_mLM_V5
This model is a fine-tuned version of [FritzOS/TEdetection_distiBERT_mLM_V2](https://huggingface.co/FritzOS/TEdetection_distiBERT_mLM_V2) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 208018, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.19.3
- TensorFlow 2.8.2
- Datasets 2.2.2
- Tokenizers 0.12.1
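## How to use
A minimal usage sketch (not part of the original card); the repository ships TensorFlow weights, so the pipeline is asked for the TF implementation explicitly, and the example sentence is only an illustration:
```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="FritzOS/TEdetection_distilBERT_mLM_V5",
    framework="tf",
)
print(fill_mask("The capital of France is [MASK]."))
```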
|
huggingtweets/smallmutuals
|
huggingtweets
| 2022-06-10T19:13:07Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-10T18:33:00Z |
---
language: en
thumbnail: http://www.huggingtweets.com/smallmutuals/1654888348503/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1433527116948180999/wejtDhFm_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Cool Owl Guy</div>
<div style="text-align: center; font-size: 14px;">@smallmutuals</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Cool Owl Guy.
| Data | Cool Owl Guy |
| --- | --- |
| Tweets downloaded | 367 |
| Retweets | 45 |
| Short tweets | 25 |
| Tweets kept | 297 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/238iiiu5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @smallmutuals's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2hl8vi9y) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2hl8vi9y/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/smallmutuals')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/malzliebchen
|
huggingtweets
| 2022-06-10T18:29:39Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-10T18:26:43Z |
---
language: en
thumbnail: http://www.huggingtweets.com/malzliebchen/1654885748305/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1521909233024913408/4QsF2YzM_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Malzbeard's Severed Head</div>
<div style="text-align: center; font-size: 14px;">@malzliebchen</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Malzbeard's Severed Head.
| Data | Malzbeard's Severed Head |
| --- | --- |
| Tweets downloaded | 3247 |
| Retweets | 41 |
| Short tweets | 486 |
| Tweets kept | 2720 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/e1wzn1e5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @malzliebchen's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/38g20s6n) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/38g20s6n/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/malzliebchen')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
meln1k/dqn-SpaceInvadersNoFrameskip-v4
|
meln1k
| 2022-06-10T17:30:42Z | 5 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-10T17:30:14Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 817.50 +/- 327.32
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga meln1k -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga meln1k
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
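## Usage (without the RL Zoo)
A minimal sketch for loading the checkpoint directly, assuming the `huggingface_sb3` helper; the checkpoint filename follows the usual `<algo>-<env>.zip` naming of RL Zoo uploads and is an assumption:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Filename is assumed from the standard RL Zoo naming convention.
checkpoint = load_from_hub(
    repo_id="meln1k/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
```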
|
income/bpr-base-msmarco-contriever
|
income
| 2022-06-10T17:16:00Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-06-10T17:11:14Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# income/bpr-base-msmarco-contriever
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('income/bpr-base-msmarco-contriever')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('income/bpr-base-msmarco-contriever')
model = AutoModel.from_pretrained('income/bpr-base-msmarco-contriever')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=income/bpr-base-msmarco-contriever)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 6653 with parameters:
```
{'batch_size': 75, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`bpr_loss.BPRLossFunction`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"correct_bias": false,
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
ksabeh/bert-base-uncased-attribute-correction-mlm-titles
|
ksabeh
| 2022-06-10T15:50:17Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-06-10T09:02:24Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: ksabeh/bert-base-uncased-attribute-correction-mlm-titles
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ksabeh/bert-base-uncased-attribute-correction-mlm-titles
This model is a fine-tuned version of [ksabeh/bert-base-uncased-attribute-correction-mlm](https://huggingface.co/ksabeh/bert-base-uncased-attribute-correction-mlm) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0430
- Validation Loss: 0.0625
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 23878, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1429 | 0.0743 | 0 |
| 0.0430 | 0.0625 | 1 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Clody0071/distilbert-base-multilingual-cased-finetuned-similarite
|
Clody0071
| 2022-06-10T15:25:52Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:pawsx",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-10T14:33:47Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- pawsx
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-multilingual-cased-finetuned-similarite
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: pawsx
type: pawsx
args: fr
metrics:
- name: Accuracy
type: accuracy
value: 0.7995
- name: F1
type: f1
value: 0.7994565743967147
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-finetuned-similarite
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the pawsx dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4781
- Accuracy: 0.7995
- F1: 0.7995
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5343 | 1.0 | 772 | 0.4879 | 0.7705 | 0.7714 |
| 0.3523 | 2.0 | 1544 | 0.4781 | 0.7995 | 0.7995 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
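## How to use
A minimal usage sketch (not part of the original card). PAWS-X is a sentence-pair task, so the two sentences are passed as a text/text_pair input; the French example sentences are illustrative only:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Clody0071/distilbert-base-multilingual-cased-finetuned-similarite",
)
print(classifier({"text": "Il fait beau aujourd'hui.",
                  "text_pair": "Le temps est agréable aujourd'hui."}))
```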
|
adalbertojunior/clip-rpt
|
adalbertojunior
| 2022-06-10T14:35:02Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vision-text-dual-encoder",
"feature-extraction",
"generated_from_trainer",
"dataset:ydshieh/coco_dataset_script",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-06-10T12:46:52Z |
---
tags:
- generated_from_trainer
datasets:
- ydshieh/coco_dataset_script
model-index:
- name: clip-roberta-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clip-roberta-finetuned
This model is a fine-tuned version of a locally initialized `./models/clip-roberta` checkpoint on the ydshieh/coco_dataset_script 2017 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7269
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
|
ahmeddbahaa/mT5_multilingual_XLSum-finetuned-wikilingua-ar
|
ahmeddbahaa
| 2022-06-10T14:19:32Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"mT5_multilingual_XLSum",
"abstractive summarization",
"ar",
"generated_from_trainer",
"dataset:wiki_lingua",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-06-10T02:47:03Z |
---
tags:
- summarization
- mT5_multilingual_XLSum
- mt5
- abstractive summarization
- ar
- generated_from_trainer
datasets:
- wiki_lingua
model-index:
- name: mT5_multilingual_XLSum-finetuned-wikilingua-ar
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mT5_multilingual_XLSum-finetuned-wikilingua-ar
This model is a fine-tuned version of [csebuetnlp/mT5_multilingual_XLSum](https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum) on the wiki_lingua dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5540
- Rouge-1: 27.46
- Rouge-2: 9.0
- Rouge-l: 22.59
- Gen Len: 43.41
- Bertscore: 73.7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 8
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
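## How to use
A minimal usage sketch (not part of the original card); the generation parameters are illustrative, not the ones used for the reported scores:
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="ahmeddbahaa/mT5_multilingual_XLSum-finetuned-wikilingua-ar",
)
article = "الذكاء الاصطناعي هو فرع من علوم الحاسوب يهدف إلى بناء أنظمة قادرة على التعلم من البيانات."
print(summarizer(article, max_length=84, no_repeat_ngram_size=2)[0]["summary_text"])
```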
|
huggingtweets/atrioc
|
huggingtweets
| 2022-06-10T09:05:36Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-10T08:58:33Z |
---
language: en
thumbnail: http://www.huggingtweets.com/atrioc/1654851931751/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1522249702837657603/1jNZf3aB_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Atrioc</div>
<div style="text-align: center; font-size: 14px;">@atrioc</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Atrioc.
| Data | Atrioc |
| --- | --- |
| Tweets downloaded | 3205 |
| Retweets | 746 |
| Short tweets | 502 |
| Tweets kept | 1957 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2zlbp16x/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @atrioc's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3oldn78j) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3oldn78j/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/atrioc')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
TurkuNLP/bert-large-finnish-cased-v1
|
TurkuNLP
| 2022-06-10T08:46:17Z | 152 | 2 |
transformers
|
[
"transformers",
"pytorch",
"fi",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-06-10T07:53:16Z |
---
license: apache-2.0
language: fi
---
This is the large variant of FinBERT; the base variant is available as [TurkuNLP/bert-base-finnish-cased-v1](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1). The training data is exactly the same.
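A minimal loading sketch (not part of the original card), since the repository declares no pipeline tag; the checkpoint can be used for feature extraction or further fine-tuning:
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("TurkuNLP/bert-large-finnish-cased-v1")
model = AutoModel.from_pretrained("TurkuNLP/bert-large-finnish-cased-v1")

outputs = model(**tokenizer("Tämä on esimerkkilause.", return_tensors="pt"))
print(outputs.last_hidden_state.shape)  # (1, seq_len, 1024) for a BERT-large encoder
```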
|
Intel/MiniLM-L12-H384-uncased-mrpc
|
Intel
| 2022-06-10T07:06:45Z | 220 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-10T06:55:25Z |
---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: MiniLM-L12-H384-uncased-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.875
- name: F1
type: f1
value: 0.9097345132743363
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MiniLM-L12-H384-uncased-mrpc
This model is a fine-tuned version of [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4319
- Accuracy: 0.875
- F1: 0.9097
- Combined Score: 0.8924
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu102
- Datasets 2.2.2
- Tokenizers 0.12.1
|
flood/pegasus-samsum
|
flood
| 2022-06-10T07:00:06Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-10T06:24:51Z |
---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4814
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7052 | 0.54 | 500 | 1.4814 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
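## How to use
A minimal usage sketch (not part of the original card), with an invented two-line dialogue in the style of SAMSum inputs:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "flood/pegasus-samsum"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

dialogue = "Anna: Are we still on for lunch?\nTom: Yes, 12:30 at the usual place."
inputs = tokenizer(dialogue, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```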
|
huggingtweets/macarena_olona
|
huggingtweets
| 2022-06-10T06:32:02Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-10T06:10:00Z |
---
language: en
thumbnail: http://www.huggingtweets.com/macarena_olona/1654842717478/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1535020786007916545/po7DO1ln_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Macarena Olona</div>
<div style="text-align: center; font-size: 14px;">@macarena_olona</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Macarena Olona.
| Data | Macarena Olona |
| --- | --- |
| Tweets downloaded | 3245 |
| Retweets | 1797 |
| Short tweets | 225 |
| Tweets kept | 1223 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1yx7hguo/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @macarena_olona's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2i64c9y6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2i64c9y6/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/macarena_olona')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
twieland/MIX1_ja-en_helsinki
|
twieland
| 2022-06-10T05:49:30Z | 20 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-09T13:37:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: MIX1_ja-en_helsinki
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MIX1_ja-en_helsinki
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ja-en](https://huggingface.co/Helsinki-NLP/opus-mt-ja-en) on a combination of Visual Novel, Light Novel, and Subtitle data. A total of ~10MM lines of training data were used.
It achieves the following results on the evaluation set:
- Loss: 1.7947
- Otaku Benchmark VN BLEU: 17.78
- Otaku Benchmark LN BLEU: 11.80
- Otaku Benchmark MANGA BLEU: 13.66
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 2.7495 | 0.01 | 2000 | 2.5989 |
| 2.5415 | 0.03 | 4000 | 2.4746 |
| 2.4409 | 0.04 | 6000 | 2.4731 |
| 2.3743 | 0.05 | 8000 | 2.4012 |
| 2.3254 | 0.06 | 10000 | 2.3904 |
| 2.2857 | 0.08 | 12000 | 2.3649 |
| 2.2448 | 0.09 | 14000 | 2.3188 |
| 2.2158 | 0.1 | 16000 | 2.2975 |
| 2.193 | 0.11 | 18000 | 2.2756 |
| 2.1669 | 0.13 | 20000 | 2.2852 |
| 2.144 | 0.14 | 22000 | 2.2689 |
| 2.1222 | 0.15 | 24000 | 2.2721 |
| 2.1045 | 0.16 | 26000 | 2.2489 |
| 2.0885 | 0.18 | 28000 | 2.2359 |
| 2.0732 | 0.19 | 30000 | 2.2771 |
| 2.0584 | 0.2 | 32000 | 2.2582 |
| 2.0471 | 0.21 | 34000 | 2.2093 |
| 2.0369 | 0.23 | 36000 | 2.1768 |
| 2.0241 | 0.24 | 38000 | 2.1884 |
| 2.0196 | 0.25 | 40000 | 2.2025 |
| 2.004 | 0.27 | 42000 | 2.1507 |
| 1.9936 | 0.28 | 44000 | 2.1668 |
| 1.9869 | 0.29 | 46000 | 2.1432 |
| 1.9735 | 0.3 | 48000 | 2.1662 |
| 1.9651 | 0.32 | 50000 | 2.1824 |
| 1.9551 | 0.33 | 52000 | 2.1608 |
| 1.9485 | 0.34 | 54000 | 2.1322 |
| 1.9421 | 0.35 | 56000 | 2.1476 |
| 1.9303 | 0.37 | 58000 | 2.0994 |
| 1.9236 | 0.38 | 60000 | 2.1182 |
| 1.9183 | 0.39 | 62000 | 2.1305 |
| 1.9108 | 0.4 | 64000 | 2.1469 |
| 1.9051 | 0.42 | 66000 | 2.1414 |
| 1.9018 | 0.43 | 68000 | 2.1089 |
| 1.8959 | 0.44 | 70000 | 2.0908 |
| 1.886 | 0.46 | 72000 | 2.0968 |
| 1.8802 | 0.47 | 74000 | 2.0503 |
| 1.8713 | 0.48 | 76000 | 2.0542 |
| 1.8648 | 0.49 | 78000 | 2.0990 |
| 1.8599 | 0.51 | 80000 | 2.1112 |
| 1.8563 | 0.52 | 82000 | 2.1007 |
| 1.8541 | 0.53 | 84000 | 2.0849 |
| 1.845 | 0.54 | 86000 | 2.0831 |
| 1.8448 | 0.56 | 88000 | 2.0560 |
| 1.8342 | 0.57 | 90000 | 2.0349 |
| 1.8344 | 0.58 | 92000 | 2.0301 |
| 1.8291 | 0.59 | 94000 | 2.0300 |
| 1.819 | 0.61 | 96000 | 2.0378 |
| 1.8154 | 0.62 | 98000 | 2.0197 |
| 1.82 | 0.63 | 100000 | 2.0463 |
| 1.8081 | 0.64 | 102000 | 2.0077 |
| 1.8046 | 0.66 | 104000 | 2.0101 |
| 1.7978 | 0.67 | 106000 | 2.0150 |
| 1.7934 | 0.68 | 108000 | 2.0215 |
| 1.7904 | 0.7 | 110000 | 2.0278 |
| 1.7871 | 0.71 | 112000 | 2.0588 |
| 1.779 | 0.72 | 114000 | 2.0062 |
| 1.7784 | 0.73 | 116000 | 2.0300 |
| 1.7749 | 0.75 | 118000 | 1.9664 |
| 1.7691 | 0.76 | 120000 | 2.0033 |
| 1.7622 | 0.77 | 122000 | 1.9983 |
| 1.7587 | 0.78 | 124000 | 2.0030 |
| 1.755 | 0.8 | 126000 | 1.9955 |
| 1.7531 | 0.81 | 128000 | 1.9764 |
| 1.7439 | 0.82 | 130000 | 1.9942 |
| 1.7406 | 0.83 | 132000 | 2.0221 |
| 1.7385 | 0.85 | 134000 | 1.9835 |
| 1.7332 | 0.86 | 136000 | 1.9967 |
| 1.7332 | 0.87 | 138000 | 2.0247 |
| 1.7309 | 0.88 | 140000 | 1.9817 |
| 1.7248 | 0.9 | 142000 | 2.0063 |
| 1.7209 | 0.91 | 144000 | 1.9583 |
| 1.7154 | 0.92 | 146000 | 1.9779 |
| 1.7153 | 0.94 | 148000 | 1.9478 |
| 1.7094 | 0.95 | 150000 | 1.9706 |
| 1.7061 | 0.96 | 152000 | 1.9605 |
| 1.7017 | 0.97 | 154000 | 1.9447 |
| 1.6965 | 0.99 | 156000 | 1.9419 |
| 1.6929 | 1.0 | 158000 | 1.9589 |
| 1.6628 | 1.01 | 160000 | 1.9383 |
| 1.6535 | 1.02 | 162000 | 1.9487 |
| 1.6495 | 1.04 | 164000 | 1.9400 |
| 1.6516 | 1.05 | 166000 | 1.9353 |
| 1.6513 | 1.06 | 168000 | 1.9253 |
| 1.6518 | 1.07 | 170000 | 1.9132 |
| 1.6491 | 1.09 | 172000 | 1.9076 |
| 1.6453 | 1.1 | 174000 | 1.9192 |
| 1.6426 | 1.11 | 176000 | 1.9191 |
| 1.6353 | 1.13 | 178000 | 1.9367 |
| 1.6352 | 1.14 | 180000 | 1.9218 |
| 1.6304 | 1.15 | 182000 | 1.9305 |
| 1.6299 | 1.16 | 184000 | 1.9072 |
| 1.6263 | 1.18 | 186000 | 1.9211 |
| 1.6284 | 1.19 | 188000 | 1.9037 |
| 1.6237 | 1.2 | 190000 | 1.8951 |
| 1.6231 | 1.21 | 192000 | 1.8998 |
| 1.6184 | 1.23 | 194000 | 1.8960 |
| 1.6153 | 1.24 | 196000 | 1.8776 |
| 1.6122 | 1.25 | 198000 | 1.8747 |
| 1.6109 | 1.26 | 200000 | 1.8951 |
| 1.6072 | 1.28 | 202000 | 1.8705 |
| 1.6094 | 1.29 | 204000 | 1.8903 |
| 1.6063 | 1.3 | 206000 | 1.8660 |
| 1.599 | 1.31 | 208000 | 1.8696 |
| 1.5931 | 1.33 | 210000 | 1.8598 |
| 1.5943 | 1.34 | 212000 | 1.8760 |
| 1.5906 | 1.35 | 214000 | 1.8833 |
| 1.5858 | 1.37 | 216000 | 1.8645 |
| 1.5873 | 1.38 | 218000 | 1.8620 |
| 1.5842 | 1.39 | 220000 | 1.8632 |
| 1.5808 | 1.4 | 222000 | 1.8782 |
| 1.5756 | 1.42 | 224000 | 1.8627 |
| 1.5728 | 1.43 | 226000 | 1.8649 |
| 1.5709 | 1.44 | 228000 | 1.8735 |
| 1.5704 | 1.45 | 230000 | 1.8630 |
| 1.5659 | 1.47 | 232000 | 1.8598 |
| 1.5637 | 1.48 | 234000 | 1.8519 |
| 1.5628 | 1.49 | 236000 | 1.8569 |
| 1.5559 | 1.5 | 238000 | 1.8401 |
| 1.5532 | 1.52 | 240000 | 1.8528 |
| 1.557 | 1.53 | 242000 | 1.8637 |
| 1.5499 | 1.54 | 244000 | 1.8701 |
| 1.5476 | 1.55 | 246000 | 1.8423 |
| 1.5502 | 1.57 | 248000 | 1.8320 |
| 1.5469 | 1.58 | 250000 | 1.8542 |
| 1.5382 | 1.59 | 252000 | 1.8526 |
| 1.5396 | 1.61 | 254000 | 1.8537 |
| 1.528 | 1.62 | 256000 | 1.8248 |
| 1.532 | 1.63 | 258000 | 1.8322 |
| 1.5269 | 1.64 | 260000 | 1.8381 |
| 1.5269 | 1.66 | 262000 | 1.8389 |
| 1.5269 | 1.67 | 264000 | 1.8445 |
| 1.525 | 1.68 | 266000 | 1.8232 |
| 1.5175 | 1.69 | 268000 | 1.8561 |
| 1.5172 | 1.71 | 270000 | 1.8342 |
| 1.5174 | 1.72 | 272000 | 1.8167 |
| 1.5114 | 1.73 | 274000 | 1.8281 |
| 1.5094 | 1.74 | 276000 | 1.8164 |
| 1.5083 | 1.76 | 278000 | 1.8317 |
| 1.5047 | 1.77 | 280000 | 1.8207 |
| 1.5045 | 1.78 | 282000 | 1.8155 |
| 1.497 | 1.8 | 284000 | 1.8275 |
| 1.4996 | 1.81 | 286000 | 1.8152 |
| 1.497 | 1.82 | 288000 | 1.8137 |
| 1.4967 | 1.83 | 290000 | 1.8109 |
| 1.4936 | 1.85 | 292000 | 1.8037 |
| 1.4867 | 1.86 | 294000 | 1.7955 |
| 1.4859 | 1.87 | 296000 | 1.8181 |
| 1.4869 | 1.88 | 298000 | 1.7999 |
| 1.4811 | 1.9 | 300000 | 1.8062 |
| 1.4831 | 1.91 | 302000 | 1.8042 |
| 1.4791 | 1.92 | 304000 | 1.8020 |
| 1.4797 | 1.93 | 306000 | 1.7972 |
| 1.483 | 1.95 | 308000 | 1.8044 |
| 1.4748 | 1.96 | 310000 | 1.8036 |
| 1.4772 | 1.97 | 312000 | 1.7958 |
| 1.4708 | 1.98 | 314000 | 1.7967 |
| 1.4743 | 2.0 | 316000 | 1.7947 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
huggingtweets/wickdedaccount
|
huggingtweets
| 2022-06-10T02:20:32Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-10T02:17:51Z |
---
language: en
thumbnail: http://www.huggingtweets.com/wickdedaccount/1654827628283/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1353151127026597889/Yarj5Kfr_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">pp</div>
<div style="text-align: center; font-size: 14px;">@wickdedaccount</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from pp.
| Data | pp |
| --- | --- |
| Tweets downloaded | 1028 |
| Retweets | 822 |
| Short tweets | 119 |
| Tweets kept | 87 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1of8kmw1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @wickdedaccount's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2q4m95l8) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2q4m95l8/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/wickdedaccount')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ExusAI/SRWNN
|
ExusAI
| 2022-06-10T00:54:14Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2022-06-10T00:45:58Z |
---
license: mit
---
Super-resolution model for anime and illustrations based on VGG11 and waifu2x. This model was trained on around 10k high-resolution images (at least HD).
https://github.com/Exusai/SuperResolutionWaifuNN
|
nestoralvaro/mt5-base-finetuned-xsum-data_prep_2021_12_26___t1_7.csv___topic_text_google_mt5_base
|
nestoralvaro
| 2022-06-10T00:52:35Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-09T23:49:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-base-finetuned-xsum-data_prep_2021_12_26___t1_7.csv___topic_text_google_mt5_base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-xsum-data_prep_2021_12_26___t1_7.csv___topic_text_google_mt5_base
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 2.8146
- Rouge2: 0.6707
- Rougel: 2.8187
- Rougelsum: 2.8098
- Gen Len: 6.4901
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.0 | 1.0 | 3869 | nan | 2.8146 | 0.6707 | 2.8187 | 2.8098 | 6.4901 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Birb80/Bird
|
Birb80
| 2022-06-09T21:17:59Z | 0 | 0 | null |
[
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2022-06-09T21:17:59Z |
---
license: bigscience-bloom-rail-1.0
---
|
fbadine/uk_ireland_accent_classification
|
fbadine
| 2022-06-09T20:07:40Z | 8 | 1 |
tf-keras
|
[
"tf-keras",
"tensorboard",
"license:apache-2.0",
"region:us"
] | null | 2022-03-09T16:53:02Z |
---
license: apache-2.0
---
## UK & Ireland Accent Classification Model
This model classifies UK & Ireland accents using feature extraction from [Yamnet](https://tfhub.dev/google/yamnet/1).
### Yamnet Model
Yamnet is an audio event classifier trained on the AudioSet dataset to predict audio events from the AudioSet ontology. It is available on TensorFlow Hub.
Yamnet accepts a 1-D tensor of audio samples with a sample rate of 16 kHz.
As output, the model returns a 3-tuple:
- Scores of shape `(N, 521)` representing the scores of the 521 classes.
- Embeddings of shape `(N, 1024)`.
- The log-mel spectrogram of the entire audio frame.
We will use the embeddings, which are the features extracted from the audio samples, as the input to our dense model.
For more detailed information about Yamnet, please refer to its [TensorFlow Hub](https://tfhub.dev/google/yamnet/1) page.
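As a minimal sketch of the feature-extraction step (the dummy waveform below stands in for real 16 kHz audio; this is illustrative usage, not the authors' training code):
```python
import tensorflow as tf
import tensorflow_hub as hub

# Load Yamnet from TensorFlow Hub; it expects a 1-D float32 waveform sampled at 16 kHz
yamnet = hub.load("https://tfhub.dev/google/yamnet/1")
waveform = tf.zeros(16000, dtype=tf.float32)  # one second of silence as a stand-in for real audio
scores, embeddings, log_mel = yamnet(waveform)
print(scores.shape, embeddings.shape)  # (frames, 521) class scores, (frames, 1024) embeddings
```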
### Dense Model
The dense model that we used consists of:
- An input layer which is embedding output of the Yamnet classifier.
- 4 dense hidden layers and 4 dropout layers.
- An output dense layer.
<details>
<summary>View Model Plot</summary>

</details>
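For illustration, a Keras sketch of a head with this shape is shown below; the layer widths and dropout rates are assumptions, since the card records only the structure, not the exact values:
```python
from tensorflow import keras

# A minimal sketch of the dense head described above; units and dropout
# rates are invented for illustration, not taken from the trained model.
inputs = keras.Input(shape=(1024,))  # Yamnet embedding size
x = inputs
for units in (512, 256, 128, 64):  # 4 dense hidden layers + 4 dropout layers
    x = keras.layers.Dense(units, activation="relu")(x)
    x = keras.layers.Dropout(0.2)(x)
outputs = keras.layers.Dense(6, activation="softmax")(x)  # 6 accent classes
model = keras.Model(inputs, outputs)
```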
---
## Results
The model achieved the following results:
Results | Training | Validation
-----------|-----------|------------
Accuracy | 55% | 51%
AUC | 0.9090 | 0.8911
d-prime | 1.887 | 1.743
And the confusion matrix for the validation set is:

---
## Dataset
The dataset used is the
[Crowdsourced high-quality UK and Ireland English Dialect speech data set](https://openslr.org/83/)
which consists of a total of 17,877 high-quality audio wav files.
This dataset includes over 31 hours of recording from 120 volunteers who self-identify as
native speakers of Southern England, Midlands, Northern England, Wales, Scotland and Ireland.
For more info, please refer to the above link or to the following paper:
[Open-source Multi-speaker Corpora of the English Accents in the British Isles](https://aclanthology.org/2020.lrec-1.804.pdf)
---
## How to use
Having already installed `huggingface_hub` using `pip install -U -q huggingface_hub`, use the following in your code:
```python
from huggingface_hub import from_pretrained_keras

model = from_pretrained_keras("fbadine/uk_ireland_accent_classification")
```
---
## Demo
A demo is available in [HuggingFace Spaces](https://huggingface.co/spaces/fbadine/uk_ireland_accent_classification)
|
huggingtweets/midudev
|
huggingtweets
| 2022-06-09T18:48:30Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-09T18:33:17Z |
---
language: en
thumbnail: http://www.huggingtweets.com/midudev/1654800505422/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1526668354609680384/r85fytOs_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">🔴 EN DIRECTO twitch.tv/midudev</div>
<div style="text-align: center; font-size: 14px;">@midudev</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 🔴 EN DIRECTO twitch.tv/midudev.
| Data | 🔴 EN DIRECTO twitch.tv/midudev |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 824 |
| Short tweets | 163 |
| Tweets kept | 2259 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/11iwoc6b/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @midudev's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/s48ktc1m) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/s48ktc1m/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/midudev')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
bookpanda/wangchanberta-base-att-spm-uncased-finetuned-imdb
|
bookpanda
| 2022-06-09T18:17:16Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"camembert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-05-28T08:22:04Z |
---
tags:
- generated_from_trainer
model-index:
- name: wangchanberta-base-att-spm-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wangchanberta-base-att-spm-uncased-finetuned-imdb
This model is a fine-tuned version of [airesearch/wangchanberta-base-att-spm-uncased](https://huggingface.co/airesearch/wangchanberta-base-att-spm-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0810
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.1831 | 1.0 | 4826 | 0.1542 |
| 0.1 | 2.0 | 9652 | 0.1075 |
| 0.0946 | 3.0 | 14478 | 0.0443 |
| 0.0618 | 4.0 | 19304 | 0.0830 |
| 0.0783 | 5.0 | 24130 | 0.0810 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.11.0+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
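For completeness, a minimal usage sketch (assumed, not from the authors): the model is a CamemBERT-style masked LM, so `<mask>` should be its mask token:
```python
from transformers import pipeline

# Assumed usage: fill in the masked token of a Thai sentence
fill = pipeline("fill-mask", model="bookpanda/wangchanberta-base-att-spm-uncased-finetuned-imdb")
print(fill("อาหารร้านนี้<mask>มาก"))
```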
|
nbroad/jplu-xlm-r-ner-40-lang
|
nbroad
| 2022-06-09T17:51:49Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-04-27T15:22:16Z |
PyTorch version of [jplu/tf-xlm-r-ner-40-lang](https://huggingface.co/jplu/tf-xlm-r-ner-40-lang).
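A minimal usage sketch (assumed; the upstream TF card documents the 40-language NER task, and the PyTorch port should work with the standard pipeline):
```python
from transformers import pipeline

# Assumed usage: multilingual NER with the PyTorch port
ner = pipeline("token-classification", model="nbroad/jplu-xlm-r-ner-40-lang",
               aggregation_strategy="simple")
print(ner("Hugging Face est basée à New York."))
```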
|
kabelomalapane/En-Ts
|
kabelomalapane
| 2022-06-09T17:33:20Z | 69 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-06-09T16:33:13Z |
---
license: apache-2.0
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: En-Ts
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# En-Ts
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ts](https://huggingface.co/Helsinki-NLP/opus-mt-en-ts) on the None dataset.
It achieves the following results on the evaluation set:
Before training:
- Loss: 3.17
- Bleu: 14.513
After training:
- Loss: 1.3320
- Bleu: 36.7687
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 1.7082 | 1.0 | 5929 | 1.6902 | 32.1311 |
| 1.4606 | 2.0 | 11858 | 1.4996 | 34.1129 |
| 1.3182 | 3.0 | 17787 | 1.4107 | 35.7428 |
| 1.2543 | 4.0 | 23716 | 1.3631 | 36.2009 |
| 1.2116 | 5.0 | 29645 | 1.3389 | 36.5876 |
| 1.1723 | 6.0 | 35574 | 1.3320 | 36.7481 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
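A minimal usage sketch (assumed; the card itself gives no usage code) for English-to-Tsonga translation:
```python
from transformers import pipeline

# Assumed usage: translate English input to Tsonga with the fine-tuned Marian model
translator = pipeline("translation", model="kabelomalapane/En-Ts")
print(translator("The weather is beautiful today.")[0]["translation_text"])
```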
|
veb/twitch-distilbert-base-uncased-finetuned-sst-2-english
|
veb
| 2022-06-09T17:33:12Z | 7 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-09T16:58:37Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: veb/twitch-distilbert-base-uncased-finetuned-sst-2-english
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# veb/twitch-distilbert-base-uncased-finetuned-sst-2-english
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3074
- Train Sparse Categorical Accuracy: 0.9219
- Validation Loss: 0.1151
- Validation Sparse Categorical Accuracy: 1.0
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch |
|:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:|
| 1.0992 | 0.6094 | 0.3072 | 1.0 | 0 |
| 0.3921 | 0.7812 | 0.2903 | 1.0 | 1 |
| 0.3074 | 0.9219 | 0.1151 | 1.0 | 2 |
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.7.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
ajtamayoh/NLP-CIC-WFU_Clinical_Cases_NER_Sents_Tokenized_bertin_roberta_base_spanish_fine_tuned
|
ajtamayoh
| 2022-06-09T17:15:48Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-09T16:33:08Z |
---
license: cc-by-4.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: NLP-CIC-WFU_Clinical_Cases_NER_Sents_Tokenized_bertin_roberta_base_spanish_fine_tuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NLP-CIC-WFU_Clinical_Cases_NER_Sents_Tokenized_bertin_roberta_base_spanish_fine_tuned
This model is a fine-tuned version of [bertin-project/bertin-roberta-base-spanish](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0973
- Precision: 0.9012
- Recall: 0.6942
- F1: 0.7842
- Accuracy: 0.9857
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0605 | 1.0 | 2568 | 0.0625 | 0.9400 | 0.6322 | 0.7560 | 0.9836 |
| 0.0475 | 2.0 | 5136 | 0.0622 | 0.9533 | 0.6572 | 0.7781 | 0.9849 |
| 0.0374 | 3.0 | 7704 | 0.0552 | 0.9261 | 0.6784 | 0.7831 | 0.9855 |
| 0.0246 | 4.0 | 10272 | 0.0693 | 0.9381 | 0.6658 | 0.7788 | 0.9849 |
| 0.0126 | 5.0 | 12840 | 0.0974 | 0.8918 | 0.6830 | 0.7735 | 0.9849 |
| 0.0061 | 6.0 | 15408 | 0.0886 | 0.8771 | 0.7099 | 0.7847 | 0.9850 |
| 0.0031 | 7.0 | 17976 | 0.0973 | 0.9012 | 0.6942 | 0.7842 | 0.9857 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
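A minimal usage sketch (assumed; the entity labels depend on the training data, which the card leaves unspecified):
```python
from transformers import pipeline

# Assumed usage: Spanish clinical-case NER with the fine-tuned model
ner = pipeline(
    "token-classification",
    model="ajtamayoh/NLP-CIC-WFU_Clinical_Cases_NER_Sents_Tokenized_bertin_roberta_base_spanish_fine_tuned",
    aggregation_strategy="simple",
)
print(ner("El paciente presenta fiebre persistente y dolor abdominal."))
```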
|
GioReg/notiBERTo
|
GioReg
| 2022-06-09T17:08:29Z | 160 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-04-07T14:24:36Z |
---
language:
- it
---
A model called notiBERTo was created by running a training phase that built and tuned the model weights with the unsupervised masked-language modeling (MLM) objective; this objective requires no labeled text. The idea was to obtain a BERT-based model for Italian focused on the language typically used in online news, so that it could reproduce the style and lexicon of the press.
For the input data, publicly available online databases organized by the "Wortschatz Leipzig" portal of Leipzig University were used. The portal gives access to the "Leipzig corpora collection", which holds 900 text collections divided by language (250 languages are represented) and topic, obtained mainly by crawling websites. In particular, the chosen databases were collections of news gathered daily through RSS feeds and databases obtained by crawling the main Italian news websites, split into sub-databases by collection year. For the creation of notiBERTo, databases covering 2018, 2019, and 2020 were used, totaling about 700 MB.
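A minimal usage sketch (assumed): since the model is tagged as a RoBERTa fill-mask model, it can be queried with the `fill-mask` pipeline and the `<mask>` token:
```python
from transformers import pipeline

# Assumed usage: masked-token prediction with notiBERTo
fill = pipeline("fill-mask", model="GioReg/notiBERTo")
print(fill("Il governo ha approvato la nuova <mask>."))
```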
|
huggingtweets/medscape
|
huggingtweets
| 2022-06-09T16:30:23Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-09T16:29:41Z |
---
language: en
thumbnail: http://www.huggingtweets.com/medscape/1654792218439/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1401919208133378050/l2MKtnC7_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Medscape</div>
<div style="text-align: center; font-size: 14px;">@medscape</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Medscape.
| Data | Medscape |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 16 |
| Short tweets | 2 |
| Tweets kept | 3232 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/mn0jpyr0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @medscape's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3n6qbw51) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3n6qbw51/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/medscape')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/sorcehri
|
huggingtweets
| 2022-06-09T16:22:35Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-09T16:20:26Z |
---
language: en
thumbnail: http://www.huggingtweets.com/sorcehri/1654791699329/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1511431988720414730/A1kqPr25_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ehri</div>
<div style="text-align: center; font-size: 14px;">@sorcehri</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ehri.
| Data | ehri |
| --- | --- |
| Tweets downloaded | 3233 |
| Retweets | 280 |
| Short tweets | 837 |
| Tweets kept | 2116 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1gn4h8q0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sorcehri's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/7zs978ln) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/7zs978ln/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/sorcehri')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ksabeh/roberta-base-attribute-correction-mlm-titles
|
ksabeh
| 2022-06-09T15:44:28Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"question-answering",
"generated_from_keras_callback",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-06-09T08:42:02Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: ksabeh/roberta-base-attribute-correction-mlm-titles-2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ksabeh/roberta-base-attribute-correction-mlm-titles-2
This model is a fine-tuned version of [ksabeh/roberta-base-attribute-correction-mlm](https://huggingface.co/ksabeh/roberta-base-attribute-correction-mlm) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0822
- Validation Loss: 0.0914
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 23870, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.2007 | 0.1023 | 0 |
| 0.0822 | 0.0914 | 1 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Khaled002/Yy
|
Khaled002
| 2022-06-09T14:22:32Z | 0 | 0 | null |
[
"license:bsd-3-clause-clear",
"region:us"
] | null | 2022-06-09T14:22:32Z |
---
license: bsd-3-clause-clear
---
|
sschellhammer/SciTweets_SciBert
|
sschellhammer
| 2022-06-09T14:03:30Z | 97 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-04T06:16:44Z |
---
license: cc-by-4.0
widget:
- text: "Study: Shifts in electricity generation spur net job growth, but coal jobs decline - via @DukeU https://www.eurekalert.org/news-releases/637217"
example_title: "All categories"
- text: "Shifts in electricity generation spur net job growth, but coal jobs decline"
example_title: "Only Cat 1.1"
- text: "Study on impacts of electricity generation shift via @DukeU https://www.eurekalert.org/news-releases/637217"
example_title: "Only Cat 1.2 and 1.3"
- text: "@DukeU received grant for research on electricity generation shift"
example_title: "Only Cat 1.3"
---
This SciBert-based multi-label classifier, trained as part of the work "SciTweets - A Dataset and Annotation Framework for Detecting Scientific Online Discourse", distinguishes three different forms of science-relatedness for Tweets. See details at https://github.com/AI-4-Sci/SciTweets .
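A minimal usage sketch (assumed; `top_k=None` is a guess at how to surface all three category scores, matching the multi-label setup):
```python
from transformers import pipeline

# Assumed usage: score a tweet against all three science-relatedness categories
classifier = pipeline("text-classification", model="sschellhammer/SciTweets_SciBert", top_k=None)
print(classifier("Study: Shifts in electricity generation spur net job growth, but coal jobs decline"))
```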
|
YeRyeongLee/electra-base-discriminator-finetuned-filtered-0609
|
YeRyeongLee
| 2022-06-09T14:00:07Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"electra",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-09T07:24:13Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: electra-base-discriminator-finetuned-filtered-0609
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-base-discriminator-finetuned-filtered-0609
This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1933
- Accuracy: 0.9745
- Precision: 0.9747
- Recall: 0.9745
- F1: 0.9746
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.238 | 1.0 | 3180 | 0.1861 | 0.9682 | 0.9686 | 0.9682 | 0.9682 |
| 0.1827 | 2.0 | 6360 | 0.2262 | 0.9645 | 0.9648 | 0.9645 | 0.9644 |
| 0.1326 | 3.0 | 9540 | 0.1904 | 0.9711 | 0.9716 | 0.9711 | 0.9712 |
| 0.1575 | 4.0 | 12720 | 0.2065 | 0.9676 | 0.9680 | 0.9676 | 0.9676 |
| 0.1224 | 5.0 | 15900 | 0.2666 | 0.9557 | 0.9571 | 0.9557 | 0.9558 |
| 0.1083 | 6.0 | 19080 | 0.1697 | 0.9752 | 0.9754 | 0.9752 | 0.9752 |
| 0.0792 | 7.0 | 22260 | 0.1684 | 0.9742 | 0.9744 | 0.9742 | 0.9742 |
| 0.0751 | 8.0 | 25440 | 0.1784 | 0.9723 | 0.9726 | 0.9723 | 0.9723 |
| 0.0572 | 9.0 | 28620 | 0.1868 | 0.9736 | 0.9737 | 0.9736 | 0.9736 |
| 0.0593 | 10.0 | 31800 | 0.1933 | 0.9745 | 0.9747 | 0.9745 | 0.9746 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.9.1+cu111
- Datasets 1.16.1
- Tokenizers 0.12.1
|
Nehc/FakeMobile
|
Nehc
| 2022-06-09T13:44:35Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"ru",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-07T18:05:08Z |
---
language:
- ru
widget:
- text: "[CLS] Какая абонентская плата на тарифе Позвони маме? [SEP]"
metrics:
- loss: 0.704381
- accuracy: 1.000000
---
Starts from 'DeepPavlov/rubert-base-cased' and is fine-tuned on DUMBOT fake data (http://dumbot.ru/Home/MobileOperatorRate) for 100 epochs.
Work in progress...
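A minimal usage sketch (assumed): the widget example on this card wraps queries in `[CLS] ... [SEP]`, so the sketch does the same:
```python
from transformers import pipeline

# Assumed usage: classify a tariff question, formatted as in the card's widget example
clf = pipeline("text-classification", model="Nehc/FakeMobile")
print(clf("[CLS] Какая абонентская плата на тарифе Позвони маме? [SEP]"))
```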
|
i8pxgd2s/q-Taxi-v3
|
i8pxgd2s
| 2022-06-09T13:26:49Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-09T13:26:40Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# load_from_hub and evaluate_agent are helper functions defined in the
# Hugging Face Deep RL course notebook; they are not part of a published package.
model = load_from_hub(repo_id="i8pxgd2s/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
qualitydatalab/autotrain-car-review-project-966432121
|
qualitydatalab
| 2022-06-09T13:04:21Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain",
"en",
"dataset:qualitydatalab/autotrain-data-car-review-project",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-09T12:30:26Z |
---
tags: autotrain
language: en
widget:
- text: "I love driving this car"
datasets:
- qualitydatalab/autotrain-data-car-review-project
co2_eq_emissions: 0.21529888368377176
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 966432121
- CO2 Emissions (in grams): 0.21529888368377176
## Validation Metrics
- Loss: 0.6013365983963013
- Accuracy: 0.737791286727457
- Macro F1: 0.729171012281939
- Micro F1: 0.737791286727457
- Weighted F1: 0.729171012281939
- Macro Precision: 0.7313770127538427
- Micro Precision: 0.737791286727457
- Weighted Precision: 0.7313770127538428
- Macro Recall: 0.737791286727457
- Micro Recall: 0.737791286727457
- Weighted Recall: 0.737791286727457
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love driving this car"}' https://api-inference.huggingface.co/models/qualitydatalab/autotrain-car-review-project-966432121
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("qualitydatalab/autotrain-car-review-project-966432121", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("qualitydatalab/autotrain-car-review-project-966432121", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
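As a small follow-up (an assumption, not part of the AutoTrain output): the logits in `outputs` can be mapped to a label via the model config's `id2label`:
```python
import torch

# Pick the highest-scoring class and map it back to its label name
predicted_id = int(torch.argmax(outputs.logits, dim=-1))
print(model.config.id2label[predicted_id])
```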
|
huggingtweets/zaidalyafeai
|
huggingtweets
| 2022-06-09T13:03:12Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-09T13:02:27Z |
---
language: en
thumbnail: http://www.huggingtweets.com/zaidalyafeai/1654779787447/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1521723273922461696/m8_zotM4_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Zaid زيد</div>
<div style="text-align: center; font-size: 14px;">@zaidalyafeai</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Zaid زيد.
| Data | Zaid زيد |
| --- | --- |
| Tweets downloaded | 2295 |
| Retweets | 74 |
| Short tweets | 217 |
| Tweets kept | 2004 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/39e5cxbb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @zaidalyafeai's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2uc681wq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2uc681wq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/zaidalyafeai')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/bbclaurakt
|
huggingtweets
| 2022-06-09T12:48:19Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-09T12:47:22Z |
---
language: en
thumbnail: http://www.huggingtweets.com/bbclaurakt/1654778894531/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1533553176619716608/4klYwjkC_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Laura Kuenssberg Translator</div>
<div style="text-align: center; font-size: 14px;">@bbclaurakt</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Laura Kuenssberg Translator.
| Data | Laura Kuenssberg Translator |
| --- | --- |
| Tweets downloaded | 2063 |
| Retweets | 23 |
| Short tweets | 135 |
| Tweets kept | 1905 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/37mk0av7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bbclaurakt's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3a8gt7bb) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3a8gt7bb/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/bbclaurakt')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|