modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
huggingtweets/dodecahedra | huggingtweets | 2022-06-12T17:42:15Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-06-12T17:37:18Z | ---
language: en
thumbnail: http://www.huggingtweets.com/dodecahedra/1655055731499/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/3232494514/760c72bca0af20fac2cd61bcec557e7a_400x400.jpeg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">William Rose</div>
<div style="text-align: center; font-size: 14px;">@dodecahedra</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from William Rose.
| Data | William Rose |
| --- | --- |
| Tweets downloaded | 3241 |
| Retweets | 1115 |
| Short tweets | 158 |
| Tweets kept | 1968 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1geru0ac/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dodecahedra's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1uy1zk82) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1uy1zk82/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dodecahedra')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
nlokam99/ada_sample_2 | nlokam99 | 2022-06-12T17:40:42Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-06-12T17:38:56Z | ---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
tags:
- conversational
license: mit
--- |
obokkkk/kc-bert_finetuned_unsmile | obokkkk | 2022-06-12T17:22:32Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-12T14:39:40Z | ---
tags:
- generated_from_trainer
model-index:
- name: kc-bert_finetuned_unsmile
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kc-bert_finetuned_unsmile
This model is a fine-tuned version of [beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1326
- Lrap: 0.8753
## Model description
More information needed
## Intended uses & limitations
More information needed
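A hedged inference sketch (the reported LRAP metric suggests a multi-label head, so independent sigmoid scores per label are assumed; the Korean example sentence is illustrative):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "obokkkk/kc-bert_finetuned_unsmile"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("이 문장은 예시입니다.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.sigmoid(logits)  # multi-label: independent probability per label (assumed)
print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs[0])})
```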
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Lrap |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 235 | 0.1458 | 0.8612 |
| No log | 2.0 | 470 | 0.1280 | 0.8738 |
| 0.1685 | 3.0 | 705 | 0.1257 | 0.8791 |
| 0.1685 | 4.0 | 940 | 0.1281 | 0.8777 |
| 0.0774 | 5.0 | 1175 | 0.1326 | 0.8753 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 1.17.0
- Tokenizers 0.12.1
|
comodoro/SpaceInvadersNoFrameskip-v4 | comodoro | 2022-06-12T16:55:50Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-06-12T16:55:06Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 680.00 +/- 211.93
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga comodoro -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga comodoro
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
vasudevgupta/speech_jax_wav2vec2-large-lv60_960h | vasudevgupta | 2022-06-12T16:10:32Z | 7 | 0 | transformers | [
"transformers",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-05-29T20:52:47Z | * Evaluation Notebook: https://colab.research.google.com/drive/1dV1Z3WajMCYMjNZab98CEEcg3FTbtONO?usp=sharing
* Training Code: https://github.com/vasudevgupta7/speech-jax/blob/main/projects/finetune_wav2vec2.py
* Weights & Biases: https://wandb.ai/7vasudevgupta/speech-JAX?workspace=user-7vasudevgupta
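A minimal JAX inference sketch (it assumes this repo ships a matching processor; if not, the processor can be loaded from `facebook/wav2vec2-large-lv60`):
```python
import numpy as np
import jax.numpy as jnp
from transformers import AutoProcessor, FlaxWav2Vec2ForCTC

repo = "vasudevgupta/speech_jax_wav2vec2-large-lv60_960h"
processor = AutoProcessor.from_pretrained(repo)
model = FlaxWav2Vec2ForCTC.from_pretrained(repo)

audio = np.zeros(16_000, dtype=np.float32)  # placeholder: 1 second of silence at 16 kHz
inputs = processor(audio, sampling_rate=16_000, return_tensors="np")
logits = model(inputs.input_values).logits
pred_ids = jnp.argmax(logits, axis=-1)
print(processor.batch_decode(pred_ids))
```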
The following results were obtained with `23ffe236840b7f75c9f01a9c347b01485a2bf9f6` & `95c3bc1b83c74452df29f792e0b5651c09fdaeb9`:
| dataset | WER |
|------------------------|-------|
| Librispeech-test-clean | 3.3 % | |
kravchenko/uk-mt5-base | kravchenko | 2022-06-12T14:57:59Z | 14 | 4 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"t5",
"uk",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-05-03T09:41:33Z | ---
language:
- uk
- en
tags:
- t5
---
The aim is to compress the mT5-base model to leave only the Ukrainian language and some basic English.
This reproduces a similar result (but for another language) from [this](https://towardsdatascience.com/how-to-adapt-a-multilingual-t5-model-for-a-single-language-b9f94f3d9c90) Medium article.
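A minimal loading sketch (the span-filling prompt is illustrative only; the base model is not fine-tuned for a downstream task):
```python
from transformers import AutoTokenizer, MT5ForConditionalGeneration

repo = "kravchenko/uk-mt5-base"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = MT5ForConditionalGeneration.from_pretrained(repo)

# T5-style span filling with a sentinel token
inputs = tokenizer("Київ є столицею <extra_id_0>.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```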
Results:
- 582M params -> 244M params (58%)
- 250K tokens -> 30K tokens
- 2.2GB size model -> 0.95GB size model |
jianyang/q-Taxi-v3 | jianyang | 2022-06-12T14:13:37Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-06-12T14:13:31Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# load_from_hub and evaluate_agent are helper functions from the training
# notebook (e.g. the Hugging Face Deep RL course); they are not part of a published package.
model = load_from_hub(repo_id="jianyang/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
ahmeddbahaa/mt5-base-finetune-ar-xlsum | ahmeddbahaa | 2022-06-12T13:55:10Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"mT5_multilingual_XLSum",
"abstractive summarization",
"ar",
"xlsum",
"generated_from_trainer",
"dataset:xlsum",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2022-06-11T20:41:00Z | ---
license: apache-2.0
tags:
- summarization
- mT5_multilingual_XLSum
- mt5
- abstractive summarization
- ar
- xlsum
- generated_from_trainer
datasets:
- xlsum
model-index:
- name: mt5-base-finetune-ar-xlsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetune-ar-xlsum
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2546
- Rouge-1: 22.2
- Rouge-2: 9.57
- Rouge-l: 20.26
- Gen Len: 19.0
- Bertscore: 71.43
## Model description
More information needed
## Intended uses & limitations
More information needed
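A minimal usage sketch with the `summarization` pipeline (the Arabic text below is a placeholder):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="ahmeddbahaa/mt5-base-finetune-ar-xlsum")
article = "ضع هنا نص المقال العربي الذي تريد تلخيصه."  # placeholder article text
print(summarizer(article, max_length=64, truncation=True)[0]["summary_text"])
```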
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 10
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-l | Gen Len | Bertscore |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:|
| 4.9261 | 1.0 | 585 | 3.6314 | 18.19 | 6.49 | 16.37 | 19.0 | 70.17 |
| 3.8429 | 2.0 | 1170 | 3.4253 | 19.45 | 7.58 | 17.73 | 19.0 | 70.35 |
| 3.6311 | 3.0 | 1755 | 3.3569 | 20.83 | 8.54 | 18.9 | 19.0 | 70.89 |
| 3.4917 | 4.0 | 2340 | 3.3101 | 20.77 | 8.53 | 18.89 | 19.0 | 70.98 |
| 3.3873 | 5.0 | 2925 | 3.2867 | 21.47 | 9.0 | 19.54 | 19.0 | 71.23 |
| 3.3037 | 6.0 | 3510 | 3.2693 | 21.41 | 9.0 | 19.5 | 19.0 | 71.21 |
| 3.2357 | 7.0 | 4095 | 3.2581 | 22.05 | 9.36 | 20.04 | 19.0 | 71.43 |
| 3.1798 | 8.0 | 4680 | 3.2522 | 22.21 | 9.56 | 20.23 | 19.0 | 71.41 |
| 3.1359 | 9.0 | 5265 | 3.2546 | 22.27 | 9.58 | 20.23 | 19.0 | 71.46 |
| 3.0997 | 10.0 | 5850 | 3.2546 | 22.2 | 9.57 | 20.26 | 19.0 | 71.43 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
keras-io/ProbabalisticBayesianModel-Wine | keras-io | 2022-06-12T13:54:27Z | 0 | 2 | keras | [
"keras",
"tensorboard",
"probabilistic-models",
"regression",
"region:us"
] | null | 2022-06-06T15:36:50Z | ---
library_name: keras
tags:
- probabilistic-models
- regression
---
## Model description
This repo contains model weights for the probabilistic model from [Probabilistic Bayesian Neural Networks](https://keras.io/examples/keras_recipes/bayesian_neural_networks/).
Taking a probabilistic approach to deep learning allows us to account for uncertainty, so that models can assign lower confidence to incorrect predictions. Sources of uncertainty can be found in the data, due to measurement error or noise in the labels, or in the model, due to insufficient data for the model to learn effectively. The example demonstrates how to build basic probabilistic Bayesian neural networks to account for these two types of uncertainty, using the TensorFlow Probability library, which is compatible with the Keras API.
**Full credits go to [Khalid Salama](https://www.linkedin.com/in/khalid-salama-24403144/)**
## Using this model
This repo contains model weights only. To use this model, refer to the code in `load_bnn_model.py` included in this repo.
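To fetch the weight files locally before running that script, a minimal `huggingface_hub` sketch (rebuilding the network itself follows the linked Keras example):
```python
from huggingface_hub import snapshot_download

# Downloads this repo's weight files; rebuilding the model and calling
# load_weights() on it follows load_bnn_model.py / the linked Keras example.
local_dir = snapshot_download(repo_id="keras-io/ProbabalisticBayesianModel-Wine")
print(local_dir)
```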
## Training and evaluation data 🍷
We use the wine quality dataset found [here](https://www.tensorflow.org/datasets/catalog/wine_quality). Each wine was scored from 0-10 by wine experts, and the dataset includes 11 physicochemical features for each wine.
## Versioning
The training was done using TensorFlow 2.8.0 and TensorFlow Probability 0.16.0. When working with TensorFlow Probability, you are encouraged to check the [releases](https://github.com/tensorflow/probability/releases/tag/v0.17.0) to make sure you are using a compatible TensorFlow counterpart.
### Training hyperparameters
| Optimizer | learning_rate | decay | rho | momentum | epsilon | centered | training_precision |
|----|-------------|-----|------|------|-------|-------|------------------|
|RMSprop|0.001|0.0|0.9|0.0|1e-07|False|float32|
|
nestoralvaro/mt5-base-finetuned-xsum-RAW_data_prep_2021_12_26___t55_403.csv__google_mt5_base | nestoralvaro | 2022-06-12T12:25:16Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-06-12T10:01:09Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-base-finetuned-xsum-RAW_data_prep_2021_12_26___t55_403.csv__google_mt5_base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-xsum-RAW_data_prep_2021_12_26___t55_403.csv__google_mt5_base
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 0.9712
- Rouge2: 0.1329
- Rougel: 0.9638
- Rougelsum: 0.9675
- Gen Len: 6.4489
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.0 | 1.0 | 36479 | nan | 0.9712 | 0.1329 | 0.9638 | 0.9675 | 6.4489 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
FabianWillner/distilbert-base-uncased-finetuned-squad | FabianWillner | 2022-06-12T12:09:32Z | 13 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-05-09T10:41:12Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
metrics:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [FabianWillner/distilbert-base-uncased-finetuned-squad](https://huggingface.co/FabianWillner/distilbert-base-uncased-finetuned-squad) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
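A minimal usage sketch with the `question-answering` pipeline (the question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="FabianWillner/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="What is SQuAD?",
    context="SQuAD is a reading comprehension dataset consisting of questions posed on Wikipedia articles.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```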
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
huggingnft/hedgies | huggingnft | 2022-06-12T12:08:25Z | 7 | 0 | transformers | [
"transformers",
"huggingnft",
"nft",
"huggan",
"gan",
"image",
"images",
"unconditional-image-generation",
"dataset:huggingnft/hedgies",
"license:mit",
"endpoints_compatible",
"region:us"
] | unconditional-image-generation | 2022-05-24T18:12:29Z | ---
tags:
- huggingnft
- nft
- huggan
- gan
- image
- images
- unconditional-image-generation
datasets:
- huggingnft/hedgies
license: mit
---
# Hugging NFT: hedgies
## Disclaimer
All rights belong to their owners. Models and datasets can be removed from the site at the request of the copyright
holder.
## Model description
LightWeight GAN model for unconditional generation.
NFT collection available [here](https://opensea.io/collection/hedgies).
Dataset is available [here](https://huggingface.co/datasets/huggingnft/hedgies).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
Project repository: [link](https://github.com/AlekseyKorshuk/huggingnft).
[](https://github.com/AlekseyKorshuk/huggingnft)
## Intended uses & limitations
#### How to use
Check project repository: [link](https://github.com/AlekseyKorshuk/huggingnft).
#### Limitations and bias
Check project repository: [link](https://github.com/AlekseyKorshuk/huggingnft).
## Training data
Dataset is available [here](https://huggingface.co/datasets/huggingnft/hedgies).
## Training procedure
Training script is available [here](https://github.com/AlekseyKorshuk/huggingnft).
## Generated Images
Check results with Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
### BibTeX entry and citation info
```bibtex
@InProceedings{huggingnft,
author={Aleksey Korshuk},
year=2022
}
```
|
vishvamahadevan/distilbert-base-uncased-finetuned-squad | vishvamahadevan | 2022-06-12T10:34:52Z | 6 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-06-12T08:07:48Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: vishvamahadevan/distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# vishvamahadevan/distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.9560
- Validation Loss: 1.1174
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 11064, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.3862 | 1.1639 | 0 |
| 0.9560 | 1.1174 | 1 |
### Framework versions
- Transformers 4.19.4
- TensorFlow 2.8.2
- Datasets 2.2.2
- Tokenizers 0.12.1
|
huggingtweets/manfightdragon | huggingtweets | 2022-06-12T10:26:35Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-06-12T10:23:38Z | ---
language: en
thumbnail: http://www.huggingtweets.com/manfightdragon/1655029573001/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1184073162520031232/V6DOEeLp_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Lance McDonald</div>
<div style="text-align: center; font-size: 14px;">@manfightdragon</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Lance McDonald.
| Data | Lance McDonald |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 209 |
| Short tweets | 214 |
| Tweets kept | 2826 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3pc794z5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @manfightdragon's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2t8940p5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2t8940p5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/manfightdragon')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/bosstjanz | huggingtweets | 2022-06-12T09:27:34Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-06-12T09:26:54Z | ---
language: en
thumbnail: http://www.huggingtweets.com/bosstjanz/1655026050127/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1342130927737176064/SiNG_CxQ_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ZrimΕ‘kow</div>
<div style="text-align: center; font-size: 14px;">@bosstjanz</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Zrimškow.
| Data | Zrimškow |
| --- | --- |
| Tweets downloaded | 3225 |
| Retweets | 368 |
| Short tweets | 279 |
| Tweets kept | 2578 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/23nemiqj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bosstjanz's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2pjrymzt) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2pjrymzt/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/bosstjanz')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ironbar/dqn-SpaceInvadersNoFrameskip-v4-1M-steps | ironbar | 2022-06-12T08:16:08Z | 11 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-06-12T08:15:30Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 629.50 +/- 140.06
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ironbar -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga ironbar
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
MyMild/finetune_iapp_thaiqa | MyMild | 2022-06-12T07:52:39Z | 57 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"camembert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-06-11T23:05:08Z | ---
tags:
- generated_from_trainer
model-index:
- name: finetune_iapp_thaiqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetune_iapp_thaiqa
This model is a fine-tuned version of [airesearch/wangchanberta-base-att-spm-uncased](https://huggingface.co/airesearch/wangchanberta-base-att-spm-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.10.3
|
spuun/kekbot-mini | spuun | 2022-06-12T05:53:59Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"en",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-06-12T03:40:33Z | ---
language:
- en
metrics:
- accuracy
co2_eq_emissions:
emissions: "10"
source: "mlco2.github.io"
training_type: "fine-tuning"
geographical_location: "West Java, Indonesia"
hardware_used: "1 T4"
license: cc-by-nc-sa-4.0
widget:
- text: 'You: "Hey kekbot! Whats up?"\nKekbot: "'
example_title: "Asking what's up"
- text: 'You: "Hey kekbot! How r u?"\nKekbot: "'
example_title: "Asking how he is"
---
> THIS MODEL IS INTENDED FOR RESEARCH PURPOSES ONLY
# Kekbot Mini
Based on a `distilgpt2` model, fine-tuned on a select subset (65k+ messages) of Art Union's general-chat channel history.
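A minimal generation sketch using the prompt format from the widget examples above:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="spuun/kekbot-mini")
prompt = 'You: "Hey kekbot! Whats up?"\nKekbot: "'
print(generator(prompt, max_new_tokens=40, do_sample=True)[0]["generated_text"])
```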
### Limits and biases
As this model is trained on chat history, it may output discriminatory or even offensive material.
The author maintains that ML models are merely statistical representations of the dataset used to train them,
and that, given the nature of the dataset, it is practically impossible to be certain of
the degree of "cleanliness" of the data it contains.
The author can confirm, however, that in informal testing the model did not produce output he found offensive;
hopefully that remains true for everyone in the audience.
|
xdai/mimic_roberta_base | xdai | 2022-06-12T04:51:26Z | 5 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"Clinical notes",
"Discharge summaries",
"RoBERTa",
"dataset:MIMIC-III",
"arxiv:2204.06683",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-06-12T04:12:20Z | ---
language:
- en
tags:
- Clinical notes
- Discharge summaries
- RoBERTa
license: "cc-by-4.0"
datasets:
- MIMIC-III
---
* Continued pre-training of RoBERTa-base using discharge summaries from the MIMIC-III dataset.
* Details can be found in the following paper
> Xiang Dai and Ilias Chalkidis and Sune Darkner and Desmond Elliott. 2022. Revisiting Transformer-based Models for Long Document Classification. (https://arxiv.org/abs/2204.06683)
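* A minimal fill-mask sketch (the example sentence is illustrative; it assumes the repo ships its tokenizer):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="xdai/mimic_roberta_base")
# RoBERTa-style mask token is <mask>
print(fill_mask("The patient was discharged home in stable <mask>."))
```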
* Important hyper-parameters
| | |
|---|---|
| Max sequence | 128 |
| Batch size | 128 |
| Learning rate | 5e-5 |
| Training epochs | 15 |
| Training time | 40 GPU-hours | |
huggingtweets/tayplaysgaymes | huggingtweets | 2022-06-12T03:56:41Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-06-12T03:55:39Z | ---
language: en
thumbnail: http://www.huggingtweets.com/tayplaysgaymes/1655006196516/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1144053838459969536/lv3yBmoX_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Tay</div>
<div style="text-align: center; font-size: 14px;">@tayplaysgaymes</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Tay.
| Data | Tay |
| --- | --- |
| Tweets downloaded | 3212 |
| Retweets | 693 |
| Short tweets | 367 |
| Tweets kept | 2152 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1hmextiq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tayplaysgaymes's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3r0cse8x) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3r0cse8x/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/tayplaysgaymes')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
bguan/SpaceInvadersNoFrameskip-v4 | bguan | 2022-06-12T01:05:09Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-06-12T01:04:38Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 255.00 +/- 93.83
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga bguan -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga bguan
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 500000),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
TencentMedicalNet/MedicalNet-Resnet10 | TencentMedicalNet | 2022-06-12T00:26:42Z | 0 | 4 | null | [
"MedicalNet",
"medical images",
"medical",
"3D",
"Med3D",
"en",
"dataset:MRBrainS18",
"arxiv:1904.00625",
"license:mit",
"region:us"
] | null | 2022-06-11T23:12:06Z | ---
license: mit
datasets:
- MRBrainS18
language:
- en
metrics:
-
tags:
- MedicalNet
- medical images
- medical
- 3D
- Med3D
thumbnail: "https://github.com/Tencent/MedicalNet/blob/master/images/logo.png?raw=true"
---
# MedicalNet
This repository contains a Pytorch implementation of [Med3D: Transfer Learning for 3D Medical Image Analysis](https://arxiv.org/abs/1904.00625).
Many studies have shown that the performance of deep learning is significantly affected by the volume of training data. The MedicalNet project aggregated datasets with diverse modalities, target organs, and pathologies to build a relatively large dataset. Based on this dataset, a series of 3D-ResNet pre-trained models and corresponding transfer-learning training code are provided.
### License
MedicalNet is released under the MIT License (refer to the LICENSE file for details).
### Citing MedicalNet
If you use this code or pre-trained models, please cite the following:
```
@article{chen2019med3d,
title={Med3D: Transfer Learning for 3D Medical Image Analysis},
author={Chen, Sihong and Ma, Kai and Zheng, Yefeng},
journal={arXiv preprint arXiv:1904.00625},
year={2019}
}
```
### Update(2019/07/30)
We uploaded 4 pre-trained models based on more datasets (23 datasets).
```
Model name : parameters settings
resnet_10_23dataset.pth: --model resnet --model_depth 10 --resnet_shortcut B
resnet_18_23dataset.pth: --model resnet --model_depth 18 --resnet_shortcut A
resnet_34_23dataset.pth: --model resnet --model_depth 34 --resnet_shortcut A
resnet_50_23dataset.pth: --model resnet --model_depth 50 --resnet_shortcut B
```
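A minimal checkpoint-loading sketch (the filename is assumed to match the table above; the 3D-ResNet definition itself lives in the Tencent/MedicalNet GitHub repository):
```python
import torch
from huggingface_hub import hf_hub_download

# The filename is an assumption based on the table above; check the repo's file list.
path = hf_hub_download("TencentMedicalNet/MedicalNet-Resnet10", "resnet_10_23dataset.pth")
checkpoint = torch.load(path, map_location="cpu")
state_dict = checkpoint.get("state_dict", checkpoint)
# Strip the "module." prefix left by DataParallel before load_state_dict on the rebuilt network.
state_dict = {k.replace("module.", "", 1): v for k, v in state_dict.items()}
print(len(state_dict), "weight tensors")
```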
Hugging Face repository contribution by:
[Rafael Zimmer](https://www.github.com/rzimmerdev) |
huggingtweets/laserboat999 | huggingtweets | 2022-06-11T23:53:52Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-06-11T23:49:07Z | ---
language: en
thumbnail: http://www.huggingtweets.com/laserboat999/1654991516445/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1500274766195793921/bA4siut7_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">donald boat</div>
<div style="text-align: center; font-size: 14px;">@laserboat999</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from donald boat.
| Data | donald boat |
| --- | --- |
| Tweets downloaded | 3233 |
| Retweets | 75 |
| Short tweets | 516 |
| Tweets kept | 2642 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/38v40fpf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @laserboat999's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/pk1xum9h) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/pk1xum9h/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/laserboat999')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
DLWCMD/TEST2ppo-LunarLander-v2 | DLWCMD | 2022-06-11T23:39:16Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-06-11T23:38:43Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 263.13 +/- 22.16
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename is assumed; replace it with the actual .zip in this repo.
checkpoint = load_from_hub("DLWCMD/TEST2ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
745H1N/LunarLander-v2-DQN-optuna | 745H1N | 2022-06-11T23:36:51Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-06-11T23:36:25Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: -140.18 +/- 41.67
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **DQN** Agent playing **LunarLander-v2**
This is a trained model of a **DQN** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# The filename is assumed; replace it with the actual .zip in this repo.
checkpoint = load_from_hub("745H1N/LunarLander-v2-DQN-optuna", "dqn-LunarLander-v2.zip")
model = DQN.load(checkpoint)
```
|
aprischa/bart-large-cnn-aprischa2 | aprischa | 2022-06-11T23:27:38Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-06-11T17:40:18Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-aprischa2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-aprischa2
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3425
- Rouge1: 65.7088
- Rouge2: 56.6701
- Rougel: 62.1926
- Rougelsum: 64.7727
- Gen Len: 140.8469
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 0.3772 | 1.0 | 5403 | 0.3586 | 65.7702 | 56.7968 | 62.264 | 64.8605 | 140.268 |
| 0.316 | 2.0 | 10806 | 0.3421 | 64.8238 | 55.8837 | 61.3245 | 63.8894 | 140.7472 |
| 0.2397 | 3.0 | 16209 | 0.3425 | 65.7088 | 56.6701 | 62.1926 | 64.7727 | 140.8469 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
twieland/SCRATCH_ja-en_helsinki | twieland | 2022-06-11T23:01:52Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-06-11T01:05:11Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: SCRATCH_ja-en_helsinki
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SCRATCH_ja-en_helsinki
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ja-en](https://huggingface.co/Helsinki-NLP/opus-mt-ja-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5583
- Otaku Benchmark VN BLEU: 19.12
- Otaku Benchmark LN BLEU: 11.55
- Otaku Benchmark MANGA BLEU: 12.98
## Model description
More information needed
## Intended uses & limitations
More information needed
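A minimal usage sketch with the `translation` pipeline (the Japanese sentence is illustrative):
```python
from transformers import pipeline

translator = pipeline("translation", model="twieland/SCRATCH_ja-en_helsinki")
print(translator("猫が好きです。")[0]["translation_text"])
```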
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 96
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 3.0252 | 0.02 | 2000 | 2.4140 |
| 2.8406 | 0.03 | 4000 | 2.2819 |
| 2.7505 | 0.05 | 6000 | 2.3018 |
| 2.6948 | 0.06 | 8000 | 2.1931 |
| 2.6408 | 0.08 | 10000 | 2.1724 |
| 2.6004 | 0.09 | 12000 | 2.1583 |
| 2.5685 | 0.11 | 14000 | 2.1203 |
| 2.5432 | 0.12 | 16000 | 2.1593 |
| 2.5153 | 0.14 | 18000 | 2.1009 |
| 2.4906 | 0.15 | 20000 | 2.0899 |
| 2.4709 | 0.17 | 22000 | 2.0512 |
| 2.4471 | 0.18 | 24000 | 2.0208 |
| 2.4295 | 0.2 | 26000 | 2.0773 |
| 2.4154 | 0.21 | 28000 | 2.0441 |
| 2.4008 | 0.23 | 30000 | 2.0235 |
| 2.3834 | 0.24 | 32000 | 2.0190 |
| 2.3709 | 0.26 | 34000 | 1.9831 |
| 2.3537 | 0.27 | 36000 | 1.9870 |
| 2.3486 | 0.29 | 38000 | 1.9692 |
| 2.3346 | 0.3 | 40000 | 1.9517 |
| 2.3195 | 0.32 | 42000 | 1.9800 |
| 2.3104 | 0.33 | 44000 | 1.9676 |
| 2.298 | 0.35 | 46000 | 1.9563 |
| 2.2905 | 0.36 | 48000 | 1.9217 |
| 2.2792 | 0.38 | 50000 | 1.9195 |
| 2.2714 | 0.39 | 52000 | 1.9109 |
| 2.2593 | 0.41 | 54000 | 1.9044 |
| 2.2582 | 0.42 | 56000 | 1.8876 |
| 2.2482 | 0.44 | 58000 | 1.8860 |
| 2.2394 | 0.45 | 60000 | 1.8887 |
| 2.2273 | 0.47 | 62000 | 1.8862 |
| 2.2255 | 0.48 | 64000 | 1.8705 |
| 2.2166 | 0.5 | 66000 | 1.8696 |
| 2.2075 | 0.51 | 68000 | 1.8657 |
| 2.1992 | 0.53 | 70000 | 1.8585 |
| 2.1969 | 0.54 | 72000 | 1.8526 |
| 2.1894 | 0.56 | 74000 | 1.8493 |
| 2.1817 | 0.57 | 76000 | 1.8480 |
| 2.1771 | 0.59 | 78000 | 1.8333 |
| 2.1683 | 0.6 | 80000 | 1.8342 |
| 2.1667 | 0.62 | 82000 | 1.8537 |
| 2.1546 | 0.63 | 84000 | 1.8261 |
| 2.1467 | 0.65 | 86000 | 1.8092 |
| 2.1421 | 0.66 | 88000 | 1.8137 |
| 2.1395 | 0.68 | 90000 | 1.8286 |
| 2.1313 | 0.69 | 92000 | 1.8042 |
| 2.1241 | 0.71 | 94000 | 1.7934 |
| 2.1214 | 0.72 | 96000 | 1.7940 |
| 2.12 | 0.74 | 98000 | 1.8064 |
| 2.1096 | 0.75 | 100000 | 1.7983 |
| 2.1035 | 0.77 | 102000 | 1.8089 |
| 2.0937 | 0.78 | 104000 | 1.7941 |
| 2.0893 | 0.8 | 106000 | 1.7791 |
| 2.0869 | 0.81 | 108000 | 1.7807 |
| 2.0845 | 0.83 | 110000 | 1.7852 |
| 2.0782 | 0.84 | 112000 | 1.7675 |
| 2.0755 | 0.86 | 114000 | 1.7756 |
| 2.0657 | 0.87 | 116000 | 1.7604 |
| 2.0614 | 0.89 | 118000 | 1.7447 |
| 2.0591 | 0.9 | 120000 | 1.7489 |
| 2.0586 | 0.92 | 122000 | 1.7550 |
| 2.0498 | 0.93 | 124000 | 1.7543 |
| 2.0455 | 0.95 | 126000 | 1.7510 |
| 2.04 | 0.96 | 128000 | 1.7439 |
| 2.0385 | 0.98 | 130000 | 1.7407 |
| 2.0267 | 0.99 | 132000 | 1.7467 |
| 2.0088 | 1.01 | 134000 | 1.7455 |
| 1.9826 | 1.02 | 136000 | 1.7210 |
| 1.9785 | 1.04 | 138000 | 1.7524 |
| 1.9777 | 1.05 | 140000 | 1.7272 |
| 1.9763 | 1.07 | 142000 | 1.7283 |
| 1.9736 | 1.08 | 144000 | 1.7210 |
| 1.9704 | 1.1 | 146000 | 1.7001 |
| 1.9625 | 1.11 | 148000 | 1.7112 |
| 1.9665 | 1.13 | 150000 | 1.7236 |
| 1.9592 | 1.14 | 152000 | 1.7169 |
| 1.9606 | 1.16 | 154000 | 1.6962 |
| 1.9571 | 1.17 | 156000 | 1.7064 |
| 1.9532 | 1.19 | 158000 | 1.6898 |
| 1.9465 | 1.2 | 160000 | 1.7004 |
| 1.9438 | 1.22 | 162000 | 1.7092 |
| 1.9435 | 1.23 | 164000 | 1.6927 |
| 1.9361 | 1.25 | 166000 | 1.6838 |
| 1.9369 | 1.26 | 168000 | 1.6784 |
| 1.9287 | 1.28 | 170000 | 1.6709 |
| 1.928 | 1.29 | 172000 | 1.6735 |
| 1.9227 | 1.31 | 174000 | 1.6689 |
| 1.9213 | 1.32 | 176000 | 1.6685 |
| 1.9152 | 1.34 | 178000 | 1.6635 |
| 1.9092 | 1.35 | 180000 | 1.6561 |
| 1.9059 | 1.37 | 182000 | 1.6673 |
| 1.9094 | 1.38 | 184000 | 1.6717 |
| 1.9006 | 1.4 | 186000 | 1.6593 |
| 1.8956 | 1.41 | 188000 | 1.6483 |
| 1.8972 | 1.43 | 190000 | 1.6635 |
| 1.8907 | 1.44 | 192000 | 1.6604 |
| 1.8885 | 1.46 | 194000 | 1.6465 |
| 1.8844 | 1.47 | 196000 | 1.6444 |
| 1.8799 | 1.49 | 198000 | 1.6307 |
| 1.8813 | 1.5 | 200000 | 1.6240 |
| 1.8693 | 1.52 | 202000 | 1.6102 |
| 1.8768 | 1.53 | 204000 | 1.6197 |
| 1.8678 | 1.55 | 206000 | 1.6275 |
| 1.8588 | 1.56 | 208000 | 1.6183 |
| 1.8585 | 1.58 | 210000 | 1.6197 |
| 1.8564 | 1.59 | 212000 | 1.6004 |
| 1.8493 | 1.61 | 214000 | 1.6078 |
| 1.85 | 1.62 | 216000 | 1.6001 |
| 1.8428 | 1.64 | 218000 | 1.6106 |
| 1.8428 | 1.65 | 220000 | 1.5866 |
| 1.8423 | 1.67 | 222000 | 1.5993 |
| 1.8352 | 1.68 | 224000 | 1.6052 |
| 1.8385 | 1.7 | 226000 | 1.5959 |
| 1.8307 | 1.71 | 228000 | 1.6024 |
| 1.8248 | 1.73 | 230000 | 1.5969 |
| 1.82 | 1.74 | 232000 | 1.5878 |
| 1.8254 | 1.76 | 234000 | 1.5934 |
| 1.8188 | 1.77 | 236000 | 1.5827 |
| 1.813 | 1.79 | 238000 | 1.5797 |
| 1.8128 | 1.8 | 240000 | 1.5758 |
| 1.8044 | 1.82 | 242000 | 1.5752 |
| 1.808 | 1.83 | 244000 | 1.5818 |
| 1.8025 | 1.85 | 246000 | 1.5772 |
| 1.7992 | 1.86 | 248000 | 1.5738 |
| 1.8021 | 1.88 | 250000 | 1.5752 |
| 1.7988 | 1.89 | 252000 | 1.5717 |
| 1.7967 | 1.91 | 254000 | 1.5690 |
| 1.7909 | 1.92 | 256000 | 1.5607 |
| 1.7942 | 1.94 | 258000 | 1.5618 |
| 1.7897 | 1.95 | 260000 | 1.5585 |
| 1.7871 | 1.97 | 262000 | 1.5576 |
| 1.7843 | 1.98 | 264000 | 1.5577 |
| 1.7888 | 2.0 | 266000 | 1.5583 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
meghazisofiane/opus-mt-en-ar-evaluated-en-to-ar-4000instances-opus-leaningRate2e-05-batchSize8-11-action-1 | meghazisofiane | 2022-06-11T21:50:40Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:opus100",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-06-11T21:33:30Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- opus100
metrics:
- bleu
model-index:
- name: opus-mt-en-ar-evaluated-en-to-ar-4000instances-opus-leaningRate2e-05-batchSize8-11-action-1
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: opus100
type: opus100
args: ar-en
metrics:
- name: Bleu
type: bleu
value: 26.8232
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ar-evaluated-en-to-ar-4000instances-opus-leaningRate2e-05-batchSize8-11-action-1
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ar](https://huggingface.co/Helsinki-NLP/opus-mt-en-ar) on the opus100 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1717
- Bleu: 26.8232
- Meteor: 0.172
- Gen Len: 12.1288
## Model description
More information needed
## Intended uses & limitations
More information needed
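Until the author fills this in, here is a minimal translation sketch (the repo id is assumed from this card's title and is not verified by the author):
```python
from transformers import pipeline

# Repo id taken from this card's title
model_id = "meghazisofiane/opus-mt-en-ar-evaluated-en-to-ar-4000instances-opus-leaningRate2e-05-batchSize8-11-action-1"
translator = pipeline("translation", model=model_id)

# English -> Arabic
print(translator("How are you today?")[0]["translation_text"])
```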
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 11
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Meteor | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|
| 0.7364 | 0.25 | 100 | 0.1731 | 27.2753 | 0.1729 | 12.0887 |
| 0.2175 | 0.5 | 200 | 0.1731 | 27.2055 | 0.1722 | 11.5675 |
| 0.2193 | 0.75 | 300 | 0.1722 | 27.3277 | 0.1798 | 12.1325 |
| 0.2321 | 1.0 | 400 | 0.1750 | 27.5152 | 0.1762 | 11.925 |
| 0.1915 | 1.25 | 500 | 0.1690 | 27.5043 | 0.1751 | 11.9038 |
| 0.1794 | 1.5 | 600 | 0.1719 | 26.8607 | 0.1713 | 11.8138 |
| 0.1741 | 1.75 | 700 | 0.1725 | 26.974 | 0.1724 | 11.8462 |
| 0.1732 | 2.0 | 800 | 0.1717 | 26.8232 | 0.172 | 12.1288 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
lindeberg/distilbert-base-uncased-finetuned-cola | lindeberg | 2022-06-11T21:10:06Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-11T18:50:58Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.4496664370323995
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4949
- Matthews Correlation: 0.4497
## Model description
More information needed
## Intended uses & limitations
More information needed
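As a starting point, a minimal inference sketch (repo id assumed from this card's title; the label mapping should be checked against the model config):
```python
from transformers import pipeline

# Repo id taken from this card's title
classifier = pipeline(
    "text-classification",
    model="lindeberg/distilbert-base-uncased-finetuned-cola",
)

# CoLA is a grammatical-acceptability task; check config.id2label for the exact mapping
print(classifier("The book was written by the author."))
```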
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5231 | 1.0 | 535 | 0.4949 | 0.4497 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
JClementC/test | JClementC | 2022-06-11T19:58:42Z | 0 | 0 | null | [
"region:us"
] | null | 2022-06-11T19:19:48Z | git lfs install
git clone https://github.com/nneonneo/2048-ai.git |
meln1k/qrdqn-SpaceInvadersNoFrameskip-v4 | meln1k | 2022-06-11T19:51:36Z | 5 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-06-11T09:29:19Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: QRDQN
results:
- metrics:
- type: mean_reward
value: 2581.50 +/- 1151.96
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **QRDQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **QRDQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -orga meln1k -f logs/
python enjoy.py --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga meln1k
```
## Hyperparameters
```python
OrderedDict([('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_fraction', 0.025),
('frame_stack', 4),
('n_timesteps', 10000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('normalize', False)])
```
|
huggingtweets/conanobrien-mikemancini-wendymolyneux | huggingtweets | 2022-06-11T19:50:54Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-06-11T19:46:43Z | ---
language: en
thumbnail: http://www.huggingtweets.com/conanobrien-mikemancini-wendymolyneux/1654977049172/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1271404115042676736/PAIbmN-p_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/730612231021322240/Rl0_QYhL_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1044085580651528193/DR7QvrwG_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI CYBORG π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">mike mancini & Conan O'Brien & Wendy Molyneux</div>
<div style="text-align: center; font-size: 14px;">@conanobrien-mikemancini-wendymolyneux</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from mike mancini & Conan O'Brien & Wendy Molyneux.
| Data | mike mancini | Conan O'Brien | Wendy Molyneux |
| --- | --- | --- | --- |
| Tweets downloaded | 3150 | 3250 | 836 |
| Retweets | 286 | 40 | 251 |
| Short tweets | 290 | 24 | 69 |
| Tweets kept | 2574 | 3186 | 516 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/25wtfzk4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @conanobrien-mikemancini-wendymolyneux's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1hjizcue) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1hjizcue/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/conanobrien-mikemancini-wendymolyneux')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/mdoukmas | huggingtweets | 2022-06-11T19:35:54Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-06-11T19:34:24Z | ---
language: en
thumbnail: http://www.huggingtweets.com/mdoukmas/1654976150184/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1098660288193269762/n5v9daol_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Maya Dukmasova</div>
<div style="text-align: center; font-size: 14px;">@mdoukmas</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Maya Dukmasova.
| Data | Maya Dukmasova |
| --- | --- |
| Tweets downloaded | 3241 |
| Retweets | 896 |
| Short tweets | 158 |
| Tweets kept | 2187 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2jwhv7l5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mdoukmas's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/25v3pmsy) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/25v3pmsy/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mdoukmas')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
titi7242229/roberta-base-bne-finetuned_personality_multi_4 | titi7242229 | 2022-06-11T19:13:27Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-11T13:23:50Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-bne-finetuned_personality_multi_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned_personality_multi_4
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1709
- Accuracy: 0.3470
## Model description
More information needed
## Intended uses & limitations
More information needed
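A minimal usage sketch, assuming the repo id from this card's title (the personality label names are not documented here):
```python
from transformers import pipeline

# Repo id taken from this card's title; the base model is Spanish (roberta-base-bne)
classifier = pipeline(
    "text-classification",
    model="titi7242229/roberta-base-bne-finetuned_personality_multi_4",
)
print(classifier("Me encanta planificar cada detalle antes de empezar un proyecto."))
```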
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.1759 | 1.0 | 125 | 2.1873 | 0.2548 |
| 1.8651 | 2.0 | 250 | 2.2285 | 0.2680 |
| 1.8619 | 3.0 | 375 | 2.1732 | 0.2951 |
| 1.7224 | 4.0 | 500 | 2.0688 | 0.3925 |
| 1.6432 | 5.0 | 625 | 2.1094 | 0.3735 |
| 1.3599 | 6.0 | 750 | 2.1732 | 0.3631 |
| 1.0623 | 7.0 | 875 | 2.4785 | 0.3579 |
| 1.0504 | 8.0 | 1000 | 2.4598 | 0.3844 |
| 0.7662 | 9.0 | 1125 | 2.8081 | 0.3573 |
| 0.9167 | 10.0 | 1250 | 2.9385 | 0.3452 |
| 0.6391 | 11.0 | 1375 | 2.9933 | 0.3320 |
| 0.3893 | 12.0 | 1500 | 3.1037 | 0.3579 |
| 0.673 | 13.0 | 1625 | 3.4369 | 0.3631 |
| 0.3498 | 14.0 | 1750 | 3.6396 | 0.3383 |
| 0.3891 | 15.0 | 1875 | 3.8332 | 0.3556 |
| 0.0818 | 16.0 | 2000 | 3.9451 | 0.3401 |
| 0.1438 | 17.0 | 2125 | 3.9271 | 0.3458 |
| 0.0634 | 18.0 | 2250 | 4.1564 | 0.3481 |
| 0.0121 | 19.0 | 2375 | 4.1405 | 0.3499 |
| 0.0071 | 20.0 | 2500 | 4.1709 | 0.3470 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
aprischa/bart-large-cnn-aprischa | aprischa | 2022-06-11T17:21:57Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-06-11T16:53:31Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-aprischa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-aprischa
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3589
- Rouge1: 66.7098
- Rouge2: 57.7992
- Rougel: 63.2231
- Rougelsum: 65.9009
- Gen Len: 141.198
## Model description
More information needed
## Intended uses & limitations
More information needed
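A minimal summarization sketch (repo id assumed from this card's title; the input text is a placeholder):
```python
from transformers import pipeline

# Repo id taken from this card's title
summarizer = pipeline("summarization", model="aprischa/bart-large-cnn-aprischa")

article = """Replace this placeholder with the long document you want to summarize.
BART-large-CNN checkpoints accept inputs of up to 1024 tokens."""
print(summarizer(article, max_length=142, min_length=30, do_sample=False)[0]["summary_text"])
```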
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 0.369 | 1.0 | 5403 | 0.3835 | 66.0604 | 56.9948 | 62.4967 | 65.265 | 141.1126 |
| 0.2985 | 2.0 | 10806 | 0.3589 | 66.7098 | 57.7992 | 63.2231 | 65.9009 | 141.198 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
DancingIguana/codeparrot-ds | DancingIguana | 2022-06-11T16:58:04Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-06-08T21:56:49Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
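A minimal generation sketch, assuming this model follows the codeparrot-style recipe of generating Python source code (the repo id is taken from the title; the training dataset is not stated on the card):
```python
from transformers import pipeline

# Repo id taken from this card's title
generator = pipeline("text-generation", model="DancingIguana/codeparrot-ds")

# Prompt assumes the model was trained on Python source code (codeparrot-style)
print(generator("def fibonacci(n):", max_new_tokens=48, num_return_sequences=1)[0]["generated_text"])
```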
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
bubblecookie/t5-small-finetuned-cnndm_trained | bubblecookie | 2022-06-11T16:48:45Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-06-10T06:21:02Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: t5-small-finetuned-cnndm_trained
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnndm_trained
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
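A minimal summarization sketch (repo id assumed from this card's title; generation lengths are illustrative):
```python
from transformers import pipeline

# Repo id taken from this card's title
summarizer = pipeline("summarization", model="bubblecookie/t5-small-finetuned-cnndm_trained")

text = "(CNN) Replace this placeholder with the news article you want to summarize."
# The pipeline applies T5's "summarize: " prefix automatically when it is present in the config
print(summarizer(text, max_length=80, min_length=20)[0]["summary_text"])
```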
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
robingeibel/longformer-base-finetuned-big_patent | robingeibel | 2022-06-11T16:33:49Z | 62 | 1 | transformers | [
"transformers",
"tf",
"longformer",
"fill-mask",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-06-05T17:24:27Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: robingeibel/longformer-base-finetuned-big_patent
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# robingeibel/longformer-base-finetuned-big_patent
This model is a fine-tuned version of [robingeibel/longformer-base-finetuned-big_patent](https://huggingface.co/robingeibel/longformer-base-finetuned-big_patent) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.1860
- Validation Loss: 1.0692
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
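A minimal fill-mask sketch; since this repo ships TensorFlow (Keras) weights, the TF backend is requested explicitly (repo id assumed from this card's title):
```python
from transformers import pipeline

# This repo ships TensorFlow weights, so request the TF backend explicitly
fill_mask = pipeline(
    "fill-mask",
    model="robingeibel/longformer-base-finetuned-big_patent",
    framework="tf",
)

# Longformer uses the RoBERTa-style <mask> token
print(fill_mask("The present invention relates to a <mask> for treating waste water."))
```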
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 152946, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.1860 | 1.0692 | 0 |
### Framework versions
- Transformers 4.19.4
- TensorFlow 2.8.2
- Datasets 2.2.2
- Tokenizers 0.12.1
|
abdoutony207/m2m100_418M-evaluated-en-to-ar-2000instancesopus-leaningRate2e-05-batchSize16-20epoch-1 | abdoutony207 | 2022-06-11T16:26:19Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"m2m_100",
"text2text-generation",
"generated_from_trainer",
"dataset:opus100",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-06-11T15:56:17Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- opus100
metrics:
- bleu
model-index:
- name: m2m100_418M-evaluated-en-to-ar-2000instancesopus-leaningRate2e-05-batchSize16-20epoch-1
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: opus100
type: opus100
args: ar-en
metrics:
- name: Bleu
type: bleu
value: 13.1835
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# m2m100_418M-evaluated-en-to-ar-2000instancesopus-leaningRate2e-05-batchSize16-20epoch-1
This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on the opus100 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3640
- Bleu: 13.1835
- Meteor: 0.1189
- Gen Len: 17.72
## Model description
More information needed
## Intended uses & limitations
More information needed
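A minimal English-to-Arabic sketch using the standard M2M100 generation API (repo id assumed from this card's title):
```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

# Repo id taken from this card's title
model_id = "abdoutony207/m2m100_418M-evaluated-en-to-ar-2000instancesopus-leaningRate2e-05-batchSize16-20epoch-1"
tokenizer = M2M100Tokenizer.from_pretrained(model_id)
model = M2M100ForConditionalGeneration.from_pretrained(model_id)

tokenizer.src_lang = "en"
inputs = tokenizer("How are you today?", return_tensors="pt")
# Force Arabic as the target language
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.get_lang_id("ar"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```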
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Meteor | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|
| 6.1776 | 1.0 | 100 | 3.8904 | 10.5866 | 0.0995 | 16.64 |
| 2.4531 | 2.0 | 200 | 1.0928 | 12.3452 | 0.1108 | 17.0575 |
| 0.512 | 3.0 | 300 | 0.3625 | 10.5224 | 0.0982 | 17.2575 |
| 0.1924 | 4.0 | 400 | 0.3342 | 12.4242 | 0.1098 | 16.6325 |
| 0.1227 | 5.0 | 500 | 0.3403 | 13.0526 | 0.1185 | 17.3475 |
| 0.0889 | 6.0 | 600 | 0.3481 | 13.1323 | 0.1133 | 17.815 |
| 0.0651 | 7.0 | 700 | 0.3601 | 12.6684 | 0.1133 | 17.3525 |
| 0.0533 | 8.0 | 800 | 0.3640 | 13.1835 | 0.1189 | 17.72 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
neeenway/ppo-LunarLander-v2 | neeenway | 2022-06-11T13:43:31Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-06-11T13:43:03Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ppo
results:
- metrics:
- type: mean_reward
value: 240.31 +/- 12.46
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **ppo** Agent playing **LunarLander-v2**
This is a trained model of a **ppo** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
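A possible completion of the placeholder above, as a sketch rather than the author's own code (the checkpoint filename inside the repo is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy

# The checkpoint filename is an assumption; check the repo's file list for the actual name
checkpoint = load_from_hub(repo_id="neeenway/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = make_vec_env("LunarLander-v2", n_envs=1)
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```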
|
Akshat/xlm-roberta-base-finetuned-panx-de | Akshat | 2022-06-11T13:35:25Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-06-11T12:19:48Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8611443210930829
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1405
- F1: 0.8611
## Model description
More information needed
## Intended uses & limitations
More information needed
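A minimal NER sketch (repo id assumed from this card's title; the example sentence is illustrative):
```python
from transformers import pipeline

# Repo id taken from this card's title
ner = pipeline(
    "token-classification",
    model="Akshat/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```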
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2542 | 1.0 | 787 | 0.1788 | 0.8083 |
| 0.1307 | 2.0 | 1574 | 0.1371 | 0.8488 |
| 0.0784 | 3.0 | 2361 | 0.1405 | 0.8611 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
YeRyeongLee/albert-base-v2-finetuned-filtered-0609 | YeRyeongLee | 2022-06-11T13:33:02Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"albert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-11T11:46:52Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: albert-base-v2-finetuned-filtered-0609
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-finetuned-filtered-0609
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2062
- Accuracy: 0.9723
- Precision: 0.9724
- Recall: 0.9723
- F1: 0.9723
## Model description
More information needed
## Intended uses & limitations
More information needed
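A minimal classification sketch (repo id assumed from this card's title; the label names live in the fine-tuned config):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Repo id taken from this card's title
model_id = "YeRyeongLee/albert-base-v2-finetuned-filtered-0609"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Example sentence to classify.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# The label names are stored in the fine-tuned config
print(model.config.id2label[logits.argmax(dim=-1).item()])
```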
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.2688 | 1.0 | 3180 | 0.2282 | 0.9560 | 0.9577 | 0.9560 | 0.9562 |
| 0.2268 | 2.0 | 6360 | 0.1909 | 0.9638 | 0.9640 | 0.9638 | 0.9638 |
| 0.1831 | 3.0 | 9540 | 0.2590 | 0.9572 | 0.9584 | 0.9572 | 0.9572 |
| 0.1588 | 4.0 | 12720 | 0.1752 | 0.9673 | 0.9678 | 0.9673 | 0.9673 |
| 0.0972 | 5.0 | 15900 | 0.1868 | 0.9695 | 0.9696 | 0.9695 | 0.9695 |
| 0.0854 | 6.0 | 19080 | 0.2042 | 0.9701 | 0.9707 | 0.9701 | 0.9702 |
| 0.0599 | 7.0 | 22260 | 0.1793 | 0.9748 | 0.9749 | 0.9748 | 0.9749 |
| 0.0389 | 8.0 | 25440 | 0.1996 | 0.9742 | 0.9743 | 0.9742 | 0.9742 |
| 0.0202 | 9.0 | 28620 | 0.2188 | 0.9723 | 0.9726 | 0.9723 | 0.9724 |
| 0.0152 | 10.0 | 31800 | 0.2062 | 0.9723 | 0.9724 | 0.9723 | 0.9723 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.9.1+cu111
- Datasets 1.16.1
- Tokenizers 0.12.1
|
marieke93/BERT-evidence-types | marieke93 | 2022-06-11T13:32:10Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-08T11:54:50Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: BERT-evidence-types
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT-evidence-types
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the evidence types dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8008
- Macro f1: 0.4227
- Weighted f1: 0.6976
- Accuracy: 0.7154
- Balanced accuracy: 0.3876
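A minimal usage sketch (repo id assumed from this card's title; the evidence-type label names come from the model config):
```python
from transformers import pipeline

# Repo id taken from this card's title
classifier = pipeline("text-classification", model="marieke93/BERT-evidence-types")
print(classifier("A 2019 study of 10,000 patients found that the treatment reduced symptoms by 40%."))
```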
## Training and evaluation data
The dataset, as well as the code used to fine-tune this model, can be found in the GitHub repository [BA-Thesis-Information-Science-Persuasion-Strategies](https://github.com/mariekevdh/BA-Thesis-Information-Science-Persuasion-Strategies).
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Macro f1 | Weighted f1 | Accuracy | Balanced accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:--------:|:-----------------:|
| 1.1148 | 1.0 | 125 | 1.0531 | 0.2566 | 0.6570 | 0.6705 | 0.2753 |
| 0.7546 | 2.0 | 250 | 0.9725 | 0.3424 | 0.6947 | 0.7002 | 0.3334 |
| 0.4757 | 3.0 | 375 | 1.1375 | 0.3727 | 0.7113 | 0.7184 | 0.3680 |
| 0.2637 | 4.0 | 500 | 1.3585 | 0.3807 | 0.6836 | 0.6910 | 0.3805 |
| 0.1408 | 5.0 | 625 | 1.6605 | 0.3785 | 0.6765 | 0.6872 | 0.3635 |
| 0.0856 | 6.0 | 750 | 1.9703 | 0.3802 | 0.6890 | 0.7047 | 0.3704 |
| 0.0502 | 7.0 | 875 | 2.1245 | 0.4067 | 0.6995 | 0.7169 | 0.3751 |
| 0.0265 | 8.0 | 1000 | 2.2676 | 0.3756 | 0.6816 | 0.6925 | 0.3647 |
| 0.0147 | 9.0 | 1125 | 2.4286 | 0.4052 | 0.6887 | 0.7062 | 0.3803 |
| 0.0124 | 10.0 | 1250 | 2.5773 | 0.4084 | 0.6853 | 0.7040 | 0.3695 |
| 0.0111 | 11.0 | 1375 | 2.5941 | 0.4146 | 0.6915 | 0.7085 | 0.3834 |
| 0.0076 | 12.0 | 1500 | 2.6124 | 0.4157 | 0.6936 | 0.7078 | 0.3863 |
| 0.0067 | 13.0 | 1625 | 2.7050 | 0.4139 | 0.6925 | 0.7108 | 0.3798 |
| 0.0087 | 14.0 | 1750 | 2.6695 | 0.4252 | 0.7009 | 0.7169 | 0.3920 |
| 0.0056 | 15.0 | 1875 | 2.7357 | 0.4257 | 0.6985 | 0.7161 | 0.3868 |
| 0.0054 | 16.0 | 2000 | 2.7389 | 0.4249 | 0.6955 | 0.7116 | 0.3890 |
| 0.0051 | 17.0 | 2125 | 2.7767 | 0.4197 | 0.6967 | 0.7146 | 0.3863 |
| 0.004 | 18.0 | 2250 | 2.7947 | 0.4211 | 0.6977 | 0.7154 | 0.3876 |
| 0.0041 | 19.0 | 2375 | 2.8030 | 0.4204 | 0.6953 | 0.7131 | 0.3855 |
| 0.0042 | 20.0 | 2500 | 2.8008 | 0.4227 | 0.6976 | 0.7154 | 0.3876 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
titi7242229/roberta-base-bne-finetuned_personality_multi_3 | titi7242229 | 2022-06-11T13:13:47Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-11T07:10:27Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-bne-finetuned_personality_multi_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned_personality_multi_3
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1145
- Accuracy: 0.4847
## Model description
More information needed
## Intended uses & limitations
More information needed
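A minimal usage sketch (repo id assumed from this card's title), returning the score for every label:
```python
from transformers import pipeline

# Repo id taken from this card's title
classifier = pipeline(
    "text-classification",
    model="titi7242229/roberta-base-bne-finetuned_personality_multi_3",
    return_all_scores=True,  # show the score for every personality label
)
print(classifier("Prefiero trabajar solo y en silencio."))
```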
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2498 | 1.0 | 63 | 2.2799 | 0.2236 |
| 2.3044 | 2.0 | 126 | 2.1644 | 0.2980 |
| 1.9017 | 3.0 | 189 | 1.9934 | 0.4127 |
| 2.2281 | 4.0 | 252 | 1.8517 | 0.4501 |
| 1.2955 | 5.0 | 315 | 1.7588 | 0.4870 |
| 1.221 | 6.0 | 378 | 1.7269 | 0.4888 |
| 1.1381 | 7.0 | 441 | 1.7617 | 0.4888 |
| 0.8415 | 8.0 | 504 | 1.8101 | 0.4853 |
| 0.6696 | 9.0 | 567 | 1.8325 | 0.4928 |
| 0.6646 | 10.0 | 630 | 1.8707 | 0.4841 |
| 0.3758 | 11.0 | 693 | 1.8766 | 0.4876 |
| 0.3477 | 12.0 | 756 | 1.9171 | 0.4905 |
| 0.2854 | 13.0 | 819 | 1.9203 | 0.4980 |
| 0.2713 | 14.0 | 882 | 2.0089 | 0.4813 |
| 0.3434 | 15.0 | 945 | 2.0130 | 0.4905 |
| 0.0758 | 16.0 | 1008 | 2.0230 | 0.4922 |
| 0.2518 | 17.0 | 1071 | 2.0793 | 0.4824 |
| 0.0783 | 18.0 | 1134 | 2.0920 | 0.4830 |
| 0.0933 | 19.0 | 1197 | 2.1067 | 0.4836 |
| 0.184 | 20.0 | 1260 | 2.1145 | 0.4847 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
shivarama23/swin-tiny-patch4-window7-224-finetuned-image_quality | shivarama23 | 2022-06-11T11:54:49Z | 85 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:image_folder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-06-11T11:41:01Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-image_quality
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9090909090909091
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-image_quality
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5242
- Accuracy: 0.9091
## Model description
More information needed
## Intended uses & limitations
More information needed
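A minimal image-quality classification sketch (repo id assumed from this card's title; the image path is a placeholder):
```python
from transformers import pipeline

# Repo id taken from this card's title
classifier = pipeline(
    "image-classification",
    model="shivarama23/swin-tiny-patch4-window7-224-finetuned-image_quality",
)
# Accepts a local path or URL; the file below is a placeholder
print(classifier("example_photo.jpg"))
```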
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 0.6762 | 0.6364 |
| No log | 2.0 | 2 | 0.6309 | 0.7273 |
| No log | 3.0 | 3 | 0.6095 | 0.6364 |
| No log | 4.0 | 4 | 0.5775 | 0.6364 |
| No log | 5.0 | 5 | 0.5443 | 0.8182 |
| No log | 6.0 | 6 | 0.5242 | 0.9091 |
| No log | 7.0 | 7 | 0.5149 | 0.8182 |
| No log | 8.0 | 8 | 0.5094 | 0.8182 |
| No log | 9.0 | 9 | 0.5038 | 0.8182 |
| 0.4095 | 10.0 | 10 | 0.4992 | 0.8182 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Jawaher/LIAR-fake-news-roberta-base | Jawaher | 2022-06-11T11:12:24Z | 103 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-06-11T05:40:13Z | A pre-trained RoBERTa masked language model (MLM) further trained on the LIAR fake-news dataset of around 12K statements. The perplexity of the original pre-trained RoBERTa model on the dataset is 5.957, while the perplexity of the adapted model is 3.918. |
mmillet/distilrubert-tiny-2nd-finetune-epru | mmillet | 2022-06-11T09:50:42Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-11T09:48:50Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: distilrubert-tiny-2nd-finetune-epru
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilrubert-tiny-2nd-finetune-epru
This model is a fine-tuned version of [mmillet/distilrubert-tiny-cased-conversational-v1_single_finetuned_on_cedr_augmented](https://huggingface.co/mmillet/distilrubert-tiny-cased-conversational-v1_single_finetuned_on_cedr_augmented) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3546
- Accuracy: 0.9325
- F1: 0.9328
- Precision: 0.9359
- Recall: 0.9325
## Model description
More information needed
## Intended uses & limitations
More information needed
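A minimal usage sketch (repo id assumed from this card's title); the base model is a Russian DistilRuBERT, so the example input is Russian:
```python
from transformers import pipeline

# Repo id taken from this card's title; the base model is a Russian DistilRuBERT
classifier = pipeline(
    "text-classification",
    model="mmillet/distilrubert-tiny-2nd-finetune-epru",
)
print(classifier("Я так рад тебя видеть!"))  # "I am so glad to see you!"
```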
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.0686 | 1.0 | 12 | 0.2931 | 0.9141 | 0.9142 | 0.9163 | 0.9141 |
| 0.0269 | 2.0 | 24 | 0.2690 | 0.9448 | 0.9444 | 0.9449 | 0.9448 |
| 0.0282 | 3.0 | 36 | 0.3140 | 0.9141 | 0.9140 | 0.9168 | 0.9141 |
| 0.0185 | 4.0 | 48 | 0.2977 | 0.9571 | 0.9570 | 0.9576 | 0.9571 |
| 0.0103 | 5.0 | 60 | 0.3368 | 0.9264 | 0.9265 | 0.9296 | 0.9264 |
| 0.0088 | 6.0 | 72 | 0.3067 | 0.9387 | 0.9385 | 0.9389 | 0.9387 |
| 0.0152 | 7.0 | 84 | 0.3660 | 0.9264 | 0.9263 | 0.9282 | 0.9264 |
| 0.0315 | 8.0 | 96 | 0.3793 | 0.9325 | 0.9328 | 0.9359 | 0.9325 |
| 0.0258 | 9.0 | 108 | 0.3546 | 0.9325 | 0.9328 | 0.9359 | 0.9325 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
OTQ/q-Taxi-v3 | OTQ | 2022-06-11T08:10:17Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-06-11T08:10:10Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.50 +/- 2.78
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="OTQ/q-Taxi-v3", filename="q-learning.pkl")  # repo id filled in from this card's title
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
huggingtweets/gustholomulers | huggingtweets | 2022-06-11T07:53:54Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-06-11T07:50:54Z | ---
language: en
thumbnail: http://www.huggingtweets.com/gustholomulers/1654934015981/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1535477036353040384/tXI_s1Yi_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">soppy</div>
<div style="text-align: center; font-size: 14px;">@gustholomulers</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from soppy.
| Data | soppy |
| --- | --- |
| Tweets downloaded | 1482 |
| Retweets | 55 |
| Short tweets | 329 |
| Tweets kept | 1098 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1nhfbopf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gustholomulers's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3p5yu4wm) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3p5yu4wm/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gustholomulers')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
AryaSuprana/BRATA_RoBERTaBali | AryaSuprana | 2022-06-11T05:01:40Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"ban",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-06-11T04:51:40Z | ---
language: "ban"
datasets:
- WikiBali
- Suara Saking Bali
widget:
- text: "Kalsium silih <mask> datu kimia antuk simbol Ca miwah wilangan atom 20."
example_title: "Conto 1"
- text: "Tabuan inggih <mask> silih tunggil soroh beburon sane madue kampid."
example_title: "Conto 2"
---
BRATA (Basa Bali Used for Pretraining RoBERTa) is a pretrained language model trained on Basa Bali (the Balinese language) with the RoBERTa-base-uncased configuration. The datasets used for this pretraining were collected by extracting WikiBali (the Balinese Wikipedia) and several sources from the Suara Saking Bali website. The model was pretrained on Google Colab Pro with a Tesla P100-PCIE-16GB GPU, using 200 epochs and a batch size of 2. The lowest training loss can be seen in the Training metrics (Metrics) tab. |
tclong/wav2vec2-base-vios-commonvoice-1 | tclong | 2022-06-11T03:01:54Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-06-10T11:09:14Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-vios-commonvoice-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-vios-commonvoice-1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8913
- Wer: 0.3621
## Model description
More information needed
## Intended uses & limitations
More information needed
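A minimal transcription sketch (repo id assumed from this card's title; the audio path is a placeholder and should be 16 kHz mono):
```python
from transformers import pipeline

# Repo id taken from this card's title
asr = pipeline(
    "automatic-speech-recognition",
    model="tclong/wav2vec2-base-vios-commonvoice-1",
)
# Placeholder path; the audio should be 16 kHz mono to match wav2vec2 pretraining
print(asr("sample.wav")["text"])
```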
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.4706 | 0.55 | 500 | 3.4725 | 1.0 |
| 3.202 | 1.1 | 1000 | 2.7555 | 1.0008 |
| 1.0507 | 1.66 | 1500 | 1.0481 | 0.6196 |
| 0.7325 | 2.21 | 2000 | 0.8120 | 0.4958 |
| 0.599 | 2.76 | 2500 | 0.7035 | 0.4447 |
| 0.5224 | 3.31 | 3000 | 0.6761 | 0.4078 |
| 0.4844 | 3.86 | 3500 | 0.6688 | 0.4011 |
| 0.4234 | 4.42 | 4000 | 0.6080 | 0.3729 |
| 0.4237 | 4.97 | 4500 | 0.5953 | 0.3556 |
| 0.3986 | 5.52 | 5000 | 0.6054 | 0.3478 |
| 0.3554 | 6.07 | 5500 | 0.6193 | 0.3479 |
| 0.3446 | 6.62 | 6000 | 0.5809 | 0.3302 |
| 0.3104 | 7.17 | 6500 | 0.5713 | 0.3283 |
| 0.3166 | 7.73 | 7000 | 0.5593 | 0.3133 |
| 0.2938 | 8.28 | 7500 | 0.5645 | 0.3081 |
| 0.3061 | 8.83 | 8000 | 0.5508 | 0.3020 |
| 0.2986 | 9.38 | 8500 | 0.5462 | 0.3024 |
| 0.2939 | 9.93 | 9000 | 0.5544 | 0.3028 |
| 0.2633 | 10.49 | 9500 | 0.5496 | 0.3024 |
| 0.2683 | 11.04 | 10000 | 0.5439 | 0.2946 |
| 0.2714 | 11.59 | 10500 | 0.5524 | 0.2947 |
| 0.2354 | 12.14 | 11000 | 0.5267 | 0.2918 |
| 0.2488 | 12.69 | 11500 | 0.5728 | 0.2938 |
| 0.2479 | 13.25 | 12000 | 0.5802 | 0.2951 |
| 0.245 | 13.8 | 12500 | 0.5571 | 0.2890 |
| 0.2422 | 14.35 | 13000 | 0.5531 | 0.2871 |
| 0.2369 | 14.9 | 13500 | 0.5453 | 0.2860 |
| 0.2345 | 15.45 | 14000 | 0.5452 | 0.2847 |
| 0.2507 | 16.0 | 14500 | 0.5536 | 0.2884 |
| 0.2454 | 16.56 | 15000 | 0.5577 | 0.2871 |
| 0.2729 | 17.11 | 15500 | 0.6019 | 0.2931 |
| 0.2743 | 17.66 | 16000 | 0.5619 | 0.2905 |
| 0.3031 | 18.21 | 16500 | 0.6401 | 0.3006 |
| 0.315 | 18.76 | 17000 | 0.6044 | 0.2990 |
| 0.4025 | 19.32 | 17500 | 0.6739 | 0.3304 |
| 0.4915 | 19.87 | 18000 | 0.7267 | 0.3472 |
| 0.5539 | 20.42 | 18500 | 0.8078 | 0.3483 |
| 0.7138 | 20.97 | 19000 | 0.9362 | 0.3765 |
| 0.5766 | 21.52 | 19500 | 0.7921 | 0.3392 |
| 0.688 | 22.08 | 20000 | 0.8833 | 0.3693 |
| 0.6964 | 22.63 | 20500 | 0.9137 | 0.3469 |
| 0.7389 | 23.18 | 21000 | 0.9379 | 0.3460 |
| 0.7851 | 23.73 | 21500 | 1.0438 | 0.3653 |
| 0.7619 | 24.28 | 22000 | 0.9313 | 0.3873 |
| 0.7175 | 24.83 | 22500 | 0.8668 | 0.3789 |
| 0.6842 | 25.39 | 23000 | 0.8243 | 0.3761 |
| 0.6941 | 25.94 | 23500 | 0.8557 | 0.3804 |
| 0.7167 | 26.49 | 24000 | 0.8618 | 0.3875 |
| 0.721 | 27.04 | 24500 | 0.8686 | 0.3764 |
| 0.6949 | 27.59 | 25000 | 0.8773 | 0.3690 |
| 0.727 | 28.15 | 25500 | 0.8769 | 0.3666 |
| 0.7363 | 28.7 | 26000 | 0.8867 | 0.3634 |
| 0.7157 | 29.25 | 26500 | 0.8895 | 0.3626 |
| 0.7385 | 29.8 | 27000 | 0.8913 | 0.3621 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
huggingtweets/froliki2108 | huggingtweets | 2022-06-11T00:04:16Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-06-11T00:02:55Z | ---
language: en
thumbnail: http://www.huggingtweets.com/froliki2108/1654905851117/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1447692349493100549/1PV2c-PJ_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Frolikiπππ</div>
<div style="text-align: center; font-size: 14px;">@froliki2108</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Frolikiπππ.
| Data | Frolikiπππ |
| --- | --- |
| Tweets downloaded | 2223 |
| Retweets | 1133 |
| Short tweets | 229 |
| Tweets kept | 861 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2tug3miv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @froliki2108's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3otsf5pj) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3otsf5pj/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/froliki2108')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
nateraw/modelcard-creator-demo | nateraw | 2022-06-10T23:58:39Z | 0 | 0 | pytorch | [
"pytorch",
"modelcards",
"autogenerated-modelcard",
"en",
"dataset:beans",
"arxiv:1810.03993",
"arxiv:1910.09700",
"license:mit",
"region:us"
] | null | 2022-06-10T23:40:23Z | ---
language:
- en
license: mit
library_name: pytorch
tags:
- modelcards
- autogenerated-modelcard
datasets:
- beans
metrics:
- accuracy
---
# modelcard-creator-demo
## Table of Contents
- [Model Details](#model-details)
- [How To Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Direct Use](#direct-use)
- [Downstream Use](#downstream-use)
- [Misuse and Out of Scope Use](#misuse-and-out-of-scope-use)
- [Limitations and Biases](#limitations-and-biases)
- [Training](#training)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Evaluation Results](#evaluation-results)
- [Environmental Impact](#environmental-impact)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Model Details
<!-- Give an overview of your model, the relevant research paper, who trained it, etc. -->
This isn't really a model, it's just a test repo to see if the [model card creator](https://huggingface.co/spaces/nateraw/modelcard-creator) works!
- Developed by: Nathan Raw
- Language(s):
- License: modelcard-creator-demo is licensed under the MIT license
- Resources for more information:
- [Research Paper](https://arxiv.org/pdf/1810.03993.pdf)
- [GitHub Repo](https://github.com/nateraw/modelcards)
## How to Get Started with the Model
Use the code below to get started with the model.
```python
# A nice code snippet here that describes how to use the model...
```
## Uses
#### Direct Use
<!-- Describe what kind of tasks this model can be used for directly or problems it can solve. -->
[More Information Needed]
#### Downstream Use
<!-- Describe how this model could be leveraged by a downstream model (if applicable) -->
[More Information Needed]
#### Misuse and Out-of-scope Use
<!-- Describe ways in which this model ***should not*** be used. -->
[More Information Needed]
## Limitations and Biases
<!-- Describe limitations and biases of this model or models of its type. -->
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
[More Information Needed]
## Training
#### Training Data
<!-- Describe the dataset used to train this model. -->
<!-- Refer to data card if dataset is provided and exists on the hub -->
See the data card for additional information.
#### Training Procedure
<!-- Describe the preprocessing, hardware used, training hyperparameters, etc. -->
[More Information Needed]
## Evaluation Results
<!-- Describe evaluation results of this model across any datasets it was evaluated on. -->
[More Information Needed]
## Environmental Impact
<!-- Provide information to document the environmental impact of this model -->
You can estimate carbon emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700)
- **Hardware Type:**
- **Hours used:**
- **Cloud Provider:**
- **Compute Region:**
- **Carbon Emitted:**
## Citation Information
```bibtex
@inproceedings{Mitchell_2019,
doi = {10.1145/3287560.3287596},
url = {https://doi.org/10.1145%2F3287560.3287596},
year = 2019,
month = {jan},
publisher = {{ACM}
},
author = {Margaret Mitchell and Simone Wu and Andrew Zaldivar and Parker Barnes and Lucy Vasserman and Ben Hutchinson and Elena Spitzer and Inioluwa Deborah Raji and Timnit Gebru},
title = {Model Cards for Model Reporting},
booktitle = {Proceedings of the Conference on Fairness, Accountability, and Transparency}
}
``` |
ahmeddbahaa/t5-arabic-base-finetuned-wikilingua-ar | ahmeddbahaa | 2022-06-10T23:54:52Z | 12 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"summarization",
"mt5",
"ar",
"abstractive summarization",
"generated_from_trainer",
"dataset:wiki_lingua",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | summarization | 2022-06-10T15:19:23Z | ---
license: apache-2.0
tags:
- summarization
- mt5
- ar
- abstractive summarization
- generated_from_trainer
datasets:
- wiki_lingua
model-index:
- name: t5-arabic-base-finetuned-wikilingua-ar
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-arabic-base-finetuned-wikilingua-ar
This model is a fine-tuned version of [bakrianoo/t5-arabic-base](https://huggingface.co/bakrianoo/t5-arabic-base) on the wiki_lingua dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2735
- Rouge-1: 20.72
- Rouge-2: 7.63
- Rouge-l: 18.75
- Gen Len: 18.74
- Bertscore: 70.79
## Model description
More information needed
## Intended uses & limitations
More information needed
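As a starting point, here is a minimal usage sketch (not part of the original card) that assumes the standard `transformers` summarization pipeline; the input text is only a placeholder for an Arabic article.
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a summarization pipeline
summarizer = pipeline(
    "summarization",
    model="ahmeddbahaa/t5-arabic-base-finetuned-wikilingua-ar",
)

article = "..."  # replace with an Arabic article or how-to text
print(summarizer(article, max_length=64, truncation=True)[0]["summary_text"])
```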
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 8
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
huggingtweets/jedwill1999 | huggingtweets | 2022-06-10T23:10:10Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-06-10T23:09:22Z | ---
language: en
thumbnail: http://www.huggingtweets.com/jedwill1999/1654902604867/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1510152678919135250/lfEmlEGJ_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">a local</div>
<div style="text-align: center; font-size: 14px;">@jedwill1999</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from a local.
| Data | a local |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 1080 |
| Short tweets | 525 |
| Tweets kept | 1641 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1qsnsp6t/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jedwill1999's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/mjjc73pu) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/mjjc73pu/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jedwill1999')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/boopysaur | huggingtweets | 2022-06-10T22:57:09Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-06-10T22:56:08Z | ---
language: en
thumbnail: http://www.huggingtweets.com/boopysaur/1654901824865/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1476816918879297559/2jt_Rt2L_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">boop β‘</div>
<div style="text-align: center; font-size: 14px;">@boopysaur</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from boop ⚑.
| Data | boop ⚑ |
| --- | --- |
| Tweets downloaded | 920 |
| Retweets | 162 |
| Short tweets | 128 |
| Tweets kept | 630 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/398l195g/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @boopysaur's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3te0suw6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3te0suw6/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/boopysaur')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
facebook/roberta-hate-speech-dynabench-r1-target | facebook | 2022-06-10T22:36:34Z | 4 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"en",
"arxiv:2012.15761",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-10T21:32:03Z | ---
language: en
---
# LFTW R1 Target
The R1 Target model from [Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection](https://arxiv.org/abs/2012.15761)
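A minimal usage sketch (assuming the standard `transformers` text-classification pipeline; the example sentence is only a placeholder):
```python
from transformers import pipeline

# Load the R1 Target hate-speech classifier
classifier = pipeline(
    "text-classification",
    model="facebook/roberta-hate-speech-dynabench-r1-target",
)

print(classifier("I really enjoyed talking with you today."))
```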
## Citation Information
```bibtex
@inproceedings{vidgen2021lftw,
title={Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection},
author={Bertie Vidgen and Tristan Thrush and Zeerak Waseem and Douwe Kiela},
booktitle={ACL},
year={2021}
}
```
Thanks to Kushal Tirumala and Adina Williams for helping the authors put the model on the hub! |
facebook/roberta-hate-speech-dynabench-r2-target | facebook | 2022-06-10T22:36:17Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"en",
"arxiv:2012.15761",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-10T21:52:46Z | ---
language: en
---
# LFTW R2 Target
The R2 Target model from [Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection](https://arxiv.org/abs/2012.15761)
## Citation Information
```bibtex
@inproceedings{vidgen2021lftw,
title={Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection},
author={Bertie Vidgen and Tristan Thrush and Zeerak Waseem and Douwe Kiela},
booktitle={ACL},
year={2021}
}
```
Thanks to Kushal Tirumala and Adina Williams for helping the authors put the model on the hub! |
facebook/roberta-hate-speech-dynabench-r3-target | facebook | 2022-06-10T22:34:01Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"en",
"arxiv:2012.15761",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-10T22:10:40Z | ---
language: en
---
# LFTW R3 Target
The R3 Target model from [Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection](https://arxiv.org/abs/2012.15761)
## Citation Information
```bibtex
@inproceedings{vidgen2021lftw,
title={Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection},
author={Bertie Vidgen and Tristan Thrush and Zeerak Waseem and Douwe Kiela},
booktitle={ACL},
year={2021}
}
```
Thanks to Kushal Tirumala and Adina Williams for helping the authors put the model on the hub! |
luisrqe/cubucetapenis | luisrqe | 2022-06-10T21:08:15Z | 0 | 0 | null | [
"region:us"
] | null | 2022-06-10T20:52:33Z | git lfs install
https://www.novinhavideosporno.com/wp-content/uploads/2018/11/a-maior-buceta-do-mundo-e-a-mais-escrota-tambem.jpg
https://www.xvideos-tv.com/wp-content/uploads/2021/11/buceta-da-novinha-sendo-arrombada-por-varios-machos-272x180.jpg
http://cdn.xvideos-br.com/media/imagens/10501.jpg
https://upload.wikimedia.org/wikipedia/commons/thumb/a/ac/Sidoka_photoshoot.jpg/800px-Sidoka_photoshoot.jpg
https://rapforte.com/wp-content/uploads/2021/08/Doka.jpg
https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcR2pWEwhp9tl7CDcHd7ELiKLpUPXkhCm4zmCwZGerHYh7CY8WxsGnOSACYussZdIF283so&usqp=CAU
git clone https://huggingface.co/luisrqe/cubucetapenis |
torli/trijki | torli | 2022-06-10T20:45:14Z | 0 | 1 | null | [
"license:artistic-2.0",
"region:us"
] | null | 2022-06-10T20:43:32Z | ---
license: artistic-2.0
---
git lfs install
git clone https://huggingface.co/torli/trijki |
FritzOS/TEdetection_distiBERT_NER_V5 | FritzOS | 2022-06-10T20:35:11Z | 63 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-06-10T20:34:58Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: TEdetection_distiBERT_NER_V5
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# TEdetection_distiBERT_NER_V5
This model is a fine-tuned version of [FritzOS/TEdetection_distilBERT_mLM_V5](https://huggingface.co/FritzOS/TEdetection_distilBERT_mLM_V5) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0029
- Validation Loss: 0.0032
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
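As a rough sketch of how the checkpoint could be queried (an assumption, since the card gives no usage details): the repository ships TensorFlow weights, so the pipeline is asked for the TF framework explicitly, and the example sentence is a placeholder.
```python
from transformers import pipeline

# Token-classification (NER) pipeline over the TensorFlow weights
ner = pipeline(
    "token-classification",
    model="FritzOS/TEdetection_distiBERT_NER_V5",
    framework="tf",
    aggregation_strategy="simple",
)

print(ner("Book a table for two at the new restaurant downtown."))
```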
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 208018, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.0029 | 0.0032 | 0 |
### Framework versions
- Transformers 4.19.4
- TensorFlow 2.8.2
- Datasets 2.2.2
- Tokenizers 0.12.1
|
mmillet/distilrubert-tiny-cased-conversational-v1_single_finetuned_on_cedr_augmented | mmillet | 2022-06-10T20:27:38Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-10T20:14:44Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: distilrubert-tiny-cased-conversational-v1_single_finetuned_on_cedr_augmented
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilrubert-tiny-cased-conversational-v1_single_finetuned_on_cedr_augmented
This model is a fine-tuned version of [DeepPavlov/distilrubert-tiny-cased-conversational-v1](https://huggingface.co/DeepPavlov/distilrubert-tiny-cased-conversational-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5908
- Accuracy: 0.8653
- F1: 0.8656
- Precision: 0.8665
- Recall: 0.8653
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.9172 | 1.0 | 69 | 0.5124 | 0.8246 | 0.8220 | 0.8271 | 0.8246 |
| 0.4709 | 2.0 | 138 | 0.4279 | 0.8528 | 0.8505 | 0.8588 | 0.8528 |
| 0.3194 | 3.0 | 207 | 0.3770 | 0.8737 | 0.8727 | 0.8740 | 0.8737 |
| 0.2459 | 4.0 | 276 | 0.3951 | 0.8685 | 0.8682 | 0.8692 | 0.8685 |
| 0.1824 | 5.0 | 345 | 0.4005 | 0.8831 | 0.8834 | 0.8841 | 0.8831 |
| 0.1515 | 6.0 | 414 | 0.4356 | 0.8800 | 0.8797 | 0.8801 | 0.8800 |
| 0.1274 | 7.0 | 483 | 0.4642 | 0.8727 | 0.8726 | 0.8731 | 0.8727 |
| 0.0833 | 8.0 | 552 | 0.5226 | 0.8633 | 0.8627 | 0.8631 | 0.8633 |
| 0.073 | 9.0 | 621 | 0.5327 | 0.8695 | 0.8686 | 0.8692 | 0.8695 |
| 0.0575 | 10.0 | 690 | 0.5908 | 0.8653 | 0.8656 | 0.8665 | 0.8653 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
huggingtweets/smallmutuals | huggingtweets | 2022-06-10T19:13:07Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-06-10T18:33:00Z | ---
language: en
thumbnail: http://www.huggingtweets.com/smallmutuals/1654888348503/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1433527116948180999/wejtDhFm_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Cool Owl Guy</div>
<div style="text-align: center; font-size: 14px;">@smallmutuals</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Cool Owl Guy.
| Data | Cool Owl Guy |
| --- | --- |
| Tweets downloaded | 367 |
| Retweets | 45 |
| Short tweets | 25 |
| Tweets kept | 297 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/238iiiu5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @smallmutuals's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2hl8vi9y) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2hl8vi9y/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/smallmutuals')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
louisdeco/camembert-base-finetuned-LineCause | louisdeco | 2022-06-10T16:35:03Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"camembert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-10T13:11:32Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- recall
model-index:
- name: camembert-base-finetuned-LineCause
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# camembert-base-finetuned-LineCause
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
- Accuracy: 1.0
- F1: 1.0
- Recall: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` equivalent is sketched after the list):
- learning_rate: 2e-05
- train_batch_size: 50
- eval_batch_size: 50
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
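The sketch below reconstructs these settings as `TrainingArguments`; `output_dir` and any argument not listed above are assumptions rather than values from the original run.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="camembert-base-finetuned-LineCause",  # assumed, not reported
    learning_rate=2e-5,
    per_device_train_batch_size=50,
    per_device_eval_batch_size=50,
    num_train_epochs=2,
    seed=42,
    lr_scheduler_type="linear",  # the Adam betas/epsilon above are the library defaults
)
```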
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---:|:------:|
| 0.0428 | 1.0 | 4409 | 0.0002 | 1.0 | 1.0 | 1.0 |
| 0.0009 | 2.0 | 8818 | 0.0001 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
OTQ/q-FrozenLake-v1-4x4-noSlippery | OTQ | 2022-06-10T15:14:57Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-06-10T15:14:51Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
titi7242229/roberta-base-bne-finetuned_personality_multi | titi7242229 | 2022-06-10T14:19:54Z | 57 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-10T11:55:32Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-bne-finetuned_personality_multi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned_personality_multi
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3709
- Accuracy: 0.5130
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2576 | 1.0 | 125 | 2.2755 | 0.2340 |
| 2.0409 | 2.0 | 250 | 2.1425 | 0.2974 |
| 1.6358 | 3.0 | 375 | 1.8730 | 0.4403 |
| 1.3553 | 4.0 | 500 | 1.7443 | 0.5032 |
| 0.9201 | 5.0 | 625 | 1.7165 | 0.5055 |
| 0.5199 | 6.0 | 750 | 1.7476 | 0.5107 |
| 0.5588 | 7.0 | 875 | 1.7758 | 0.5153 |
| 0.2079 | 8.0 | 1000 | 1.7964 | 0.5251 |
| 0.2685 | 9.0 | 1125 | 1.8886 | 0.5187 |
| 0.1261 | 10.0 | 1250 | 1.9463 | 0.5199 |
| 0.1105 | 11.0 | 1375 | 2.0337 | 0.5222 |
| 0.1572 | 12.0 | 1500 | 2.1206 | 0.5084 |
| 0.0643 | 13.0 | 1625 | 2.1815 | 0.5182 |
| 0.0174 | 14.0 | 1750 | 2.2412 | 0.5176 |
| 0.0266 | 15.0 | 1875 | 2.2741 | 0.5112 |
| 0.0447 | 16.0 | 2000 | 2.3089 | 0.5159 |
| 0.02 | 17.0 | 2125 | 2.3401 | 0.5135 |
| 0.0414 | 18.0 | 2250 | 2.3504 | 0.5159 |
| 0.0122 | 19.0 | 2375 | 2.3661 | 0.5130 |
| 0.0154 | 20.0 | 2500 | 2.3709 | 0.5130 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
RalphX1/dqn-SpaceInvadersNoFrameskip-v4 | RalphX1 | 2022-06-10T13:57:03Z | 6 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-06-10T13:11:26Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 374.00 +/- 214.89
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga RalphX1 -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga RalphX1
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
ahmeddbahaa/mt5-base-finetuned-wikilingua-ar | ahmeddbahaa | 2022-06-10T13:00:43Z | 13 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"ar",
"abstractive summarization",
"generated_from_trainer",
"dataset:wiki_lingua",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2022-06-10T02:40:53Z | ---
license: apache-2.0
tags:
- summarization
- mt5
- ar
- abstractive summarization
- generated_from_trainer
datasets:
- wiki_lingua
model-index:
- name: mt5-base-finetuned-wikilingua-ar
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-wikilingua-ar
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the wiki_lingua dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4936
- Rouge-1: 20.79
- Rouge-2: 7.6
- Rouge-l: 18.81
- Gen Len: 18.73
- Bertscore: 70.87
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 8
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
adi1494/distilbert-base-uncased-finetuned-squad | adi1494 | 2022-06-10T12:39:00Z | 62 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-06-10T06:38:11Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: adi1494/distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# adi1494/distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.5671
- Validation Loss: 1.2217
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 5532, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.5671 | 1.2217 | 0 |
### Framework versions
- Transformers 4.19.3
- TensorFlow 2.8.2
- Datasets 2.2.2
- Tokenizers 0.12.1
|
becher/t5-small-finetuned-arxiv | becher | 2022-06-10T12:28:48Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-06-10T11:59:37Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetuned-arxiv
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-arxiv
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1559
- Rouge1: 37.854
- Rouge2: 20.4934
- Rougel: 33.9992
- Rougelsum: 33.9943
- Gen Len: 15.847
## Model description
More information needed
## Intended uses & limitations
More information needed
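A minimal usage sketch (an assumption based on the summarization setup, not part of the original card); given the short average generation length reported above (Gen Len β‰ˆ 15.8), outputs are likely to be brief, title-like summaries.
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="becher/t5-small-finetuned-arxiv")

abstract = "..."  # replace with an arXiv abstract
print(summarizer(abstract, max_length=30, truncation=True)[0]["summary_text"])
```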
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:-------:|:---------:|:-------:|
| 2.3848 | 1.0 | 3564 | 2.1559 | 37.854 | 20.4934 | 33.9992 | 33.9943 | 15.847 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
stig/distilbert-base-uncased-finetuned | stig | 2022-06-10T10:59:39Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-06-10T09:59:19Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8627
## Model description
More information needed
## Intended uses & limitations
More information needed
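A minimal usage sketch, assuming the checkpoint works with the standard question-answering pipeline; the context and question below are placeholders.
```python
from transformers import pipeline

qa = pipeline("question-answering", model="stig/distilbert-base-uncased-finetuned")

result = qa(
    question="What is the model fine-tuned from?",
    context="This model is a fine-tuned version of distilbert-base-uncased.",
)
print(result["answer"], result["score"])
```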
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0255 | 1.0 | 2312 | 1.9202 |
| 1.7483 | 2.0 | 4624 | 1.8437 |
| 1.5733 | 3.0 | 6936 | 1.8627 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
mmillet/distilrubert-2ndfinetune-epru | mmillet | 2022-06-10T10:52:26Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-10T10:49:55Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: distilrubert-2ndfinetune-epru
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilrubert-2ndfinetune-epru
This model is a fine-tuned version of [mmillet/distilrubert-tiny-cased-conversational-v1_best_finetuned_emotion_experiment_augmented_anger_fear](https://huggingface.co/mmillet/distilrubert-tiny-cased-conversational-v1_best_finetuned_emotion_experiment_augmented_anger_fear) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3531
- Accuracy: 0.9054
- F1: 0.9034
- Precision: 0.9074
- Recall: 0.9054
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.4716 | 1.0 | 11 | 0.2851 | 0.8986 | 0.8945 | 0.9029 | 0.8986 |
| 0.2842 | 2.0 | 22 | 0.3041 | 0.8851 | 0.8796 | 0.8816 | 0.8851 |
| 0.167 | 3.0 | 33 | 0.2996 | 0.8986 | 0.8914 | 0.8997 | 0.8986 |
| 0.1527 | 4.0 | 44 | 0.2443 | 0.9189 | 0.9163 | 0.9222 | 0.9189 |
| 0.0926 | 5.0 | 55 | 0.2777 | 0.9054 | 0.9016 | 0.9059 | 0.9054 |
| 0.0897 | 6.0 | 66 | 0.3081 | 0.9122 | 0.9080 | 0.9147 | 0.9122 |
| 0.0438 | 7.0 | 77 | 0.3332 | 0.8986 | 0.8952 | 0.8993 | 0.8986 |
| 0.0433 | 8.0 | 88 | 0.3480 | 0.8851 | 0.8859 | 0.8896 | 0.8851 |
| 0.0398 | 9.0 | 99 | 0.3531 | 0.9054 | 0.9034 | 0.9074 | 0.9054 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
shivigupta/dqn-SpaceInvadersNoFrameskip-v4 | shivigupta | 2022-06-10T10:11:07Z | 3 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-06-10T10:10:35Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 374.00 +/- 214.89
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga shivigupta -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga shivigupta
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 100000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
YaYaB/SpaceInvadersNoFrameskip-v4-2 | YaYaB | 2022-06-10T09:16:18Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-06-10T09:15:44Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 556.00 +/- 162.23
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga YaYaB -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga YaYaB
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
TurkuNLP/bert-large-finnish-cased-v1 | TurkuNLP | 2022-06-10T08:46:17Z | 152 | 2 | transformers | [
"transformers",
"pytorch",
"fi",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-06-10T07:53:16Z | ---
license: apache-2.0
language: fi
---
This is the large variant of FinBERT (TurkuNLP/bert-base-finnish-cased-v1). The training data is exactly the same as for the base model.
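A minimal usage sketch (not part of the original card), assuming the checkpoint loads with the standard masked-language-model classes; the Finnish example sentence is only a placeholder.
```python
from transformers import pipeline

# Fill-mask with the large Finnish BERT
unmasker = pipeline("fill-mask", model="TurkuNLP/bert-large-finnish-cased-v1")

print(unmasker("Helsinki on Suomen [MASK]."))
``` |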
huggingtweets/drilbot_neo | huggingtweets | 2022-06-10T08:39:44Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1374924360780242944/-Q8NfgEr_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">wintbot_neo</div>
<div style="text-align: center; font-size: 14px;">@drilbot_neo</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from wintbot_neo.
| Data | wintbot_neo |
| --- | --- |
| Tweets downloaded | 3243 |
| Retweets | 373 |
| Short tweets | 468 |
| Tweets kept | 2402 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/25adu2w7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @drilbot_neo's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3keot8ku) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3keot8ku/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/drilbot_neo')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
flood/distilbert-base-uncased-distilled-clinc | flood | 2022-06-10T08:03:08Z | 77 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-10T07:59:25Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9309677419354838
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0389
- Accuracy: 0.9310
## Model description
More information needed
## Intended uses & limitations
More information needed
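A minimal usage sketch, assuming the standard text-classification pipeline; the model predicts one of the clinc_oos intent labels, and the example utterance is a placeholder.
```python
from transformers import pipeline

intent_classifier = pipeline(
    "text-classification",
    model="flood/distilbert-base-uncased-distilled-clinc",
)

print(intent_classifier("How do I transfer money to my savings account?"))
```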
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6206 | 1.0 | 318 | 0.3251 | 0.6610 |
| 0.2571 | 2.0 | 636 | 0.1366 | 0.8584 |
| 0.1392 | 3.0 | 954 | 0.0813 | 0.9081 |
| 0.0967 | 4.0 | 1272 | 0.0598 | 0.9152 |
| 0.0779 | 5.0 | 1590 | 0.0503 | 0.9229 |
| 0.0675 | 6.0 | 1908 | 0.0451 | 0.9271 |
| 0.0615 | 7.0 | 2226 | 0.0425 | 0.9326 |
| 0.058 | 8.0 | 2544 | 0.0403 | 0.9316 |
| 0.0557 | 9.0 | 2862 | 0.0393 | 0.9306 |
| 0.0544 | 10.0 | 3180 | 0.0389 | 0.9310 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Intel/MiniLM-L12-H384-uncased-mrpc | Intel | 2022-06-10T07:06:45Z | 220 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-10T06:55:25Z | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: MiniLM-L12-H384-uncased-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.875
- name: F1
type: f1
value: 0.9097345132743363
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MiniLM-L12-H384-uncased-mrpc
This model is a fine-tuned version of [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4319
- Accuracy: 0.875
- F1: 0.9097
- Combined Score: 0.8924
## Model description
More information needed
## Intended uses & limitations
More information needed
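A minimal usage sketch for the MRPC paraphrase task (not part of the original card); the two sentences are placeholders, and the label order follows the usual GLUE MRPC convention.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Intel/MiniLM-L12-H384-uncased-mrpc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# MRPC is a sentence-pair task: encode both sentences together
inputs = tokenizer(
    "The company reported strong quarterly earnings.",
    "Quarterly earnings at the company were strong.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # conventionally [not_equivalent, equivalent]
```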
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu102
- Datasets 2.2.2
- Tokenizers 0.12.1
|
jayeshgar/dqn-SpaceInvadersNoFrameskip-v4 | jayeshgar | 2022-06-10T06:54:27Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-06-10T06:53:42Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 653.00 +/- 114.70
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jayeshgar -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga jayeshgar
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
ritheshSree/animal-classifier | ritheshSree | 2022-06-10T05:38:54Z | 115 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-06-10T05:21:44Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: animal-classifier
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
# animal-classifier
Autogenerated by HuggingPics πŸ€—πŸ–ΌοΈ
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
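A minimal usage sketch, assuming the standard image-classification pipeline; the image path is only a placeholder.
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="ritheshSree/animal-classifier")

# Any local path or URL pointing to an image of a cat, dog, snake or tiger
print(classifier("path/to/animal.jpg"))
```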
## Example Images
#### cat

#### dog

#### snake

#### tiger
 |
RuiqianLi/wav2vec2-xls-r-300m_Mrbrown_finetune1 | RuiqianLi | 2022-06-10T03:17:06Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:uob_singlish",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-06-09T10:16:21Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- uob_singlish
model-index:
- name: wav2vec2-xls-r-300m_Mrbrown_finetune1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m_Mrbrown_finetune1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the uob_singlish dataset.
## Notes on the dataset

This run used a self-made dataset: the audio of "https://www.youtube.com/watch?v=a2ZOTD3R7JI" was cut into slices and transcribed by hand, giving about 4 minutes of speech in total. The word error rate stayed at 1.0 throughout training; this most likely points to a problem with the dataset rather than the approach, since an earlier fine-tune of the same pre-trained model on a standard Singlish corpus gave good results (see RuiqianLi/wav2vec2-large-xls-r-300m-singlish-colab).
It achieves the following results on the evaluation set:
- Loss: 3.0927
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
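As a usage sketch, transcription can be attempted with the `automatic-speech-recognition` pipeline, although given the WER of 1.0 reported above the transcripts are unlikely to be usable; `sample.wav` is a placeholder and 16 kHz mono audio is assumed:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="RuiqianLi/wav2vec2-xls-r-300m_Mrbrown_finetune1")

# "sample.wav" is a placeholder; wav2vec2 XLS-R models expect 16 kHz mono audio
print(asr("sample.wav"))
```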
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 3.7943 | 20.0 | 200 | 3.0597 | 1.0 |
| 2.9902 | 40.0 | 400 | 3.1604 | 1.0 |
| 2.9696 | 60.0 | 600 | 3.1112 | 1.0 |
| 2.8885 | 80.0 | 800 | 3.0234 | 1.0 |
| 2.8154 | 100.0 | 1000 | 3.0927 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
alibaba-pai/pai-bert-tiny-zh | alibaba-pai | 2022-06-10T02:34:43Z | 272 | 6 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"zh",
"arxiv:2205.00258",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-06-09T03:45:15Z | ---
language: zh
pipeline_tag: fill-mask
widget:
- text: "δΈε½ηι¦ι½ζ―ε[MASK]γ"
- text: "ηε₯Άζ―[MASK]θ²ηγ"
tags:
- bert
license: apache-2.0
---
## Alibaba PAI BERT Tiny Chinese
This project provides Chinese pre-trained language models and various types of NLP tools. The models are pre-trained on the large-scale corpora hosted by the Alibaba PAI team. It is developed based on the EasyNLP framework (https://github.com/alibaba/EasyNLP).
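A minimal fill-mask sketch using one of the widget prompts from this card:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="alibaba-pai/pai-bert-tiny-zh")

# Prompt taken from the widget examples above ("The capital of China is Bei[MASK].")
for prediction in fill_mask("中国的首都是北[MASK]。"):
    print(prediction["token_str"], prediction["score"])
```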
## Citation
If you find the resource is useful, please cite the following paper in your work:
```
@article{easynlp,
title = {EasyNLP: A Comprehensive and Easy-to-use Toolkit for Natural Language Processing},
publisher = {arXiv},
author = {Wang, Chengyu and Qiu, Minghui and Zhang, Taolin and Liu, Tingting and Li, Lei and Wang, Jianing and Wang, Ming and Huang, Jun and Lin, Wei},
url = {https://arxiv.org/abs/2205.00258},
year = {2022}
}
``` |
YeRyeongLee/bert-base-cased-finetuned-filtered-0609 | YeRyeongLee | 2022-06-10T02:29:16Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-10T00:30:02Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: bert-base-cased-finetuned-filtered-0609
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-filtered-0609
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2410
- Accuracy: 0.9748
- Precision: 0.9751
- Recall: 0.9748
- F1: 0.9749
## Model description
More information needed
## Intended uses & limitations
More information needed
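Pending proper documentation, a generic inference sketch is shown below; since the training dataset and label set are not documented here, the returned labels may be uninformative ids such as `LABEL_0`:
```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="YeRyeongLee/bert-base-cased-finetuned-filtered-0609")

# Placeholder input; replace with text from the (undocumented) target domain
print(classifier("Example input sentence"))
```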
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.2028 | 1.0 | 3180 | 0.2405 | 0.9535 | 0.9561 | 0.9535 | 0.9538 |
| 0.1632 | 2.0 | 6360 | 0.1686 | 0.9660 | 0.9664 | 0.9660 | 0.9661 |
| 0.1203 | 3.0 | 9540 | 0.1625 | 0.9648 | 0.9655 | 0.9648 | 0.9648 |
| 0.1233 | 4.0 | 12720 | 0.1510 | 0.9698 | 0.9702 | 0.9698 | 0.9699 |
| 0.0823 | 5.0 | 15900 | 0.1600 | 0.9730 | 0.9732 | 0.9730 | 0.9730 |
| 0.0453 | 6.0 | 19080 | 0.1953 | 0.9723 | 0.9724 | 0.9723 | 0.9723 |
| 0.031 | 7.0 | 22260 | 0.1754 | 0.9755 | 0.9755 | 0.9755 | 0.9755 |
| 0.0166 | 8.0 | 25440 | 0.2155 | 0.9739 | 0.9740 | 0.9739 | 0.9739 |
| 0.0036 | 9.0 | 28620 | 0.2519 | 0.9730 | 0.9733 | 0.9730 | 0.9730 |
| 0.0035 | 10.0 | 31800 | 0.2410 | 0.9748 | 0.9751 | 0.9748 | 0.9749 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.9.1+cu111
- Datasets 1.16.1
- Tokenizers 0.12.1
|
huggingtweets/loganpaul | huggingtweets | 2022-06-10T02:29:07Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-06-10T02:27:26Z | ---
language: en
thumbnail: http://www.huggingtweets.com/loganpaul/1654828143127/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1401837042934468611/okzqIoMb_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Logan Paul</div>
<div style="text-align: center; font-size: 14px;">@loganpaul</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Logan Paul.
| Data | Logan Paul |
| --- | --- |
| Tweets downloaded | 3245 |
| Retweets | 170 |
| Short tweets | 318 |
| Tweets kept | 2757 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/wj9pph5f/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @loganpaul's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1sqzuxgo) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1sqzuxgo/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/loganpaul')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/wickdedaccount | huggingtweets | 2022-06-10T02:20:32Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-06-10T02:17:51Z | ---
language: en
thumbnail: http://www.huggingtweets.com/wickdedaccount/1654827628283/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1353151127026597889/Yarj5Kfr_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">pp</div>
<div style="text-align: center; font-size: 14px;">@wickdedaccount</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from pp.
| Data | pp |
| --- | --- |
| Tweets downloaded | 1028 |
| Retweets | 822 |
| Short tweets | 119 |
| Tweets kept | 87 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1of8kmw1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @wickdedaccount's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2q4m95l8) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2q4m95l8/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/wickdedaccount')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Wikram/Legal-key-to-text | Wikram | 2022-06-10T02:17:44Z | 5 | 2 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-06-10T01:44:21Z | Task:
Given a set of input keywords, generate a corresponding text output for a section in the legal domain.
Dataset:
We used the Contract Understanding Atticus Dataset (CUAD).
It is a corpus of 13,000+ labels in 510 commercial legal contracts.
They have been manually labeled under the supervision of experienced lawyers to identify 41 types of legal clauses (e.g. licenses, warranty, governing law, insurance, etc.).
Workflow:

You can contact me at [email protected] |
25khattab/vit_test_1_95 | 25khattab | 2022-06-10T01:40:54Z | 55 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-06-10T01:40:38Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: vit_test_1_95
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9501661062240601
---
# vit_test_1_95
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images |
huggingtweets/artificialbuttr | huggingtweets | 2022-06-10T01:39:43Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-06-10T01:37:50Z | ---
language: en
thumbnail: http://www.huggingtweets.com/artificialbuttr/1654825134207/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1485413658351968256/NUVesGCM_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">artificialbutter</div>
<div style="text-align: center; font-size: 14px;">@artificialbuttr</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from artificialbutter.
| Data | artificialbutter |
| --- | --- |
| Tweets downloaded | 785 |
| Retweets | 129 |
| Short tweets | 407 |
| Tweets kept | 249 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1ypylns0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @artificialbuttr's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1phf128l) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1phf128l/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/artificialbuttr')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
HrayrM/distilbert-base-uncased-finetuned-clinc | HrayrM | 2022-06-10T01:17:59Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-10T00:50:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9135483870967742
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7771
- Accuracy: 0.9135
## Model description
More information needed
## Intended uses & limitations
More information needed
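In the meantime, a minimal inference sketch is shown below; the model classifies utterances into CLINC150 intents (including out-of-scope), but the exact label names returned depend on the saved configuration:
```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="HrayrM/distilbert-base-uncased-finetuned-clinc")

# Example banking-style utterance; the predicted intent label comes from clinc_oos
print(classifier("Please transfer 100 dollars from checking to savings"))
```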
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2843 | 1.0 | 318 | 3.2793 | 0.7448 |
| 2.6208 | 2.0 | 636 | 1.8750 | 0.8297 |
| 1.5453 | 3.0 | 954 | 1.1565 | 0.8919 |
| 1.0141 | 4.0 | 1272 | 0.8628 | 0.9090 |
| 0.795 | 5.0 | 1590 | 0.7771 | 0.9135 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0
- Datasets 2.2.2
- Tokenizers 0.10.3
|
ExusAI/SRWNN | ExusAI | 2022-06-10T00:54:14Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2022-06-10T00:45:58Z | ---
license: mit
---
Super-resolution model for anime and illustrations, based on VGG11 and waifu2x. This model was trained on around 10k high-resolution images (at least HD).
https://github.com/Exusai/SuperResolutionWaifuNN |
nestoralvaro/mt5-base-finetuned-xsum-data_prep_2021_12_26___t1_7.csv___topic_text_google_mt5_base | nestoralvaro | 2022-06-10T00:52:35Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-06-09T23:49:44Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-base-finetuned-xsum-data_prep_2021_12_26___t1_7.csv___topic_text_google_mt5_base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-xsum-data_prep_2021_12_26___t1_7.csv___topic_text_google_mt5_base
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 2.8146
- Rouge2: 0.6707
- Rougel: 2.8187
- Rougelsum: 2.8098
- Gen Len: 6.4901
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.0 | 1.0 | 3869 | nan | 2.8146 | 0.6707 | 2.8187 | 2.8098 | 6.4901 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
kjunelee/distilbert-base-uncased-finetuned-emotion | kjunelee | 2022-06-10T00:24:32Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-10T00:03:16Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.931
- name: F1
type: f1
value: 0.9313235272564213
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1595
- Accuracy: 0.931
- F1: 0.9313
## Model description
More information needed
## Intended uses & limitations
More information needed
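In the meantime, a minimal inference sketch is shown below; note that the returned labels may be generic ids such as `LABEL_0` unless the saved config maps them to the emotion dataset's six class names:
```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="kjunelee/distilbert-base-uncased-finetuned-emotion")

# Placeholder input; the emotion dataset covers joy, sadness, anger, fear, love, surprise
print(classifier("I can't believe how happy this makes me!"))
```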
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 125 | 0.1873 | 0.924 | 0.9234 |
| 0.1992 | 2.0 | 250 | 0.1649 | 0.929 | 0.9293 |
| 0.1992 | 3.0 | 375 | 0.1595 | 0.931 | 0.9313 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.3.dev0
- Tokenizers 0.12.1
|
ajtamayoh/NLP-CIC-WFU_Clinical_Cases_NER_Paragraph_Tokenized_mBERT_cased_fine_tuned | ajtamayoh | 2022-06-09T23:31:56Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-06-09T23:02:35Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: NLP-CIC-WFU_Clinical_Cases_NER_Paragraph_Tokenized_mBERT_cased_fine_tuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NLP-CIC-WFU_Clinical_Cases_NER_Paragraph_Tokenized_mBERT_cased_fine_tuned
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0537
- Precision: 0.8585
- Recall: 0.7101
- F1: 0.7773
- Accuracy: 0.9893
## Model description
More information needed
## Intended uses & limitations
More information needed
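In the meantime, a minimal sketch of clinical-case entity extraction is shown below; the example sentence is hypothetical and the entity label names depend on the saved configuration:
```python
from transformers import pipeline

ner = pipeline("token-classification",
               model="ajtamayoh/NLP-CIC-WFU_Clinical_Cases_NER_Paragraph_Tokenized_mBERT_cased_fine_tuned",
               aggregation_strategy="simple")  # merge word pieces into entity spans

# Hypothetical Spanish clinical-case sentence
print(ner("Paciente de 45 años con antecedentes de diabetes mellitus tipo 2."))
```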
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0693 | 1.0 | 514 | 0.0416 | 0.9485 | 0.6492 | 0.7708 | 0.9884 |
| 0.0367 | 2.0 | 1028 | 0.0396 | 0.9391 | 0.6710 | 0.7827 | 0.9892 |
| 0.0283 | 3.0 | 1542 | 0.0385 | 0.9388 | 0.6889 | 0.7947 | 0.9899 |
| 0.0222 | 4.0 | 2056 | 0.0422 | 0.9456 | 0.6790 | 0.7904 | 0.9898 |
| 0.0182 | 5.0 | 2570 | 0.0457 | 0.9349 | 0.6925 | 0.7956 | 0.9901 |
| 0.013 | 6.0 | 3084 | 0.0484 | 0.8947 | 0.7062 | 0.7894 | 0.9899 |
| 0.0084 | 7.0 | 3598 | 0.0537 | 0.8585 | 0.7101 | 0.7773 | 0.9893 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
pm390/dqn-SpaceInvadersNoFrameskip-v4 | pm390 | 2022-06-09T22:03:09Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-06-09T22:02:36Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 374.00 +/- 214.89
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga pm390 -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga pm390
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('max_grad_norm', 6),
('n_timesteps', 100000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
nthakur/contriever-base-msmarco | nthakur | 2022-06-09T22:01:51Z | 1,072 | 1 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-06-09T21:50:15Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# nthakur/contriever-base-msmarco
This is a port of the [Contriever MSMARCO Model](https://huggingface.co/facebook/contriever-msmarco) to [sentence-transformers](https://www.SBERT.net): it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('nthakur/contriever-base-msmarco')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('nthakur/contriever-base-msmarco')
model = AutoModel.from_pretrained('nthakur/contriever-base-msmarco')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=nthakur/contriever-base-msmarco)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 509, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
Have a look at: [Contriever Model](https://github.com/facebookresearch/contriever).
<!--- Describe where people can find more information --> |
kabelomalapane/En-Ts | kabelomalapane | 2022-06-09T17:33:20Z | 69 | 0 | transformers | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2022-06-09T16:33:13Z | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: En-Ts
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# En-Ts
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ts](https://huggingface.co/Helsinki-NLP/opus-mt-en-ts) on the None dataset.
It achieves the following results on the evaluation set:
Before training:
- Loss: 3.17
- Bleu: 14.513
After training:
- Loss: 1.3320
- Bleu: 36.7687
## Model description
More information needed
## Intended uses & limitations
More information needed
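In the meantime, a minimal translation sketch (English to Xitsonga) is shown below; output quality has not been verified here:
```python
from transformers import pipeline

translator = pipeline("translation", model="kabelomalapane/En-Ts")

# Placeholder English input; the model translates into Xitsonga
print(translator("Good morning, how are you?")[0]["translation_text"])
```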
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 1.7082 | 1.0 | 5929 | 1.6902 | 32.1311 |
| 1.4606 | 2.0 | 11858 | 1.4996 | 34.1129 |
| 1.3182 | 3.0 | 17787 | 1.4107 | 35.7428 |
| 1.2543 | 4.0 | 23716 | 1.3631 | 36.2009 |
| 1.2116 | 5.0 | 29645 | 1.3389 | 36.5876 |
| 1.1723 | 6.0 | 35574 | 1.3320 | 36.7481 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
tclong/wav2vec2-base-vios-commonvoice | tclong | 2022-06-09T17:17:08Z | 77 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-06-08T18:03:39Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-vios-commonvoice
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-vios-commonvoice
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3823
- Wer: 0.2401
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.2268 | 0.66 | 500 | 0.8746 | 0.5939 |
| 0.8728 | 1.32 | 1000 | 0.6435 | 0.4554 |
| 0.6899 | 1.99 | 1500 | 0.5655 | 0.3995 |
| 0.5842 | 2.65 | 2000 | 0.5267 | 0.3694 |
| 0.5371 | 3.31 | 2500 | 0.4980 | 0.3431 |
| 0.4921 | 3.97 | 3000 | 0.4781 | 0.3276 |
| 0.4508 | 4.64 | 3500 | 0.4434 | 0.3134 |
| 0.433 | 5.3 | 4000 | 0.4348 | 0.2963 |
| 0.404 | 5.96 | 4500 | 0.4248 | 0.2874 |
| 0.3834 | 6.62 | 5000 | 0.4163 | 0.2775 |
| 0.3784 | 7.28 | 5500 | 0.4104 | 0.2751 |
| 0.3669 | 7.95 | 6000 | 0.4143 | 0.2724 |
| 0.3462 | 8.61 | 6500 | 0.4131 | 0.2699 |
| 0.3364 | 9.27 | 7000 | 0.4070 | 0.2617 |
| 0.3249 | 9.93 | 7500 | 0.4076 | 0.2603 |
| 0.3154 | 10.6 | 8000 | 0.3998 | 0.2577 |
| 0.3117 | 11.26 | 8500 | 0.3930 | 0.2505 |
| 0.3101 | 11.92 | 9000 | 0.4003 | 0.2492 |
| 0.298 | 12.58 | 9500 | 0.3960 | 0.2496 |
| 0.2968 | 13.24 | 10000 | 0.3877 | 0.2469 |
| 0.29 | 13.91 | 10500 | 0.3870 | 0.2456 |
| 0.2921 | 14.57 | 11000 | 0.3823 | 0.2401 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
ajtamayoh/NLP-CIC-WFU_Clinical_Cases_NER_Sents_Tokenized_bertin_roberta_base_spanish_fine_tuned | ajtamayoh | 2022-06-09T17:15:48Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-06-09T16:33:08Z | ---
license: cc-by-4.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: NLP-CIC-WFU_Clinical_Cases_NER_Sents_Tokenized_bertin_roberta_base_spanish_fine_tuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NLP-CIC-WFU_Clinical_Cases_NER_Sents_Tokenized_bertin_roberta_base_spanish_fine_tuned
This model is a fine-tuned version of [bertin-project/bertin-roberta-base-spanish](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0973
- Precision: 0.9012
- Recall: 0.6942
- F1: 0.7842
- Accuracy: 0.9857
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0605 | 1.0 | 2568 | 0.0625 | 0.9400 | 0.6322 | 0.7560 | 0.9836 |
| 0.0475 | 2.0 | 5136 | 0.0622 | 0.9533 | 0.6572 | 0.7781 | 0.9849 |
| 0.0374 | 3.0 | 7704 | 0.0552 | 0.9261 | 0.6784 | 0.7831 | 0.9855 |
| 0.0246 | 4.0 | 10272 | 0.0693 | 0.9381 | 0.6658 | 0.7788 | 0.9849 |
| 0.0126 | 5.0 | 12840 | 0.0974 | 0.8918 | 0.6830 | 0.7735 | 0.9849 |
| 0.0061 | 6.0 | 15408 | 0.0886 | 0.8771 | 0.7099 | 0.7847 | 0.9850 |
| 0.0031 | 7.0 | 17976 | 0.0973 | 0.9012 | 0.6942 | 0.7842 | 0.9857 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
XGBooster/dqn-SpaceInvadersNoFrameskip-v4 | XGBooster | 2022-06-09T16:03:42Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-06-09T16:03:00Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 744.00 +/- 231.20
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga XGBooster -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga XGBooster
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|