| Column | Type | Values |
|---|---|---|
| modelId | string | lengths 5 to 139 |
| author | string | lengths 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-06-04 18:27:18 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 468 classes |
| tags | sequence | lengths 1 to 4.05k |
| pipeline_tag | string | 54 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-06-04 18:26:45 |
| card | string | lengths 11 to 1.01M |
lakshaywadhwa1993/ner_hindi_bert | lakshaywadhwa1993 | 2022-08-01T09:14:58Z | 8 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:wikiann",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-08-01T09:05:27Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikiann
model-index:
- name: ner_hindi_bert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ner_hindi_bert
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3713
- Overall Precision: 0.8942
- Overall Recall: 0.8972
- Overall F1: 0.8957
- Overall Accuracy: 0.9367
- Loc F1: 0.8766
- Org F1: 0.8489
- Per F1: 0.9454
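The card does not include a usage snippet; a minimal, untested sketch with the generic `transformers` token-classification pipeline (the Hindi example sentence is purely illustrative) could look like this:
```python
from transformers import pipeline

# Load the fine-tuned checkpoint with the generic token-classification pipeline;
# aggregation_strategy="simple" merges word-piece tokens back into whole entities.
ner = pipeline(
    "token-classification",
    model="lakshaywadhwa1993/ner_hindi_bert",
    aggregation_strategy="simple",
)

# Illustrative sentence: "Narendra Modi is the Prime Minister of India."
print(ner("नरेंद्र मोदी भारत के प्रधानमंत्री हैं।"))
```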
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | Loc F1 | Org F1 | Per F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:----------------:|:------:|:------:|:------:|
| 0.2993 | 3.19 | 1000 | 0.3230 | 0.8779 | 0.8786 | 0.8782 | 0.9244 | 0.8535 | 0.8270 | 0.9358 |
| 0.0641 | 6.39 | 2000 | 0.3713 | 0.8942 | 0.8972 | 0.8957 | 0.9367 | 0.8766 | 0.8489 | 0.9454 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
BekirTaha/ppo-LunarLander-v2 | BekirTaha | 2022-08-01T07:53:28Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"deep-reinforcement-learning",
"reinforcement-learning",
"region:us"
] | reinforcement-learning | 2022-08-01T06:40:27Z | ---
tags:
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
---
# "Beyko7/ppo-LunarLander-v2"
This is a pre-trained model of a PPO agent playing LunarLander-v2 using the [stable-baselines3](https://github.com/DLR-RM/stable-baselines3) library.
### Usage (with Stable-baselines3)
Using this model becomes easy when you have stable-baselines3 and huggingface_sb3 installed:
```
pip install stable-baselines3
pip install huggingface_sb3
```
Then, you can use the model like this:
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
# Retrieve the model from the hub
## repo_id = id of the model repository from the Hugging Face Hub (repo_id = {organization}/{repo_name})
## filename = name of the model zip file from the repository
checkpoint = load_from_hub(repo_id="Beyko7/ppo-LunarLander-v2", filename="LunarLander-v2.zip")
model = PPO.load(checkpoint)
# Evaluate the agent
eval_env = gym.make('LunarLander-v2')
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")
# Watch the agent play (reusing the evaluation environment created above)
obs = eval_env.reset()
for i in range(1000):
    action, _state = model.predict(obs)
    obs, reward, done, info = eval_env.step(action)
    eval_env.render()
    if done:
        obs = eval_env.reset()
eval_env.close()
```
### Evaluation Results
Mean_reward: 248.30 +/- 23.32882124373712
---
|
huggingtweets/kantegory | huggingtweets | 2022-08-01T07:26:39Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-08-01T07:26:04Z | ---
language: en
thumbnail: http://www.huggingtweets.com/kantegory/1659338795219/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1122432883036172288/mYZ4acNy_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">David Dobryakov</div>
<div style="text-align: center; font-size: 14px;">@kantegory</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from David Dobryakov.
| Data | David Dobryakov |
| --- | --- |
| Tweets downloaded | 3017 |
| Retweets | 90 |
| Short tweets | 256 |
| Tweets kept | 2671 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1g9yc7mp/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @kantegory's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2aeg6rk1) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2aeg6rk1/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/kantegory')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
keithanpai/dit-base-finetuned-rvlcdip-finetuned-eurosat | keithanpai | 2022-08-01T05:43:34Z | 58 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-08-01T04:30:41Z | ---
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: dit-base-finetuned-rvlcdip-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7315369261477046
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit-base-finetuned-rvlcdip-finetuned-eurosat
This model is a fine-tuned version of [microsoft/dit-base-finetuned-rvlcdip](https://huggingface.co/microsoft/dit-base-finetuned-rvlcdip) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7997
- Accuracy: 0.7315
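No usage example is provided; a minimal sketch with the image-classification pipeline (the image path is a placeholder) might be:
```python
from transformers import pipeline

# The checkpoint was fine-tuned on an unspecified imagefolder dataset, so the
# returned labels are whatever classes that folder defined.
classifier = pipeline(
    "image-classification",
    model="keithanpai/dit-base-finetuned-rvlcdip-finetuned-eurosat",
)

print(classifier("example_document.png"))  # placeholder image path
```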
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9844 | 0.99 | 70 | 0.9493 | 0.6647 |
| 0.8775 | 1.99 | 140 | 0.8594 | 0.7016 |
| 0.8192 | 2.99 | 210 | 0.7997 | 0.7315 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
reachrkr/Cartpole-v1 | reachrkr | 2022-08-01T02:16:58Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2022-08-01T02:16:50Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Cartpole-v1
results:
- metrics:
- type: mean_reward
value: 40.00 +/- 18.57
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn how to use this model and to train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
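The card itself contains no code. As a rough illustration of the REINFORCE algorithm named in the tags (not the author's implementation, and using the classic pre-0.26 `gym` API), a minimal policy-gradient loop for CartPole-v1 could look like this:
```python
import gym
import torch
import torch.nn as nn
from torch.distributions import Categorical

# Tiny policy network: 4 observation features -> probabilities over 2 actions.
policy = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2), nn.Softmax(dim=-1))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)
env = gym.make("CartPole-v1")
gamma = 0.99

for episode in range(500):
    obs, log_probs, rewards, done = env.reset(), [], [], False
    while not done:
        dist = Categorical(policy(torch.as_tensor(obs, dtype=torch.float32)))
        action = dist.sample()
        obs, reward, done, _ = env.step(action.item())
        log_probs.append(dist.log_prob(action))
        rewards.append(reward)
    # Discounted returns, accumulated backwards over the episode.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.as_tensor(returns, dtype=torch.float32)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    loss = -(torch.stack(log_probs) * returns).sum()  # REINFORCE objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```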
|
notmaineyy/bert-base-multilingual-cased-finetuned-ner | notmaineyy | 2022-08-01T01:37:57Z | 5 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-07-21T01:33:49Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: notmaineyy/bert-base-multilingual-cased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# notmaineyy/bert-base-multilingual-cased-finetuned-ner
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0248
- Validation Loss: 0.0568
- Train Precision: 0.9424
- Train Recall: 0.9471
- Train F1: 0.9448
- Train Accuracy: 0.9863
- Epoch: 2
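The model was trained with Keras, so a TensorFlow load is sketched below; this is an assumption-laden example (the training dataset, and therefore the exact entity label set, is not documented):
```python
from transformers import AutoTokenizer, TFAutoModelForTokenClassification, pipeline

model_id = "notmaineyy/bert-base-multilingual-cased-finetuned-ner"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForTokenClassification.from_pretrained(model_id)

# aggregation_strategy="simple" groups word pieces into whole entity spans.
ner = pipeline("token-classification", model=model, tokenizer=tokenizer,
               aggregation_strategy="simple")
print(ner("Angela Merkel met Emmanuel Macron in Berlin."))
```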
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 10530, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 0.1335 | 0.0705 | 0.9152 | 0.9204 | 0.9178 | 0.9806 | 0 |
| 0.0497 | 0.0562 | 0.9335 | 0.9472 | 0.9403 | 0.9851 | 1 |
| 0.0248 | 0.0568 | 0.9424 | 0.9471 | 0.9448 | 0.9863 | 2 |
### Framework versions
- Transformers 4.21.0
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
|
huggingtweets/ravikiranprao | huggingtweets | 2022-08-01T01:17:35Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-08-01T01:16:52Z | ---
language: en
thumbnail: http://www.huggingtweets.com/ravikiranprao/1659316650453/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1071329495565529088/yyYoLPjy_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Ravikiran P Rao</div>
<div style="text-align: center; font-size: 14px;">@ravikiranprao</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Ravikiran P Rao.
| Data | Ravikiran P Rao |
| --- | --- |
| Tweets downloaded | 208 |
| Retweets | 66 |
| Short tweets | 16 |
| Tweets kept | 126 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3fw3xel4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ravikiranprao's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1g3m6mb3) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1g3m6mb3/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/ravikiranprao')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
elopezlopez/distilbert-base-uncased_fold_5_ternary | elopezlopez | 2022-08-01T00:27:39Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-08-01T00:10:21Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-uncased_fold_5_ternary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fold_5_ternary
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8096
- F1: 0.7352
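Since the card leaves the dataset and class names unspecified, the sketch below simply relies on whatever labels are stored in the fine-tuned config (e.g. LABEL_0/LABEL_1/LABEL_2 for the three classes); it is an illustration, not documented usage:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="elopezlopez/distilbert-base-uncased_fold_5_ternary",
)

# The returned label names come from the model config; the card does not say
# what the three classes represent.
print(classifier("This is an example sentence to classify."))
```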
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 291 | 0.6322 | 0.6742 |
| 0.5533 | 2.0 | 582 | 0.5861 | 0.7285 |
| 0.5533 | 3.0 | 873 | 0.6893 | 0.7117 |
| 0.2576 | 4.0 | 1164 | 1.0393 | 0.7124 |
| 0.2576 | 5.0 | 1455 | 1.1506 | 0.6988 |
| 0.1097 | 6.0 | 1746 | 1.3005 | 0.7166 |
| 0.0487 | 7.0 | 2037 | 1.5242 | 0.7124 |
| 0.0487 | 8.0 | 2328 | 1.5705 | 0.7010 |
| 0.0253 | 9.0 | 2619 | 1.5180 | 0.7194 |
| 0.0253 | 10.0 | 2910 | 1.6251 | 0.7062 |
| 0.022 | 11.0 | 3201 | 1.6299 | 0.7169 |
| 0.022 | 12.0 | 3492 | 1.7322 | 0.7091 |
| 0.0065 | 13.0 | 3783 | 1.8441 | 0.7044 |
| 0.0093 | 14.0 | 4074 | 1.9063 | 0.7097 |
| 0.0093 | 15.0 | 4365 | 1.8096 | 0.7352 |
| 0.0037 | 16.0 | 4656 | 1.8589 | 0.7321 |
| 0.0037 | 17.0 | 4947 | 1.9687 | 0.7211 |
| 0.0036 | 18.0 | 5238 | 1.9244 | 0.7285 |
| 0.0045 | 19.0 | 5529 | 1.9835 | 0.7299 |
| 0.0045 | 20.0 | 5820 | 2.0766 | 0.7139 |
| 0.0024 | 21.0 | 6111 | 2.1118 | 0.7144 |
| 0.0024 | 22.0 | 6402 | 2.0544 | 0.7197 |
| 0.0006 | 23.0 | 6693 | 2.0914 | 0.7217 |
| 0.0006 | 24.0 | 6984 | 2.1028 | 0.7195 |
| 0.0006 | 25.0 | 7275 | 2.1174 | 0.7224 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
RedPandaAINLP/opus-mt-en-ro-finetuned-en-to-ro | RedPandaAINLP | 2022-08-01T00:11:22Z | 15 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-07-31T22:39:13Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: opus-mt-en-ro-finetuned-en-to-ro
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
config: ro-en
split: train
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 28.1505
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ro-finetuned-en-to-ro
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2886
- Bleu: 28.1505
- Gen Len: 34.1036
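A minimal usage sketch, assuming the standard translation pipeline convention for Marian checkpoints (English to Romanian):
```python
from transformers import pipeline

translator = pipeline(
    "translation_en_to_ro",
    model="RedPandaAINLP/opus-mt-en-ro-finetuned-en-to-ro",
)

# Returns a list of dicts with a "translation_text" field.
print(translator("The weather is beautiful today.")[0]["translation_text"])
```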
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.7437 | 1.0 | 38145 | 1.2886 | 28.1505 | 34.1036 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
keithanpai/resnet-50-finetuned-eurosat | keithanpai | 2022-07-31T23:54:26Z | 16 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"resnet",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-07-31T23:46:46Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: resnet-50-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.6676646706586826
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-50-finetuned-eurosat
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1981
- Accuracy: 0.6677
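No usage example is given; a hedged sketch with the lower-level classification API (assuming the repository includes the preprocessor config that the Trainer normally saves; the image path is a placeholder):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "keithanpai/resnet-50-finetuned-eurosat"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("example.jpg")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring logit back to the label stored in the config.
print(model.config.id2label[logits.argmax(-1).item()])
```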
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5279 | 0.99 | 70 | 1.5218 | 0.6677 |
| 1.1982 | 1.99 | 140 | 1.2405 | 0.6677 |
| 1.0836 | 2.99 | 210 | 1.1981 | 0.6677 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
elopezlopez/distilbert-base-uncased_fold_3_ternary | elopezlopez | 2022-07-31T23:52:36Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-07-31T23:35:15Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-uncased_fold_3_ternary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fold_3_ternary
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7987
- F1: 0.7460
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 289 | 0.5903 | 0.6893 |
| 0.5417 | 2.0 | 578 | 0.5822 | 0.7130 |
| 0.5417 | 3.0 | 867 | 0.6471 | 0.7385 |
| 0.2298 | 4.0 | 1156 | 0.8933 | 0.7322 |
| 0.2298 | 5.0 | 1445 | 1.1002 | 0.7147 |
| 0.1012 | 6.0 | 1734 | 1.2041 | 0.7249 |
| 0.0508 | 7.0 | 2023 | 1.3575 | 0.7195 |
| 0.0508 | 8.0 | 2312 | 1.3896 | 0.7385 |
| 0.018 | 9.0 | 2601 | 1.5363 | 0.7238 |
| 0.018 | 10.0 | 2890 | 1.5336 | 0.7364 |
| 0.0142 | 11.0 | 3179 | 1.6335 | 0.7308 |
| 0.0142 | 12.0 | 3468 | 1.6915 | 0.7295 |
| 0.0047 | 13.0 | 3757 | 1.7087 | 0.7427 |
| 0.0058 | 14.0 | 4046 | 1.7875 | 0.7378 |
| 0.0058 | 15.0 | 4335 | 1.7649 | 0.7438 |
| 0.0051 | 16.0 | 4624 | 1.7987 | 0.7460 |
| 0.0051 | 17.0 | 4913 | 1.8435 | 0.7404 |
| 0.0025 | 18.0 | 5202 | 1.9623 | 0.7257 |
| 0.0025 | 19.0 | 5491 | 1.9005 | 0.7304 |
| 0.0029 | 20.0 | 5780 | 1.9437 | 0.7374 |
| 0.0011 | 21.0 | 6069 | 1.9840 | 0.7268 |
| 0.0011 | 22.0 | 6358 | 1.9411 | 0.7346 |
| 0.0025 | 23.0 | 6647 | 1.9233 | 0.7438 |
| 0.0025 | 24.0 | 6936 | 1.9415 | 0.7395 |
| 0.0015 | 25.0 | 7225 | 1.9481 | 0.7411 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
elopezlopez/xlnet-base-cased_fold_3_binary | elopezlopez | 2022-07-31T23:37:52Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlnet",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-07-31T23:14:01Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlnet-base-cased_fold_3_binary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-cased_fold_3_binary
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3616
- F1: 0.7758
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 289 | 0.4668 | 0.7666 |
| 0.4142 | 2.0 | 578 | 0.4259 | 0.7631 |
| 0.4142 | 3.0 | 867 | 0.6744 | 0.7492 |
| 0.235 | 4.0 | 1156 | 0.8879 | 0.7678 |
| 0.235 | 5.0 | 1445 | 1.0036 | 0.7639 |
| 0.1297 | 6.0 | 1734 | 1.1427 | 0.7616 |
| 0.0894 | 7.0 | 2023 | 1.2126 | 0.7626 |
| 0.0894 | 8.0 | 2312 | 1.5098 | 0.7433 |
| 0.0473 | 9.0 | 2601 | 1.3616 | 0.7758 |
| 0.0473 | 10.0 | 2890 | 1.5966 | 0.7579 |
| 0.0325 | 11.0 | 3179 | 1.6669 | 0.7508 |
| 0.0325 | 12.0 | 3468 | 1.7401 | 0.7437 |
| 0.0227 | 13.0 | 3757 | 1.7797 | 0.7515 |
| 0.0224 | 14.0 | 4046 | 1.7349 | 0.7418 |
| 0.0224 | 15.0 | 4335 | 1.7527 | 0.7595 |
| 0.0152 | 16.0 | 4624 | 1.7492 | 0.7634 |
| 0.0152 | 17.0 | 4913 | 1.8178 | 0.7628 |
| 0.0117 | 18.0 | 5202 | 1.7736 | 0.7688 |
| 0.0117 | 19.0 | 5491 | 1.8449 | 0.7704 |
| 0.0055 | 20.0 | 5780 | 1.8687 | 0.7652 |
| 0.0065 | 21.0 | 6069 | 1.8083 | 0.7669 |
| 0.0065 | 22.0 | 6358 | 1.8568 | 0.7559 |
| 0.0054 | 23.0 | 6647 | 1.8760 | 0.7678 |
| 0.0054 | 24.0 | 6936 | 1.8948 | 0.7697 |
| 0.0048 | 25.0 | 7225 | 1.9109 | 0.7680 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
elopezlopez/distilbert-base-uncased_fold_2_ternary | elopezlopez | 2022-07-31T23:35:04Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-07-31T23:17:46Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-uncased_fold_2_ternary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fold_2_ternary
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5810
- F1: 0.7620
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 294 | 0.5886 | 0.7239 |
| 0.557 | 2.0 | 588 | 0.5085 | 0.7524 |
| 0.557 | 3.0 | 882 | 0.6332 | 0.7530 |
| 0.2456 | 4.0 | 1176 | 0.8749 | 0.7161 |
| 0.2456 | 5.0 | 1470 | 1.0601 | 0.7371 |
| 0.1112 | 6.0 | 1764 | 1.1885 | 0.7451 |
| 0.0484 | 7.0 | 2058 | 1.3027 | 0.7240 |
| 0.0484 | 8.0 | 2352 | 1.4647 | 0.7259 |
| 0.0259 | 9.0 | 2646 | 1.4476 | 0.7322 |
| 0.0259 | 10.0 | 2940 | 1.4826 | 0.7388 |
| 0.0164 | 11.0 | 3234 | 1.5869 | 0.7333 |
| 0.0109 | 12.0 | 3528 | 1.5954 | 0.7539 |
| 0.0109 | 13.0 | 3822 | 1.5810 | 0.7620 |
| 0.0082 | 14.0 | 4116 | 1.7165 | 0.7335 |
| 0.0082 | 15.0 | 4410 | 1.8152 | 0.7414 |
| 0.004 | 16.0 | 4704 | 1.7411 | 0.7474 |
| 0.004 | 17.0 | 4998 | 1.8692 | 0.7355 |
| 0.0034 | 18.0 | 5292 | 1.8727 | 0.7303 |
| 0.0009 | 19.0 | 5586 | 1.9813 | 0.7305 |
| 0.0009 | 20.0 | 5880 | 1.9764 | 0.7391 |
| 0.0012 | 21.0 | 6174 | 2.0170 | 0.7291 |
| 0.0012 | 22.0 | 6468 | 2.0240 | 0.7391 |
| 0.0004 | 23.0 | 6762 | 2.0311 | 0.7352 |
| 0.0014 | 24.0 | 7056 | 2.0174 | 0.7334 |
| 0.0014 | 25.0 | 7350 | 2.0282 | 0.7381 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
elopezlopez/distilbert-base-uncased_fold_1_ternary | elopezlopez | 2022-07-31T23:17:34Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-07-31T21:10:12Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-uncased_fold_1_ternary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fold_1_ternary
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0582
- F1: 0.7326
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 290 | 0.5524 | 0.6755 |
| 0.5648 | 2.0 | 580 | 0.5654 | 0.7124 |
| 0.5648 | 3.0 | 870 | 0.6547 | 0.6896 |
| 0.2712 | 4.0 | 1160 | 0.8916 | 0.7263 |
| 0.2712 | 5.0 | 1450 | 1.1187 | 0.7120 |
| 0.1147 | 6.0 | 1740 | 1.2778 | 0.7114 |
| 0.0476 | 7.0 | 2030 | 1.4441 | 0.7151 |
| 0.0476 | 8.0 | 2320 | 1.5535 | 0.7133 |
| 0.0187 | 9.0 | 2610 | 1.6439 | 0.7212 |
| 0.0187 | 10.0 | 2900 | 1.7261 | 0.7313 |
| 0.0138 | 11.0 | 3190 | 1.6806 | 0.7292 |
| 0.0138 | 12.0 | 3480 | 1.8425 | 0.7111 |
| 0.009 | 13.0 | 3770 | 1.9207 | 0.7213 |
| 0.0045 | 14.0 | 4060 | 1.8900 | 0.7202 |
| 0.0045 | 15.0 | 4350 | 1.9730 | 0.7216 |
| 0.0042 | 16.0 | 4640 | 2.0775 | 0.7041 |
| 0.0042 | 17.0 | 4930 | 2.0514 | 0.7106 |
| 0.0019 | 18.0 | 5220 | 2.0582 | 0.7326 |
| 0.0039 | 19.0 | 5510 | 2.1010 | 0.7081 |
| 0.0039 | 20.0 | 5800 | 2.0487 | 0.7273 |
| 0.0025 | 21.0 | 6090 | 2.0415 | 0.7254 |
| 0.0025 | 22.0 | 6380 | 2.0753 | 0.7157 |
| 0.0017 | 23.0 | 6670 | 2.0554 | 0.7246 |
| 0.0017 | 24.0 | 6960 | 2.0644 | 0.7290 |
| 0.0001 | 25.0 | 7250 | 2.0711 | 0.7310 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
keithanpai/vit-base-patch32-384-finetuned-eurosat | keithanpai | 2022-07-31T22:51:54Z | 54 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-07-31T19:46:59Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch32-384-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8423153692614771
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch32-384-finetuned-eurosat
This model is a fine-tuned version of [google/vit-base-patch32-384](https://huggingface.co/google/vit-base-patch32-384) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4381
- Accuracy: 0.8423
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.607 | 0.99 | 70 | 0.5609 | 0.8014 |
| 0.5047 | 1.99 | 140 | 0.4634 | 0.8373 |
| 0.4089 | 2.99 | 210 | 0.4381 | 0.8423 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
elopezlopez/xlnet-base-cased_fold_1_binary | elopezlopez | 2022-07-31T22:49:49Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlnet",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-07-31T22:26:16Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlnet-base-cased_fold_1_binary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-cased_fold_1_binary
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7607
- F1: 0.7778
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 288 | 0.4111 | 0.7555 |
| 0.4387 | 2.0 | 576 | 0.4075 | 0.7540 |
| 0.4387 | 3.0 | 864 | 0.5344 | 0.7567 |
| 0.2471 | 4.0 | 1152 | 0.7405 | 0.7597 |
| 0.2471 | 5.0 | 1440 | 1.0564 | 0.7508 |
| 0.1419 | 6.0 | 1728 | 1.0703 | 0.7751 |
| 0.0845 | 7.0 | 2016 | 1.0866 | 0.7609 |
| 0.0845 | 8.0 | 2304 | 1.2135 | 0.7751 |
| 0.05 | 9.0 | 2592 | 1.3649 | 0.7516 |
| 0.05 | 10.0 | 2880 | 1.4943 | 0.7590 |
| 0.0267 | 11.0 | 3168 | 1.5174 | 0.7412 |
| 0.0267 | 12.0 | 3456 | 1.4884 | 0.7559 |
| 0.0278 | 13.0 | 3744 | 1.5109 | 0.7405 |
| 0.0201 | 14.0 | 4032 | 1.7251 | 0.7409 |
| 0.0201 | 15.0 | 4320 | 1.5833 | 0.7354 |
| 0.0185 | 16.0 | 4608 | 1.7744 | 0.7598 |
| 0.0185 | 17.0 | 4896 | 1.8283 | 0.7619 |
| 0.0066 | 18.0 | 5184 | 1.7607 | 0.7778 |
| 0.0066 | 19.0 | 5472 | 1.7503 | 0.7719 |
| 0.0078 | 20.0 | 5760 | 1.7807 | 0.7508 |
| 0.006 | 21.0 | 6048 | 1.6887 | 0.7629 |
| 0.006 | 22.0 | 6336 | 1.7041 | 0.7678 |
| 0.0074 | 23.0 | 6624 | 1.7337 | 0.7633 |
| 0.0074 | 24.0 | 6912 | 1.7548 | 0.7645 |
| 0.0035 | 25.0 | 7200 | 1.7685 | 0.7621 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
elopezlopez/distilbert-base-uncased_fold_6_binary | elopezlopez | 2022-07-31T22:25:18Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-07-31T22:14:59Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-uncased_fold_6_binary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fold_6_binary
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6838
- F1: 0.7881
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 290 | 0.4181 | 0.7732 |
| 0.4097 | 2.0 | 580 | 0.3967 | 0.7697 |
| 0.4097 | 3.0 | 870 | 0.5811 | 0.7797 |
| 0.2034 | 4.0 | 1160 | 0.8684 | 0.7320 |
| 0.2034 | 5.0 | 1450 | 0.9116 | 0.7718 |
| 0.0794 | 6.0 | 1740 | 1.0588 | 0.7690 |
| 0.0278 | 7.0 | 2030 | 1.2092 | 0.7738 |
| 0.0278 | 8.0 | 2320 | 1.2180 | 0.7685 |
| 0.0233 | 9.0 | 2610 | 1.3005 | 0.7676 |
| 0.0233 | 10.0 | 2900 | 1.4009 | 0.7634 |
| 0.0093 | 11.0 | 3190 | 1.4528 | 0.7805 |
| 0.0093 | 12.0 | 3480 | 1.4803 | 0.7859 |
| 0.0088 | 13.0 | 3770 | 1.4775 | 0.7750 |
| 0.0077 | 14.0 | 4060 | 1.6171 | 0.7699 |
| 0.0077 | 15.0 | 4350 | 1.6429 | 0.7636 |
| 0.0047 | 16.0 | 4640 | 1.5619 | 0.7819 |
| 0.0047 | 17.0 | 4930 | 1.5833 | 0.7724 |
| 0.0034 | 18.0 | 5220 | 1.6400 | 0.7853 |
| 0.0008 | 19.0 | 5510 | 1.6508 | 0.7792 |
| 0.0008 | 20.0 | 5800 | 1.6838 | 0.7881 |
| 0.0009 | 21.0 | 6090 | 1.6339 | 0.7829 |
| 0.0009 | 22.0 | 6380 | 1.6824 | 0.7806 |
| 0.0016 | 23.0 | 6670 | 1.6867 | 0.7876 |
| 0.0016 | 24.0 | 6960 | 1.7107 | 0.7877 |
| 0.0013 | 25.0 | 7250 | 1.6933 | 0.7812 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
elopezlopez/distilbert-base-uncased_fold_5_binary | elopezlopez | 2022-07-31T22:14:52Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-07-31T22:04:19Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-uncased_fold_5_binary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fold_5_binary
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5093
- F1: 0.7801
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 288 | 0.4760 | 0.7315 |
| 0.3992 | 2.0 | 576 | 0.4428 | 0.7785 |
| 0.3992 | 3.0 | 864 | 0.5093 | 0.7801 |
| 0.2021 | 4.0 | 1152 | 0.6588 | 0.7634 |
| 0.2021 | 5.0 | 1440 | 0.9174 | 0.7713 |
| 0.0945 | 6.0 | 1728 | 0.9832 | 0.7726 |
| 0.0321 | 7.0 | 2016 | 1.2103 | 0.7672 |
| 0.0321 | 8.0 | 2304 | 1.3759 | 0.7616 |
| 0.0134 | 9.0 | 2592 | 1.4405 | 0.7570 |
| 0.0134 | 10.0 | 2880 | 1.4591 | 0.7710 |
| 0.0117 | 11.0 | 3168 | 1.4947 | 0.7713 |
| 0.0117 | 12.0 | 3456 | 1.6224 | 0.7419 |
| 0.0081 | 13.0 | 3744 | 1.6462 | 0.7520 |
| 0.0083 | 14.0 | 4032 | 1.6880 | 0.7637 |
| 0.0083 | 15.0 | 4320 | 1.7080 | 0.7380 |
| 0.0048 | 16.0 | 4608 | 1.7352 | 0.7551 |
| 0.0048 | 17.0 | 4896 | 1.6761 | 0.7713 |
| 0.0024 | 18.0 | 5184 | 1.7553 | 0.76 |
| 0.0024 | 19.0 | 5472 | 1.7312 | 0.7673 |
| 0.005 | 20.0 | 5760 | 1.7334 | 0.7713 |
| 0.0032 | 21.0 | 6048 | 1.7963 | 0.7578 |
| 0.0032 | 22.0 | 6336 | 1.7529 | 0.7679 |
| 0.0025 | 23.0 | 6624 | 1.7741 | 0.7662 |
| 0.0025 | 24.0 | 6912 | 1.7515 | 0.7679 |
| 0.0004 | 25.0 | 7200 | 1.7370 | 0.7765 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
wenkai-li/distilroberta-base-finetuned-wikitextepoch_150 | wenkai-li | 2022-07-31T22:09:24Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-31T18:31:03Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-wikitextepoch_150
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitextepoch_150
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8929
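A minimal fill-mask sketch (RoBERTa-style checkpoints use `<mask>` as the mask token; the example sentence is illustrative):
```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="wenkai-li/distilroberta-base-finetuned-wikitextepoch_150",
)

for prediction in fill_mask("The capital of France is <mask>."):
    print(prediction["token_str"], round(prediction["score"], 4))
```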
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 2.2428 | 1.0 | 1121 | 2.0500 |
| 2.1209 | 2.0 | 2242 | 1.9996 |
| 2.0665 | 3.0 | 3363 | 1.9501 |
| 2.0179 | 4.0 | 4484 | 1.9311 |
| 1.9759 | 5.0 | 5605 | 1.9255 |
| 1.9089 | 6.0 | 6726 | 1.8805 |
| 1.9143 | 7.0 | 7847 | 1.8715 |
| 1.8744 | 8.0 | 8968 | 1.8671 |
| 1.858 | 9.0 | 10089 | 1.8592 |
| 1.8141 | 10.0 | 11210 | 1.8578 |
| 1.7917 | 11.0 | 12331 | 1.8574 |
| 1.7752 | 12.0 | 13452 | 1.8423 |
| 1.7722 | 13.0 | 14573 | 1.8287 |
| 1.7354 | 14.0 | 15694 | 1.8396 |
| 1.7217 | 15.0 | 16815 | 1.8244 |
| 1.6968 | 16.0 | 17936 | 1.8278 |
| 1.659 | 17.0 | 19057 | 1.8412 |
| 1.6442 | 18.0 | 20178 | 1.8328 |
| 1.6441 | 19.0 | 21299 | 1.8460 |
| 1.6267 | 20.0 | 22420 | 1.8343 |
| 1.612 | 21.0 | 23541 | 1.8249 |
| 1.5963 | 22.0 | 24662 | 1.8253 |
| 1.6101 | 23.0 | 25783 | 1.7843 |
| 1.5747 | 24.0 | 26904 | 1.8047 |
| 1.5559 | 25.0 | 28025 | 1.8618 |
| 1.5484 | 26.0 | 29146 | 1.8660 |
| 1.5411 | 27.0 | 30267 | 1.8318 |
| 1.5247 | 28.0 | 31388 | 1.8216 |
| 1.5278 | 29.0 | 32509 | 1.8075 |
| 1.4954 | 30.0 | 33630 | 1.8073 |
| 1.4863 | 31.0 | 34751 | 1.7958 |
| 1.4821 | 32.0 | 35872 | 1.8080 |
| 1.4357 | 33.0 | 36993 | 1.8373 |
| 1.4602 | 34.0 | 38114 | 1.8199 |
| 1.447 | 35.0 | 39235 | 1.8325 |
| 1.4292 | 36.0 | 40356 | 1.8075 |
| 1.4174 | 37.0 | 41477 | 1.8168 |
| 1.4103 | 38.0 | 42598 | 1.8095 |
| 1.4168 | 39.0 | 43719 | 1.8233 |
| 1.4005 | 40.0 | 44840 | 1.8388 |
| 1.3799 | 41.0 | 45961 | 1.8235 |
| 1.3657 | 42.0 | 47082 | 1.8298 |
| 1.3559 | 43.0 | 48203 | 1.8165 |
| 1.3723 | 44.0 | 49324 | 1.8059 |
| 1.3535 | 45.0 | 50445 | 1.8451 |
| 1.3533 | 46.0 | 51566 | 1.8458 |
| 1.3469 | 47.0 | 52687 | 1.8237 |
| 1.3247 | 48.0 | 53808 | 1.8264 |
| 1.3142 | 49.0 | 54929 | 1.8209 |
| 1.2958 | 50.0 | 56050 | 1.8244 |
| 1.293 | 51.0 | 57171 | 1.8311 |
| 1.2784 | 52.0 | 58292 | 1.8287 |
| 1.2731 | 53.0 | 59413 | 1.8600 |
| 1.2961 | 54.0 | 60534 | 1.8086 |
| 1.2739 | 55.0 | 61655 | 1.8303 |
| 1.2716 | 56.0 | 62776 | 1.8214 |
| 1.2459 | 57.0 | 63897 | 1.8440 |
| 1.2492 | 58.0 | 65018 | 1.8503 |
| 1.2393 | 59.0 | 66139 | 1.8316 |
| 1.2077 | 60.0 | 67260 | 1.8283 |
| 1.2426 | 61.0 | 68381 | 1.8413 |
| 1.2032 | 62.0 | 69502 | 1.8461 |
| 1.2123 | 63.0 | 70623 | 1.8469 |
| 1.2069 | 64.0 | 71744 | 1.8478 |
| 1.198 | 65.0 | 72865 | 1.8479 |
| 1.1972 | 66.0 | 73986 | 1.8516 |
| 1.1885 | 67.0 | 75107 | 1.8341 |
| 1.1784 | 68.0 | 76228 | 1.8322 |
| 1.1866 | 69.0 | 77349 | 1.8559 |
| 1.1648 | 70.0 | 78470 | 1.8758 |
| 1.1595 | 71.0 | 79591 | 1.8684 |
| 1.1661 | 72.0 | 80712 | 1.8553 |
| 1.1478 | 73.0 | 81833 | 1.8658 |
| 1.1488 | 74.0 | 82954 | 1.8452 |
| 1.1538 | 75.0 | 84075 | 1.8505 |
| 1.1267 | 76.0 | 85196 | 1.8430 |
| 1.1339 | 77.0 | 86317 | 1.8333 |
| 1.118 | 78.0 | 87438 | 1.8419 |
| 1.12 | 79.0 | 88559 | 1.8669 |
| 1.1144 | 80.0 | 89680 | 1.8647 |
| 1.104 | 81.0 | 90801 | 1.8643 |
| 1.0864 | 82.0 | 91922 | 1.8528 |
| 1.0863 | 83.0 | 93043 | 1.8456 |
| 1.0912 | 84.0 | 94164 | 1.8509 |
| 1.0873 | 85.0 | 95285 | 1.8690 |
| 1.0862 | 86.0 | 96406 | 1.8577 |
| 1.0879 | 87.0 | 97527 | 1.8612 |
| 1.0783 | 88.0 | 98648 | 1.8410 |
| 1.0618 | 89.0 | 99769 | 1.8517 |
| 1.0552 | 90.0 | 100890 | 1.8459 |
| 1.0516 | 91.0 | 102011 | 1.8723 |
| 1.0424 | 92.0 | 103132 | 1.8832 |
| 1.0478 | 93.0 | 104253 | 1.8922 |
| 1.0523 | 94.0 | 105374 | 1.8753 |
| 1.027 | 95.0 | 106495 | 1.8625 |
| 1.0364 | 96.0 | 107616 | 1.8673 |
| 1.0203 | 97.0 | 108737 | 1.8806 |
| 1.0309 | 98.0 | 109858 | 1.8644 |
| 1.0174 | 99.0 | 110979 | 1.8659 |
| 1.0184 | 100.0 | 112100 | 1.8590 |
| 1.0234 | 101.0 | 113221 | 1.8614 |
| 1.013 | 102.0 | 114342 | 1.8866 |
| 1.0092 | 103.0 | 115463 | 1.8770 |
| 1.0051 | 104.0 | 116584 | 1.8445 |
| 1.0105 | 105.0 | 117705 | 1.8512 |
| 1.0233 | 106.0 | 118826 | 1.8896 |
| 0.9967 | 107.0 | 119947 | 1.8687 |
| 0.9795 | 108.0 | 121068 | 1.8618 |
| 0.9846 | 109.0 | 122189 | 1.8877 |
| 0.9958 | 110.0 | 123310 | 1.8522 |
| 0.9689 | 111.0 | 124431 | 1.8765 |
| 0.9879 | 112.0 | 125552 | 1.8692 |
| 0.99 | 113.0 | 126673 | 1.8689 |
| 0.9798 | 114.0 | 127794 | 1.8898 |
| 0.9676 | 115.0 | 128915 | 1.8782 |
| 0.9759 | 116.0 | 130036 | 1.8840 |
| 0.9576 | 117.0 | 131157 | 1.8662 |
| 0.9637 | 118.0 | 132278 | 1.8984 |
| 0.9645 | 119.0 | 133399 | 1.8872 |
| 0.9793 | 120.0 | 134520 | 1.8705 |
| 0.9643 | 121.0 | 135641 | 1.9036 |
| 0.961 | 122.0 | 136762 | 1.8683 |
| 0.9496 | 123.0 | 137883 | 1.8785 |
| 0.946 | 124.0 | 139004 | 1.8912 |
| 0.9681 | 125.0 | 140125 | 1.8837 |
| 0.9403 | 126.0 | 141246 | 1.8824 |
| 0.9452 | 127.0 | 142367 | 1.8824 |
| 0.9437 | 128.0 | 143488 | 1.8665 |
| 0.945 | 129.0 | 144609 | 1.8655 |
| 0.9453 | 130.0 | 145730 | 1.8695 |
| 0.9238 | 131.0 | 146851 | 1.8697 |
| 0.9176 | 132.0 | 147972 | 1.8618 |
| 0.9405 | 133.0 | 149093 | 1.8679 |
| 0.9184 | 134.0 | 150214 | 1.9025 |
| 0.9298 | 135.0 | 151335 | 1.9045 |
| 0.9215 | 136.0 | 152456 | 1.9014 |
| 0.9249 | 137.0 | 153577 | 1.8505 |
| 0.9246 | 138.0 | 154698 | 1.8542 |
| 0.9205 | 139.0 | 155819 | 1.8731 |
| 0.9368 | 140.0 | 156940 | 1.8673 |
| 0.9251 | 141.0 | 158061 | 1.8835 |
| 0.9224 | 142.0 | 159182 | 1.8727 |
| 0.9326 | 143.0 | 160303 | 1.8380 |
| 0.916 | 144.0 | 161424 | 1.8857 |
| 0.9361 | 145.0 | 162545 | 1.8547 |
| 0.9121 | 146.0 | 163666 | 1.8587 |
| 0.9156 | 147.0 | 164787 | 1.8863 |
| 0.9131 | 148.0 | 165908 | 1.8809 |
| 0.9185 | 149.0 | 167029 | 1.8734 |
| 0.9183 | 150.0 | 168150 | 1.8929 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.5.0
- Datasets 2.4.0
- Tokenizers 0.12.1
|
elopezlopez/distilbert-base-uncased_fold_4_binary | elopezlopez | 2022-07-31T22:04:12Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-07-31T21:54:02Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-uncased_fold_4_binary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fold_4_binary
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2977
- F1: 0.8083
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 289 | 0.3701 | 0.7903 |
| 0.4005 | 2.0 | 578 | 0.3669 | 0.7994 |
| 0.4005 | 3.0 | 867 | 0.5038 | 0.7955 |
| 0.1945 | 4.0 | 1156 | 0.6353 | 0.8006 |
| 0.1945 | 5.0 | 1445 | 0.8974 | 0.7826 |
| 0.0909 | 6.0 | 1734 | 0.8533 | 0.7764 |
| 0.0389 | 7.0 | 2023 | 0.9969 | 0.7957 |
| 0.0389 | 8.0 | 2312 | 1.0356 | 0.7952 |
| 0.0231 | 9.0 | 2601 | 1.1538 | 0.7963 |
| 0.0231 | 10.0 | 2890 | 1.2011 | 0.7968 |
| 0.0051 | 11.0 | 3179 | 1.2329 | 0.7935 |
| 0.0051 | 12.0 | 3468 | 1.2829 | 0.8056 |
| 0.0066 | 13.0 | 3757 | 1.2946 | 0.7956 |
| 0.004 | 14.0 | 4046 | 1.2977 | 0.8083 |
| 0.004 | 15.0 | 4335 | 1.3970 | 0.7957 |
| 0.0007 | 16.0 | 4624 | 1.3361 | 0.7917 |
| 0.0007 | 17.0 | 4913 | 1.5782 | 0.7954 |
| 0.0107 | 18.0 | 5202 | 1.4641 | 0.7900 |
| 0.0107 | 19.0 | 5491 | 1.4490 | 0.7957 |
| 0.0058 | 20.0 | 5780 | 1.4607 | 0.7932 |
| 0.0016 | 21.0 | 6069 | 1.5048 | 0.7939 |
| 0.0016 | 22.0 | 6358 | 1.5219 | 0.7945 |
| 0.0027 | 23.0 | 6647 | 1.4783 | 0.7937 |
| 0.0027 | 24.0 | 6936 | 1.4715 | 0.7981 |
| 0.0004 | 25.0 | 7225 | 1.4989 | 0.7900 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
elopezlopez/distilbert-base-uncased_fold_1_binary | elopezlopez | 2022-07-31T21:33:03Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-07-31T20:57:24Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-uncased_fold_1_binary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fold_1_binary
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5992
- F1: 0.7687
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 288 | 0.3960 | 0.7467 |
| 0.3988 | 2.0 | 576 | 0.3947 | 0.7487 |
| 0.3988 | 3.0 | 864 | 0.4511 | 0.7662 |
| 0.1853 | 4.0 | 1152 | 0.7226 | 0.7285 |
| 0.1853 | 5.0 | 1440 | 0.9398 | 0.7334 |
| 0.0827 | 6.0 | 1728 | 1.0547 | 0.7427 |
| 0.0287 | 7.0 | 2016 | 1.1602 | 0.7563 |
| 0.0287 | 8.0 | 2304 | 1.3332 | 0.7171 |
| 0.0219 | 9.0 | 2592 | 1.3429 | 0.7420 |
| 0.0219 | 10.0 | 2880 | 1.2603 | 0.7648 |
| 0.0139 | 11.0 | 3168 | 1.4126 | 0.7569 |
| 0.0139 | 12.0 | 3456 | 1.3195 | 0.7483 |
| 0.0115 | 13.0 | 3744 | 1.4356 | 0.7491 |
| 0.0035 | 14.0 | 4032 | 1.5693 | 0.7636 |
| 0.0035 | 15.0 | 4320 | 1.4071 | 0.7662 |
| 0.0071 | 16.0 | 4608 | 1.4561 | 0.7579 |
| 0.0071 | 17.0 | 4896 | 1.5405 | 0.7634 |
| 0.0041 | 18.0 | 5184 | 1.5862 | 0.7589 |
| 0.0041 | 19.0 | 5472 | 1.6782 | 0.76 |
| 0.0024 | 20.0 | 5760 | 1.5699 | 0.7677 |
| 0.0006 | 21.0 | 6048 | 1.5991 | 0.7467 |
| 0.0006 | 22.0 | 6336 | 1.6205 | 0.7682 |
| 0.0003 | 23.0 | 6624 | 1.6334 | 0.7643 |
| 0.0003 | 24.0 | 6912 | 1.5992 | 0.7687 |
| 0.0011 | 25.0 | 7200 | 1.6053 | 0.7624 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
abdulmatinomotoso/t5_large_headline_generator_testing_3 | abdulmatinomotoso | 2022-07-31T21:14:12Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-07-31T18:03:35Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5_large_headline_generator_testing_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5_large_headline_generator_testing_3
This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8407
## Model description
More information needed
## Intended uses & limitations
More information needed
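No usage example is provided; the sketch below assumes the model is called through the `text2text-generation` pipeline to turn article text into a headline (the input text and generation settings are placeholders, not values from the card).

```python
from transformers import pipeline

headline_generator = pipeline(
    "text2text-generation",
    model="abdulmatinomotoso/t5_large_headline_generator_testing_3",
)
article = "Replace this placeholder with the body of a news article."
print(headline_generator(article, max_length=32)[0]["generated_text"])
```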
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9638 | 0.79 | 500 | 0.8474 |
| 0.8478 | 1.57 | 1000 | 0.8356 |
| 0.6981 | 2.36 | 1500 | 0.8407 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
DS-20202/DoubleHardDebias | DS-20202 | 2022-07-31T20:32:45Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2022-07-31T12:08:09Z | ---
title: Double Hard Debiasing
emoji: 👁
colorFrom: blue
colorTo: pink
sdk: gradio
sdk_version: 3.1.1
app_file: app.py
pinned: false
license: mit
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
|
neuralmagic/oBERT-6-downstream-pruned-unstructured-90-squadv1 | neuralmagic | 2022-07-31T19:52:34Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:squad",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-05-25T14:00:05Z | ---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-6-downstream-pruned-unstructured-90-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 3 - 6 Layers - Sparsity 90% - unstructured`.
```
Pruning method: oBERT downstream unstructured
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 90%
Number of layers: 6
```
The dev-set performance of this model:
```
EM = 79.16
F1 = 86.78
```
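For quick experimentation, the checkpoint can presumably be loaded with the standard `transformers` question-answering pipeline; this is an assumption, and the official evaluation flow is the SparseML code linked below.

```python
from transformers import pipeline

# Sketch only: the question and context are placeholders
qa = pipeline(
    "question-answering",
    model="neuralmagic/oBERT-6-downstream-pruned-unstructured-90-squadv1",
)
print(qa(question="Placeholder question?", context="Placeholder SQuAD-style context paragraph."))
```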
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
``` |
neuralmagic/oBERT-6-downstream-pruned-unstructured-80-squadv1 | neuralmagic | 2022-07-31T19:52:34Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:squad",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-05-25T13:59:52Z | ---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-6-downstream-pruned-unstructured-80-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 3 - 6 Layers - Sparsity 80% - unstructured`.
```
Pruning method: oBERT downstream unstructured
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 80%
Number of layers: 6
```
The dev-set performance of this model:
```
EM = 81.15
F1 = 88.20
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
``` |
neuralmagic/oBERT-6-downstream-pruned-block4-80-squadv1 | neuralmagic | 2022-07-31T19:52:34Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:squad",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-05-25T14:00:18Z | ---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-6-downstream-pruned-block4-80-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 3 - 6 Layers - Sparsity 80% - 4-block`.
```
Pruning method: oBERT downstream block-4
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 80%
Number of layers: 6
```
The dev-set performance of this model:
```
EM = 79.55
F1 = 87.00
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
``` |
neuralmagic/oBERT-6-downstream-pruned-block4-90-squadv1 | neuralmagic | 2022-07-31T19:52:34Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:squad",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-05-25T14:00:31Z | ---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-6-downstream-pruned-block4-90-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 3 - 6 Layers - Sparsity 90% - 4-block`.
```
Pruning method: oBERT downstream block-4
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 90%
Number of layers: 6
```
The dev-set performance of this model:
```
EM = 77.65
F1 = 85.34
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
``` |
neuralmagic/oBERT-3-downstream-dense-squadv1 | neuralmagic | 2022-07-31T19:52:33Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:squad",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-05-25T14:00:43Z | ---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-3-downstream-dense-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 3 - 3 Layers - 0% Sparsity`, and it represents an upper bound for performance of the corresponding pruned models:
- 80% unstructured: `neuralmagic/oBERT-3-downstream-pruned-unstructured-80-squadv1`
- 80% block-4: `neuralmagic/oBERT-3-downstream-pruned-block4-80-squadv1`
- 90% unstructured: `neuralmagic/oBERT-3-downstream-pruned-unstructured-90-squadv1`
- 90% block-4: `neuralmagic/oBERT-3-downstream-pruned-block4-90-squadv1`
SQuADv1 dev-set:
```
EM = 76.62
F1 = 84.65
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
``` |
neuralmagic/oBERT-3-downstream-pruned-block4-80-QAT-squadv1 | neuralmagic | 2022-07-31T19:52:33Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:squad",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-05-25T19:21:28Z | ---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-3-downstream-pruned-block4-80-QAT-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 3 - 3 Layers - Sparsity 80% - 4-block + QAT`.
```
Pruning method: oBERT downstream block-4 + QAT
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 80%
Number of layers: 3
```
The dev-set performance of this model:
```
EM = 72.70
F1 = 82.04
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
``` |
neuralmagic/oBERT-6-downstream-dense-squadv1 | neuralmagic | 2022-07-31T19:52:33Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:squad",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-05-25T13:59:35Z | ---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-6-downstream-dense-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 3 - 6 Layers - 0% Sparsity`, and it represents an upper bound for performance of the corresponding pruned models:
- 80% unstructured: `neuralmagic/oBERT-6-downstream-pruned-unstructured-80-squadv1`
- 80% block-4: `neuralmagic/oBERT-6-downstream-pruned-block4-80-squadv1`
- 90% unstructured: `neuralmagic/oBERT-6-downstream-pruned-unstructured-90-squadv1`
- 90% block-4: `neuralmagic/oBERT-6-downstream-pruned-block4-90-squadv1`
SQuADv1 dev-set:
```
EM = 81.17
F1 = 88.32
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
``` |
neuralmagic/oBERT-3-downstream-pruned-block4-90-squadv1 | neuralmagic | 2022-07-31T19:52:33Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:squad",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-05-25T14:01:41Z | ---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-3-downstream-pruned-block4-90-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 3 - 3 Layers - Sparsity 90% - 4-block`.
```
Pruning method: oBERT downstream block-4
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 90%
Number of layers: 3
```
The dev-set performance of this model:
```
EM = 71.36
F1 = 80.69
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
``` |
neuralmagic/oBERT-12-upstream-pruned-unstructured-97-finetuned-qqp | neuralmagic | 2022-07-31T19:52:32Z | 14 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:qqp",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-05-25T13:58:41Z | ---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: qqp
---
# oBERT-12-upstream-pruned-unstructured-97-finetuned-qqp
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 2 - oBERT - QQP 97%`.
```
Pruning method: oBERT upstream unstructured + sparse-transfer to downstream
Paper: https://arxiv.org/abs/2203.07259
Dataset: QQP
Sparsity: 97%
Number of layers: 12
```
The dev-set performance reported in the paper is averaged over three seeds, and we release the best model (marked with `(*)`):
```
| oBERT 97% | acc | F1 |
| ------------ | ----- | ----- |
| seed=42 (*)| 89.85 | 86.41 |
| seed=3407 | 89.72 | 86.42 |
| seed=54321 | 89.70 | 86.24 |
| ------------ | ----- | ----- |
| mean | 89.76 | 86.35 |
| stdev | 0.081 | 0.101 |
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
|
neuralmagic/oBERT-12-upstream-pruned-unstructured-97-finetuned-squadv1-v2 | neuralmagic | 2022-07-31T19:52:32Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:squad",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-06-17T07:30:56Z | ---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-12-upstream-pruned-unstructured-97-finetuned-squadv1-v2
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 2 - oBERT - SQuADv1 97%` (in the upcoming updated version of the paper).
```
Pruning method: oBERT upstream unstructured + sparse-transfer to downstream
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 97%
Number of layers: 12
```
The dev-set performance reported in the paper is averaged over four seeds, and we release the best model (marked with `(*)`):
```
| oBERT 97% | F1 | EM |
| ------------- | ----- | ----- |
| seed=42 | 84.92 | 76.94 |
| seed=3407 | 84.87 | 76.71 |
| seed=123 | 84.95 | 77.06 |
| seed=12345 (*)| 84.95 | 76.90 |
| ------------- | ----- | ----- |
| mean | 84.92 | 76.90 |
| stdev | 0.037 | 0.145 |
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
``` |
neuralmagic/oBERT-12-upstream-pruned-unstructured-97-v2 | neuralmagic | 2022-07-31T19:52:32Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-06-17T07:25:30Z | ---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets:
- bookcorpus
- wikipedia
---
# oBERT-12-upstream-pruned-unstructured-97-v2
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the upstream pruned model used as a starting point for sparse-transfer learning to downstream tasks presented in the `Table 2 - oBERT - {SQuADv1, MNLI, QQP} - 97%` (in the upcoming updated version of the paper).
Finetuned versions of this model for each downstream task are:
- SQuADv1: `neuralmagic/oBERT-12-upstream-pruned-unstructured-97-finetuned-squadv1-v2`
- MNLI: `neuralmagic/oBERT-12-upstream-pruned-unstructured-97-finetuned-mnli-v2`
- QQP: `neuralmagic/oBERT-12-upstream-pruned-unstructured-97-finetuned-qqp-v2`
```
Pruning method: oBERT upstream unstructured
Paper: https://arxiv.org/abs/2203.07259
Dataset: BookCorpus and English Wikipedia
Sparsity: 97%
Number of layers: 12
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
``` |
neuralmagic/oBERT-12-upstream-pruned-unstructured-97-finetuned-qqp-v2 | neuralmagic | 2022-07-31T19:52:32Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:qqp",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-06-17T07:31:57Z | ---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: qqp
---
# oBERT-12-upstream-pruned-unstructured-97-finetuned-qqp-v2
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 2 - oBERT - QQP 97%` (in the upcoming updated version of the paper).
```
Pruning method: oBERT upstream unstructured + sparse-transfer to downstream
Paper: https://arxiv.org/abs/2203.07259
Dataset: QQP
Sparsity: 97%
Number of layers: 12
```
The dev-set performance reported in the paper is averaged over four seeds, and we release the best model (marked with `(*)`):
```
| oBERT 97% | acc | F1 |
| ------------ | ----- | ----- |
| seed=42 (*)| 90.42 | 87.09 |
| seed=3407 | 90.31 | 86.87 |
| seed=123 | 90.20 | 86.76 |
| seed=12345 | 90.39 | 87.16 |
| ------------ | ----- | ----- |
| mean | 90.33 | 86.97 |
| stdev | 0.098 | 0.186 |
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
``` |
neuralmagic/oBERT-12-upstream-pruned-unstructured-90-v2 | neuralmagic | 2022-07-31T19:52:32Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-06-17T07:22:37Z | ---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets:
- bookcorpus
- wikipedia
---
# oBERT-12-upstream-pruned-unstructured-90-v2
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the upstream pruned model used as a starting point for sparse-transfer learning to downstream tasks presented in the `Table 2 - oBERT - {SQuADv1, MNLI, QQP} - 90%` (in the upcoming updated version of the paper).
Finetuned versions of this model for each downstream task are:
- SQuADv1: `neuralmagic/oBERT-12-upstream-pruned-unstructured-90-finetuned-squadv1-v2`
- MNLI: `neuralmagic/oBERT-12-upstream-pruned-unstructured-90-finetuned-mnli-v2`
- QQP: `neuralmagic/oBERT-12-upstream-pruned-unstructured-90-finetuned-qqp-v2`
```
Pruning method: oBERT upstream unstructured
Paper: https://arxiv.org/abs/2203.07259
Dataset: BookCorpus and English Wikipedia
Sparsity: 90%
Number of layers: 12
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
``` |
neuralmagic/oBERT-12-downstream-pruned-unstructured-80-squadv1 | neuralmagic | 2022-07-31T19:52:31Z | 11 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:squad",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-05-25T13:53:16Z | ---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-12-downstream-pruned-unstructured-80-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 1 - 30 Epochs - oBERT - SQuADv1 80%`.
```
Pruning method: oBERT downstream unstructured
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 80%
Number of layers: 12
```
The dev-set performance reported in the paper is averaged over three seeds, and we release the best model (marked with `(*)`):
```
| oBERT 80% | F1 | EM |
| ------------ | ----- | ----- |
| seed=42 | 88.95 | 82.08 |
| seed=3407 (*)| 89.16 | 82.05 |
| seed=54321 | 89.01 | 82.12 |
| ------------ | ----- | ----- |
| mean | 89.04 | 82.08 |
| stdev | 0.108 | 0.035 |
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
``` |
neuralmagic/oBERT-12-downstream-pruned-unstructured-97-mnli | neuralmagic | 2022-07-31T19:52:31Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:mnli",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-05-25T13:55:09Z | ---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: mnli
---
# oBERT-12-downstream-pruned-unstructured-97-mnli
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 1 - 30 Epochs - oBERT - MNLI 97%`.
```
Pruning method: oBERT downstream unstructured
Paper: https://arxiv.org/abs/2203.07259
Dataset: MNLI
Sparsity: 97%
Number of layers: 12
```
The dev-set performance reported in the paper is averaged over three seeds, and we release the best model (marked with `(*)`):
```
| oBERT 97% | m-acc | mm-acc|
| ------------ | ----- | ----- |
| seed=42 (*)| 82.10 | 81.94 |
| seed=3407 | 81.81 | 82.27 |
| seed=54321 | 81.40 | 81.83 |
| ------------ | ----- | ----- |
| mean | 81.77 | 82.01 |
| stdev | 0.351 | 0.228 |
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
``` |
neuralmagic/oBERT-12-downstream-pruned-unstructured-97-qqp | neuralmagic | 2022-07-31T19:52:31Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:qqp",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-05-25T13:56:04Z | ---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: qqp
---
# oBERT-12-downstream-pruned-unstructured-97-qqp
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 1 - 30 Epochs - oBERT - QQP 97%`.
```
Pruning method: oBERT downstream unstructured
Paper: https://arxiv.org/abs/2203.07259
Dataset: QQP
Sparsity: 97%
Number of layers: 12
```
The dev-set performance reported in the paper is averaged over three seeds, and we release the best model (marked with `(*)`):
```
| oBERT 97% | acc | F1 |
| ------------ | ----- | ----- |
| seed=42 (*)| 90.90 | 87.73 |
| seed=3407 | 90.80 | 87.57 |
| seed=54321 | 90.90 | 87.69 |
| ------------ | ----- | ----- |
| mean | 90.87 | 87.66 |
| stdev | 0.057 | 0.083 |
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
``` |
neuralmagic/oBERT-12-downstream-pruned-block4-90-QAT-squadv1 | neuralmagic | 2022-07-31T19:52:30Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:squad",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-05-25T19:20:22Z | ---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-12-downstream-pruned-block4-90-QAT-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 3 - 12 Layers - Sparsity 90% - 4-block + QAT`.
```
Pruning method: oBERT downstream block-4 + QAT
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 90%
Number of layers: 12
```
The dev-set performance of this model:
```
EM = 78.84
F1 = 86.68
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
``` |
neuralmagic/oBERT-12-downstream-pruned-block4-90-squadv1 | neuralmagic | 2022-07-31T19:52:30Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:squad",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-05-25T13:59:21Z | ---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-12-downstream-pruned-block4-90-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 3 - 12 Layers - Sparsity 90% - 4-block`.
```
Pruning method: oBERT downstream block-4
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 90%
Number of layers: 12
```
The dev-set performance of this model:
```
EM = 80.14
F1 = 87.57
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
``` |
neuralmagic/oBERT-12-downstream-pruned-block4-80-QAT-squadv1 | neuralmagic | 2022-07-31T19:52:30Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:squad",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-05-25T19:20:09Z | ---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-12-downstream-pruned-block4-80-QAT-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 3 - 12 Layers - Sparsity 80% - 4-block + QAT`.
```
Pruning method: oBERT downstream block-4 + QAT
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 80%
Number of layers: 12
```
The dev-set performance of this model:
```
EM = 80.58
F1 = 87.89
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
``` |
neuralmagic/oBERT-12-upstream-pruned-unstructured-97-finetuned-mnli-v2 | neuralmagic | 2022-07-31T19:50:41Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:mnli",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-06-17T07:31:30Z | ---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: mnli
---
# oBERT-12-upstream-pruned-unstructured-97-finetuned-mnli-v2
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 2 - oBERT - MNLI 97%` (in the upcoming updated version of the paper).
```
Pruning method: oBERT upstream unstructured + sparse-transfer to downstream
Paper: https://arxiv.org/abs/2203.07259
Dataset: MNLI
Sparsity: 97%
Number of layers: 12
```
The dev-set performance reported in the paper is averaged over four seeds, and we release the best model (marked with `(*)`):
```
| oBERT 97% | m-acc | mm-acc|
| ------------- | ----- | ----- |
| seed=42 | 80.86 | 80.88 |
| seed=3407 | 80.83 | 81.65 |
| seed=123 (*)| 81.18 | 81.06 |
| seed=12345 | 80.79 | 80.95 |
| ------------- | ----- | ----- |
| mean | 80.91 | 81.13 |
| stdev | 0.178 | 0.351 |
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
``` |
SummerChiam/pond_image_classification_12 | SummerChiam | 2022-07-31T16:07:40Z | 54 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-07-31T16:07:23Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: pond_image_classification_12
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.997732400894165
---
# pond_image_classification_12
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
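A minimal inference sketch (assumed usage through the `transformers` image-classification pipeline; the image path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="SummerChiam/pond_image_classification_12",
)
print(classifier("path/to/pond_image.png"))  # placeholder image path
```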
## Example Images
#### Algae0

#### Boiling0

#### BoilingNight0

#### Normal0

#### NormalCement0

#### NormalNight0

#### NormalRain0
 |
anneke/finetuning-distilbert-base-uncased-finetuned-sst-2-english-5000-samples-final | anneke | 2022-07-31T16:05:59Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-07-31T15:49:08Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-distilbert-base-uncased-finetuned-sst-2-english-5000-samples-final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-distilbert-base-uncased-finetuned-sst-2-english-5000-samples-final
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1289
- Accuracy: 0.977
- F1: 0.9878
## Model description
More information needed
## Intended uses & limitations
More information needed
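No example snippet is given; the sketch below assumes the checkpoint behaves like its SST-2 base model, i.e. a two-class sentiment classifier usable through the standard pipeline (the review text is a placeholder).

```python
from transformers import pipeline

sentiment = pipeline(
    "text-classification",
    model="anneke/finetuning-distilbert-base-uncased-finetuned-sst-2-english-5000-samples-final",
)
print(sentiment("The product arrived quickly and works exactly as described."))
```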
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
SummerChiam/pond_image_classification_11 | SummerChiam | 2022-07-31T15:36:10Z | 50 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-07-31T15:35:57Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: pond_image_classification_11
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9951980710029602
---
# pond_image_classification_11
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Algae0

#### Boiling0

#### BoilingNight0

#### Normal0

#### NormalCement0

#### NormalNight0

#### NormalRain0
 |
samwit/ddpm-afhq-cats-128 | samwit | 2022-07-31T15:31:53Z | 2 | 0 | diffusers | [
"diffusers",
"tensorboard",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-07-31T00:49:28Z | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-afhq-cats-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# Minimal sketch (assumed usage; the exact diffusers output API may vary by version)
from diffusers import DDPMPipeline
pipe = DDPMPipeline.from_pretrained("samwit/ddpm-afhq-cats-128")
image = pipe().images[0]  # one unconditional 128x128 sample
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/samwit/ddpm-afhq-cats-128/tensorboard?#scalars)
|
QuickSilver007/Reinforce-Pixelcopter-PLE-v0 | QuickSilver007 | 2022-07-31T13:57:28Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2022-07-31T13:57:21Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- metrics:
- type: mean_reward
value: 21.60 +/- 15.87
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
Kinahem/Reinforce-3 | Kinahem | 2022-07-31T13:02:51Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2022-07-31T13:02:35Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-3
results:
- metrics:
- type: mean_reward
value: 471.20 +/- 86.40
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
Vasanth/bert_emo_classifier | Vasanth | 2022-07-31T12:34:43Z | 4 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-07-30T23:30:12Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: bert_emo_classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_emo_classifier
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2748
## Model description
More information needed
## Intended uses & limitations
More information needed
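No inference example is included; the sketch below assumes the checkpoint works with the standard text-classification pipeline and predicts one of the `emotion` dataset labels (the input sentence is a placeholder).

```python
from transformers import pipeline

emotion_classifier = pipeline(
    "text-classification",
    model="Vasanth/bert_emo_classifier",
)
print(emotion_classifier("I can't believe how happy this news made me!"))
```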
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9063 | 0.25 | 500 | 0.4845 |
| 0.3362 | 0.5 | 1000 | 0.3492 |
| 0.2759 | 0.75 | 1500 | 0.2819 |
| 0.2521 | 1.0 | 2000 | 0.2464 |
| 0.1705 | 1.25 | 2500 | 0.2345 |
| 0.1841 | 1.5 | 3000 | 0.2013 |
| 0.1428 | 1.75 | 3500 | 0.1926 |
| 0.1747 | 2.0 | 4000 | 0.1866 |
| 0.1082 | 2.25 | 4500 | 0.2302 |
| 0.1142 | 2.5 | 5000 | 0.2118 |
| 0.1205 | 2.75 | 5500 | 0.2318 |
| 0.1135 | 3.0 | 6000 | 0.2306 |
| 0.0803 | 3.25 | 6500 | 0.2625 |
| 0.0745 | 3.5 | 7000 | 0.2850 |
| 0.085 | 3.75 | 7500 | 0.2719 |
| 0.0701 | 4.0 | 8000 | 0.2748 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.10.3
|
fabf21/finetuning-sentiment-model-3000-samples | fabf21 | 2022-07-31T11:16:46Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-07-31T11:05:03Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
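The card gives no usage snippet; assuming this is a standard IMDB sentiment classifier, a minimal sketch looks like the following (the review text is a placeholder).

```python
from transformers import pipeline

sentiment = pipeline(
    "text-classification",
    model="fabf21/finetuning-sentiment-model-3000-samples",
)
print(sentiment("One of the best films I have seen this year."))
```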
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Okyx/finetuned-amazon-en-es | Okyx | 2022-07-31T10:33:05Z | 10 | 0 | transformers | [
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-07-31T09:41:05Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Okyx/finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Okyx/finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.0154
- Validation Loss: 3.3292
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
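No usage example is provided; the sketch below assumes the model produces short summaries of English or Spanish review text and that the repository holds TensorFlow weights (hence `framework="tf"`); the review and `max_length` are placeholders.

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="Okyx/finetuned-amazon-en-es",
    framework="tf",  # the repository appears to contain TensorFlow weights
)
review = "Replace this placeholder with a product review in English or Spanish."
print(summarizer(review, max_length=30))
```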
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 9672, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 9.2009 | 4.0465 | 0 |
| 5.7436 | 3.6640 | 1 |
| 5.0419 | 3.5296 | 2 |
| 4.6412 | 3.4582 | 3 |
| 4.3722 | 3.3943 | 4 |
| 4.1947 | 3.3610 | 5 |
| 4.0747 | 3.3295 | 6 |
| 4.0154 | 3.3292 | 7 |
### Framework versions
- Transformers 4.21.0
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Neha2608/xlm-roberta-base-finetuned-panx-it | Neha2608 | 2022-07-31T10:26:20Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-07-02T11:59:49Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2740
- F1: 0.7919
## Model description
More information needed
## Intended uses & limitations
More information needed
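No example is given; the sketch below assumes this is a token-classification (NER) checkpoint fine-tuned on the Italian PAN-X split of XTREME (the example sentence is a placeholder).

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Neha2608/xlm-roberta-base-finetuned-panx-it",
    aggregation_strategy="simple",
)
print(ner("Luca vive a Roma e lavora per la FIAT."))
```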
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8185 | 1.0 | 70 | 0.3369 | 0.7449 |
| 0.2899 | 2.0 | 140 | 0.2740 | 0.7919 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Neha2608/xlm-roberta-base-finetuned-panx-en-fr | Neha2608 | 2022-07-31T09:39:57Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-07-30T21:07:30Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2992
- F1: 0.8056
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5794 | 1.0 | 240 | 0.3464 | 0.7607 |
| 0.2819 | 2.0 | 480 | 0.2992 | 0.8056 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Okyx/finetuned-imdb | Okyx | 2022-07-31T06:34:07Z | 3 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-31T06:26:46Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Okyx/finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Okyx/finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.8587
- Validation Loss: 2.6062
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
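No usage snippet is provided; the sketch below assumes the masked-language-model head is used through the fill-mask pipeline with TensorFlow weights (`[MASK]` is the DistilBERT mask token; the sentence is a placeholder).

```python
from transformers import pipeline

unmasker = pipeline(
    "fill-mask",
    model="Okyx/finetuned-imdb",
    framework="tf",  # the repository appears to contain TensorFlow weights
)
print(unmasker("This movie was absolutely [MASK]."))
```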
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -687, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.8587 | 2.6062 | 0 |
### Framework versions
- Transformers 4.21.0
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
|
ijnekonasa/ppo-LunarLander-v2 | ijnekonasa | 2022-07-31T03:58:07Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-07-31T03:57:44Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 252.64 +/- 18.29
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption, not confirmed by the repo):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The checkpoint filename below is assumed, not taken from the repo
checkpoint = load_from_hub(repo_id="ijnekonasa/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
huggingtweets/brickware | huggingtweets | 2022-07-31T01:55:30Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-07-31T01:54:29Z | ---
language: en
thumbnail: http://www.huggingtweets.com/brickware/1659232526175/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/878332749706178560/7iT6fwNt_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Dr. Lauren Bricker, at my cat’s service</div>
<div style="text-align: center; font-size: 14px;">@brickware</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Dr. Lauren Bricker, at my cat’s service.
| Data | Dr. Lauren Bricker, at my cat’s service |
| --- | --- |
| Tweets downloaded | 2359 |
| Retweets | 417 |
| Short tweets | 168 |
| Tweets kept | 1774 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/9xdpwk6e/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @brickware's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/epqd03zr) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/epqd03zr/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/brickware')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Frikallo/vgdunkeybot | Frikallo | 2022-07-31T01:18:02Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-07-31T00:32:04Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: vgdunkeybot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vgdunkeybot
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
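Since the card provides no usage snippet, here is a minimal, hedged sketch for sampling from the model with the Transformers text-generation pipeline (the prompt is just an example, not from the original card):
```python
from transformers import pipeline

# Generate text with the fine-tuned GPT-2 checkpoint from this repository
generator = pipeline('text-generation', model='Frikallo/vgdunkeybot')
print(generator("Today we're playing", num_return_sequences=3))
```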
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001372
- train_batch_size: 1
- eval_batch_size: 8
- seed: 3313214263
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.9.1+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
|
gazzehamine/data-augmentation-whitenoise-timit-1155 | gazzehamine | 2022-07-30T17:51:37Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-29T14:52:35Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: data-augmentation-whitenoise-timit-1155
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# data-augmentation-whitenoise-timit-1155
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5458
- Wer: 0.3324
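The card does not show how to run inference; below is a minimal, hedged sketch using the Transformers ASR pipeline (the audio path is a placeholder, and 16 kHz mono input is assumed to match the training data):
```python
from transformers import pipeline

# Transcribe a local audio file with the fine-tuned wav2vec2 checkpoint
asr = pipeline("automatic-speech-recognition", model="gazzehamine/data-augmentation-whitenoise-timit-1155")
print(asr("path/to/sample_16kHz.wav"))  # placeholder path
```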
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5204 | 0.8 | 500 | 1.6948 | 0.9531 |
| 0.8435 | 1.6 | 1000 | 0.5367 | 0.5113 |
| 0.4449 | 2.4 | 1500 | 0.4612 | 0.4528 |
| 0.3182 | 3.21 | 2000 | 0.4314 | 0.4156 |
| 0.2328 | 4.01 | 2500 | 0.4250 | 0.4031 |
| 0.1897 | 4.81 | 3000 | 0.4630 | 0.4023 |
| 0.1628 | 5.61 | 3500 | 0.4445 | 0.3922 |
| 0.1472 | 6.41 | 4000 | 0.4452 | 0.3793 |
| 0.1293 | 7.21 | 4500 | 0.4715 | 0.3847 |
| 0.1176 | 8.01 | 5000 | 0.4267 | 0.3757 |
| 0.1023 | 8.81 | 5500 | 0.4494 | 0.3821 |
| 0.092 | 9.62 | 6000 | 0.4501 | 0.3704 |
| 0.0926 | 10.42 | 6500 | 0.4722 | 0.3643 |
| 0.0784 | 11.22 | 7000 | 0.5033 | 0.3765 |
| 0.077 | 12.02 | 7500 | 0.5165 | 0.3684 |
| 0.0704 | 12.82 | 8000 | 0.5138 | 0.3646 |
| 0.0599 | 13.62 | 8500 | 0.5664 | 0.3674 |
| 0.0582 | 14.42 | 9000 | 0.5188 | 0.3575 |
| 0.0526 | 15.22 | 9500 | 0.5605 | 0.3621 |
| 0.0512 | 16.03 | 10000 | 0.5400 | 0.3585 |
| 0.0468 | 16.83 | 10500 | 0.5471 | 0.3603 |
| 0.0445 | 17.63 | 11000 | 0.5168 | 0.3555 |
| 0.0411 | 18.43 | 11500 | 0.5772 | 0.3542 |
| 0.0394 | 19.23 | 12000 | 0.5079 | 0.3567 |
| 0.0354 | 20.03 | 12500 | 0.5427 | 0.3613 |
| 0.0325 | 20.83 | 13000 | 0.5532 | 0.3572 |
| 0.0318 | 21.63 | 13500 | 0.5223 | 0.3514 |
| 0.0269 | 22.44 | 14000 | 0.6002 | 0.3460 |
| 0.028 | 23.24 | 14500 | 0.5591 | 0.3432 |
| 0.0254 | 24.04 | 15000 | 0.5837 | 0.3432 |
| 0.0235 | 24.84 | 15500 | 0.5571 | 0.3397 |
| 0.0223 | 25.64 | 16000 | 0.5470 | 0.3383 |
| 0.0193 | 26.44 | 16500 | 0.5611 | 0.3367 |
| 0.0227 | 27.24 | 17000 | 0.5405 | 0.3342 |
| 0.0183 | 28.04 | 17500 | 0.5205 | 0.3330 |
| 0.017 | 28.85 | 18000 | 0.5512 | 0.3330 |
| 0.0167 | 29.65 | 18500 | 0.5458 | 0.3324 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
huggingtweets/oooo_honey | huggingtweets | 2022-07-30T16:30:09Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-07-30T16:18:37Z | ---
language: en
thumbnail: http://www.huggingtweets.com/oooo_honey/1659198603893/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1442126088944062469/p-BikvvS_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Rock'n'Pomp</div>
<div style="text-align: center; font-size: 14px;">@oooo_honey</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Rock'n'Pomp.
| Data | Rock'n'Pomp |
| --- | --- |
| Tweets downloaded | 510 |
| Retweets | 100 |
| Short tweets | 48 |
| Tweets kept | 362 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/28blz6k6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @oooo_honey's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/35awxfoc) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/35awxfoc/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/oooo_honey')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
anzorq/kbd_lat-ru_char_tokenizer | anzorq | 2022-07-30T16:16:55Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"translation",
"ru",
"kbd",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | translation | 2022-07-29T10:31:32Z | ---
language:
- ru
- kbd
tags:
- translation
--- |
mbarnig/lb-de-fr-en-pt-coqui-stt-models | mbarnig | 2022-07-30T16:14:02Z | 0 | 1 | null | [
"tflite",
"tensorboard",
"STT",
"ASR",
"audio",
"speech recognition",
"coqui.ai",
"lb",
"de",
"fr",
"en",
"pt",
"dataset:mbarnig/lb-2880-STT-CORPUS",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2022-07-24T15:28:08Z | ---
license: cc-by-nc-sa-4.0
language:
- lb
- de
- fr
- en
- pt
tags:
- STT
- ASR
- audio
- speech recognition
- coqui.ai
datasets:
- mbarnig/lb-2880-STT-CORPUS
---
#### The Luxembourgish part of my multilingual automatic speech recognition (ASR) model is the second machine-learning (ML) STT model for Luxembourgish. The very first model was published in May 2022 by [Pr Peter Gilles](https://infolux.uni.lu/automatic-speech-recognition-in-luxembourgish-a-very-first-model/) of the University of Luxembourg.
#### My model has been trained from scratch with my customized dataset [mbarnig/lb-2880-STT_CORPUS](https://huggingface.co/datasets/mbarnig/lb-2880-STT-CORPUS) and the deep-learning toolkit 🐸 [Coqui-STT](https://github.com/coqui-ai/STT) (version 1.3.0). The model was trained without punctuation, using the following alphabet:
```
# Each line in this file represents the Unicode codepoint (UTF-8 encoded)
# associated with a numeric index.
# A line that starts with # is a comment. You can escape it with \# if you wish
# to use '#' in the Alphabet.
'abcdefghijklmnopqrstuvwxyz àáâäçèéëîôöûü
# The last (non-comment) line needs to end with a newline.
```
#### A live inference-demo of the ASR system is available in my HuggingFace space ⌨️ 🇱🇺 🔈 [mbarnig/lb-de-fr-en-pt-COQUI-STT](https://huggingface.co/spaces/mbarnig/lb-de-fr-en-pt-COQUI-STT).
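For local use, a minimal inference sketch with the Coqui STT Python bindings (`pip install stt`) could look as follows; the file names are assumptions, so substitute the actual `.tflite` model shipped in this repository:
```python
import wave
import numpy as np
from stt import Model

# Load the exported TFLite acoustic model (file name is an assumption)
model = Model("model.tflite")

# Read a 16 kHz, 16-bit mono WAV file into an int16 buffer
with wave.open("sample_16kHz.wav", "rb") as wav:
    audio = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)

print(model.stt(audio))
```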
#### Click the tab *training metrics* above to view the live Tensorboard of the model training with the small dataset (2880 samples) and with the expanded dataset (27072 samples), each with and without data augmentation.

#### The speech recognition models for the other languages have been released by Coqui.ai in the [model zoo](https://coqui.ai/models). I use the following versions in my ASR system:
* [French STT v0.9](https://coqui.ai/french/commonvoice-fr/v0.9) Dataset : common-voice.fr
* [German STT v0.9](https://coqui.ai/german/AASHISHAG/v0.9.0) Datasets : Common Voice 5.1, SWC , MAILABS, Tuda-De, Voxforge
* [English STT huge vocab v1.0](https://coqui.ai/english/coqui/v1.0.0-huge-vocab) Datasets : Common Voice 7.0, Librispeech
* [Portuguese STT v0.1.1](https://coqui.ai/portuguese/itml/v0.1.1) Dataset : Common Voice 6.1 |
Neha2608/pegasus-samsum | Neha2608 | 2022-07-30T14:11:54Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-07-03T10:25:47Z | ---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4859
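Since the card omits a usage example, here is a minimal, hedged sketch for dialogue summarization with the Transformers pipeline (the input dialogue is made up):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Neha2608/pegasus-samsum")
dialogue = "Hannah: Hey, do you have Betty's number?\nAmanda: Let me check.\nAmanda: Sorry, I can't find it."
print(summarizer(dialogue))
```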
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7003 | 0.54 | 500 | 1.4859 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
constanter/PPO-LunarLander-v2 | constanter | 2022-07-30T13:34:25Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-07-30T13:33:54Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 268.37 +/- 20.32
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's files for the actual name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it into a PPO agent
checkpoint = load_from_hub(repo_id="constanter/PPO-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
robingeibel/reformer-big_patent-wikipedia-arxiv-16384 | robingeibel | 2022-07-30T13:26:44Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"reformer",
"fill-mask",
"generated_from_trainer",
"dataset:big_patent",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-27T12:05:01Z | ---
tags:
- generated_from_trainer
datasets:
- big_patent
model-index:
- name: reformer-big_patent-wikipedia-arxiv-16384
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# reformer-big_patent-wikipedia-arxiv-16384
This model is a fine-tuned version of [robingeibel/reformer-big_patent-wikipedia-arxiv-16384](https://huggingface.co/robingeibel/reformer-big_patent-wikipedia-arxiv-16384) on the big_patent dataset.
It achieves the following results on the evaluation set:
- Loss: 5.8649
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 5.8656 | 1.0 | 22242 | 5.8649 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
SummerChiam/rust_image_classification_9 | SummerChiam | 2022-07-30T12:33:20Z | 50 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-07-30T12:33:08Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rust_image_classification_9
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9569620490074158
---
# rust_image_classification_9
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
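A minimal, hedged inference sketch (the image path is a placeholder):
```python
from transformers import pipeline

# Rust / non-rust classification with the fine-tuned ViT
classifier = pipeline("image-classification", model="SummerChiam/rust_image_classification_9")
print(classifier("path/to/image.png"))
```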
## Example Images
#### nonrust

#### rust
 |
Neha2608/distilbert-base-uncased-finetuned-emotion | Neha2608 | 2022-07-30T09:43:41Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-28T20:29:01Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: F1
type: f1
value: 0.9184567794520658
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2207
- Accuracy: 0.9185
- F1: 0.9185
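No usage snippet is included in the card; a minimal, hedged sketch with the Transformers text-classification pipeline follows (the example sentence is made up):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Neha2608/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't wait to see you again!"))
```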
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:------:|
| 0.8026 | 1.0 | 250 | 0.3114 | 0.905 | 0.9035 |
| 0.2409 | 2.0 | 500 | 0.2207 | 0.9185 | 0.9185 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
SummerChiam/pond_image_classification_10 | SummerChiam | 2022-07-30T08:57:50Z | 50 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-07-30T08:57:38Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: pond_image_classification_10
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9948979616165161
---
# pond_image_classification_10
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
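A minimal, hedged inference sketch (the image path is a placeholder):
```python
from transformers import pipeline

# Pond-condition classification with the fine-tuned ViT
classifier = pipeline("image-classification", model="SummerChiam/pond_image_classification_10")
print(classifier("path/to/pond_image.png"))
```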
## Example Images
#### Algae

#### Boiling

#### BoilingNight

#### Normal

#### NormalCement

#### NormalNight

#### NormalRain
 |
DrY/marian-finetuned-kde4-en-to-zh | DrY | 2022-07-30T08:05:06Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2022-07-30T07:03:00Z | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-zh
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-zh_CN
split: train
args: en-zh_CN
metrics:
- name: Bleu
type: bleu
value: 40.66579724271391
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-zh
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-zh](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9338
- Bleu: 40.6658
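The card has no usage example; here is a minimal, hedged sketch using the Transformers translation pipeline (the sentence is an arbitrary example of the KDE-style UI text the model was fine-tuned on):
```python
from transformers import pipeline

translator = pipeline("translation", model="DrY/marian-finetuned-kde4-en-to-zh")
print(translator("Open the file menu and select Save As."))
```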
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
r3sist/q-Taxi-v3 | r3sist | 2022-07-30T07:56:02Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-07-30T07:55:55Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# `load_from_hub` and `evaluate_agent` are the helper functions defined in the
# Hugging Face Deep RL course notebook that produced this model (not a pip-installable API).
model = load_from_hub(repo_id="r3sist/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
mbarnig/lb-de-fr-en-pt-coqui-vits-tts | mbarnig | 2022-07-30T06:00:58Z | 222 | 7 | transformers | [
"transformers",
"tensorboard",
"TTS",
"audio",
"synthesis",
"yourTTS",
"speech",
"coqui.ai",
"lb",
"de",
"fr",
"en",
"pt",
"dataset:mbarnig/lb-de-fr-en-pt-12800-TTS-CORPUS",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2022-07-08T20:42:32Z | ---
license: cc-by-nc-sa-4.0
language:
- lb
- de
- fr
- en
- pt
tags:
- TTS
- audio
- synthesis
- yourTTS
- speech
- coqui.ai
datasets:
- mbarnig/lb-de-fr-en-pt-12800-TTS-CORPUS
---
#### This model has been trained from scratch with my customized dataset [mbarnig/lb-de-fr-en-pt-12800-TTS_CORPUS](https://huggingface.co/datasets/mbarnig/lb-de-fr-en-pt-12800-TTS-CORPUS) and the 🐸 [Coqui-TTS multilingual VITS-model recipe](https://github.com/coqui-ai/TTS/tree/dev/recipes/multilingual/vits_tts) (version 0.7.1). The model was trained without phonemes, using the following character set:
```
characters="abcdefghijklmnopqrstuvwxyz ßàáâãäçèéêëíîïóôõöùúûü",
punctuations="!'(),-.:;? ",
phonemes=None,
```
#### A live inference-demo of the model is available in my HuggingFace space ⌨️ 🇱🇺 🔈 [mbarnig/lb_de_fr_en_pt_COQUI_VITS_TTS](https://huggingface.co/spaces/mbarnig/lb_de_fr_en_pt_COQUI_VITS_TTS).
#### Click the tab *training metrics* above to view the live Tensorboard of the model training.
 |
vinitharaj/distilbert-base-uncased-finetuned-squad2 | vinitharaj | 2022-07-30T05:47:35Z | 5 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-07-29T07:47:14Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: vinitharaj/distilbert-base-uncased-finetuned-squad2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# vinitharaj/distilbert-base-uncased-finetuned-squad2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4953
- Validation Loss: 0.3885
- Epoch: 1
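The card does not show how to query the model; a minimal, hedged sketch with the question-answering pipeline is given below (this repository ships TensorFlow weights, so TensorFlow should be installed; the question/context pair is made up):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="vinitharaj/distilbert-base-uncased-finetuned-squad2")
print(qa(question="What was the model fine-tuned for?",
         context="This DistilBERT checkpoint was fine-tuned for extractive question answering."))
```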
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.7037 | 0.4222 | 0 |
| 0.4953 | 0.3885 | 1 |
### Framework versions
- Transformers 4.21.0
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
|
huggingtweets/dags | huggingtweets | 2022-07-30T01:32:18Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-07-30T01:30:26Z | ---
language: en
thumbnail: http://www.huggingtweets.com/dags/1659144733206/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/722815128501026817/IMWCRzEn_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">DAGs</div>
<div style="text-align: center; font-size: 14px;">@dags</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from DAGs.
| Data | DAGs |
| --- | --- |
| Tweets downloaded | 3003 |
| Retweets | 31 |
| Short tweets | 158 |
| Tweets kept | 2814 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3qyk6uzo/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dags's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/18qzuqjb) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/18qzuqjb/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dags')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
yanaiela/roberta-base-epoch_81 | yanaiela | 2022-07-29T23:09:21Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_81",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-28T18:04:26Z | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_81
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 81
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before the training)
to provide the ability to study the training dynamics of such models, and other possible use-cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_81.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_81', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_79 | yanaiela | 2022-07-29T23:08:37Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_79",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-28T18:02:16Z | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_79
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 79
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before the training)
to provide the ability to study the training dynamics of such models, and other possible use-cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_79.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_79', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_78 | yanaiela | 2022-07-29T23:08:15Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_78",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-28T18:01:03Z | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_78
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 78
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before the training)
to provide the ability to study the training dynamics of such models, and other possible use-cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_78.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_78', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_75 | yanaiela | 2022-07-29T23:07:09Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_75",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-28T17:57:46Z | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_75
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 75
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before the training)
to provide the ability to study the training dynamics of such models, and other possible use-cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_75.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_75', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_74 | yanaiela | 2022-07-29T23:06:45Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_74",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-28T17:56:39Z | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_74
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 74
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before the training)
to provide the ability to study the training dynamics of such models, and other possible use-cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_74.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_74', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_71 | yanaiela | 2022-07-29T23:05:36Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_71",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-28T17:53:19Z | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_71
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 71
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before the training)
to provide the ability to study the training dynamics of such models, and other possible use-cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_71.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_71', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_70 | yanaiela | 2022-07-29T23:05:14Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_70",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-28T17:52:21Z | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_70
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 70
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before the training)
to provide the ability to study the training dynamics of such models, and other possible use-cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_70.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_70', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_68 | yanaiela | 2022-07-29T23:04:30Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_68",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-28T17:50:00Z | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_68
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 68
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before the training)
to provide the ability to study the training dynamics of such models, and other possible use-cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_68.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_68', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_67 | yanaiela | 2022-07-29T23:04:02Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_67",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-28T17:48:39Z | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_67
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 67
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before the training)
to provide the ability to study the training dynamics of such models, and other possible use-cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_67.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_67', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_66 | yanaiela | 2022-07-29T23:03:37Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_66",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-28T17:46:45Z | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_66
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 66
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before the training)
to provide the ability to study the training dynamics of such models, and other possible use-cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_66.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_66', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_60 | yanaiela | 2022-07-29T23:01:22Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_60",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-28T17:36:36Z | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_60
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 60
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before the training)
to provide the ability to study the training dynamics of such models, and other possible use-cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_60.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data, and training procedure for the fully trained model are similar
to those of [RoBERTa-base](https://huggingface.co/roberta-base). There are two major
differences from the original model:
* We trained our model for 100K steps instead of 500K.
* We only used Wikipedia and the Book Corpus, which are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_60', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
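Because the checkpoints cover the whole training run, the same prompt can be compared across epochs to inspect training dynamics. A minimal sketch, assuming the released checkpoints follow the `yanaiela/roberta-base-epoch_N` naming pattern:
```
# Minimal sketch: track the top prediction for one prompt across a few epochs.
from transformers import pipeline

prompt = "Hello, I'm the <mask> RoBERTa-base language model"
for epoch in (20, 40, 60):
    fill = pipeline("fill-mask", model=f"yanaiela/roberta-base-epoch_{epoch}", device=-1, top_k=1)
    best = fill(prompt)[0]
    print(f"epoch {epoch}: {best['token_str']!r} (score={best['score']:.3f})")
```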
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_58 | yanaiela | 2022-07-29T23:00:40Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_58",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-28T17:35:06Z | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_58
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 58
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_58.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data, and training procedure for the fully trained model are similar
to those of [RoBERTa-base](https://huggingface.co/roberta-base). There are two major
differences from the original model:
* We trained our model for 100K steps instead of 500K.
* We only used Wikipedia and the Book Corpus, which are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_58', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_57 | yanaiela | 2022-07-29T23:00:18Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_57",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-28T17:34:22Z | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_57
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 57
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_57.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data, and training procedure for the fully trained model are similar
to those of [RoBERTa-base](https://huggingface.co/roberta-base). There are two major
differences from the original model:
* We trained our model for 100K steps instead of 500K.
* We only used Wikipedia and the Book Corpus, which are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_57', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_56 | yanaiela | 2022-07-29T22:59:56Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_56",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-28T17:33:29Z | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_56
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 56
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_56.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data, and training procedure for the fully trained model are similar
to those of [RoBERTa-base](https://huggingface.co/roberta-base). There are two major
differences from the original model:
* We trained our model for 100K steps instead of 500K.
* We only used Wikipedia and the Book Corpus, which are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_56', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_54 | yanaiela | 2022-07-29T22:59:09Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_54",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-28T17:31:39Z | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_54
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 54
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_54.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data, and training procedure for the fully trained model are similar
to those of [RoBERTa-base](https://huggingface.co/roberta-base). There are two major
differences from the original model:
* We trained our model for 100K steps instead of 500K.
* We only used Wikipedia and the Book Corpus, which are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_54', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_51 | yanaiela | 2022-07-29T22:57:57Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_51",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-28T17:29:17Z | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_51
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 51
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_51.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data, and training procedure for the fully trained model are similar
to those of [RoBERTa-base](https://huggingface.co/roberta-base). There are two major
differences from the original model:
* We trained our model for 100K steps instead of 500K.
* We only used Wikipedia and the Book Corpus, which are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_51', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_49 | yanaiela | 2022-07-29T22:57:07Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_49",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-28T17:27:40Z | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_49
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 49
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_49.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data, and training procedure for the fully trained model are similar
to those of [RoBERTa-base](https://huggingface.co/roberta-base). There are two major
differences from the original model:
* We trained our model for 100K steps instead of 500K.
* We only used Wikipedia and the Book Corpus, which are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_49', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_47 | yanaiela | 2022-07-29T22:56:19Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_47",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-28T17:26:12Z | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_47
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 47
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_47.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data, and training procedure for the fully trained model are similar
to those of [RoBERTa-base](https://huggingface.co/roberta-base). There are two major
differences from the original model:
* We trained our model for 100K steps instead of 500K.
* We only used Wikipedia and the Book Corpus, which are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_47', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_45 | yanaiela | 2022-07-29T22:55:32Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_45",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-28T17:24:44Z | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_45
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 45
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_45.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data, and training procedure for the fully trained model are similar
to those of [RoBERTa-base](https://huggingface.co/roberta-base). There are two major
differences from the original model:
* We trained our model for 100K steps instead of 500K.
* We only used Wikipedia and the Book Corpus, which are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_45', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_44 | yanaiela | 2022-07-29T22:55:07Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_44",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-28T17:24:01Z | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_44
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 44
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_44.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data, and training procedure for the fully trained model are similar
to those of [RoBERTa-base](https://huggingface.co/roberta-base). There are two major
differences from the original model:
* We trained our model for 100K steps instead of 500K.
* We only used Wikipedia and the Book Corpus, which are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_44', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_42 | yanaiela | 2022-07-29T22:54:14Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_42",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-28T17:22:35Z | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_42
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 42
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_42.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data, and training procedure for the fully trained model are similar
to those of [RoBERTa-base](https://huggingface.co/roberta-base). There are two major
differences from the original model:
* We trained our model for 100K steps instead of 500K.
* We only used Wikipedia and the Book Corpus, which are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_42', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_39 | yanaiela | 2022-07-29T22:53:02Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_39",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-28T17:20:23Z | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_39
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 39
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_39.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data, and training procedure for the fully trained model are similar
to those of [RoBERTa-base](https://huggingface.co/roberta-base). There are two major
differences from the original model:
* We trained our model for 100K steps instead of 500K.
* We only used Wikipedia and the Book Corpus, which are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_39', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_37 | yanaiela | 2022-07-29T22:52:26Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_37",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-28T17:18:36Z | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_37
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 37
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_37.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data, and training procedure for the fully trained model are similar
to those of [RoBERTa-base](https://huggingface.co/roberta-base). There are two major
differences from the original model:
* We trained our model for 100K steps instead of 500K.
* We only used Wikipedia and the Book Corpus, which are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_37', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_36 | yanaiela | 2022-07-29T22:52:02Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_36",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-28T17:17:50Z | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_36
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 36
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_36.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data, and training procedure for the fully trained model are similar
to those of [RoBERTa-base](https://huggingface.co/roberta-base). There are two major
differences from the original model:
* We trained our model for 100K steps instead of 500K.
* We only used Wikipedia and the Book Corpus, which are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_36', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_35 | yanaiela | 2022-07-29T22:51:43Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_35",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-28T17:17:09Z | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_35
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 35
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_35.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data, and training procedure for the fully trained model are similar
to those of [RoBERTa-base](https://huggingface.co/roberta-base). There are two major
differences from the original model:
* We trained our model for 100K steps instead of 500K.
* We only used Wikipedia and the Book Corpus, which are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_35', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_34 | yanaiela | 2022-07-29T22:51:23Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_34",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-28T17:16:23Z | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_34
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 34
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_34.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data, and training procedure for the fully trained model are similar
to those of [RoBERTa-base](https://huggingface.co/roberta-base). There are two major
differences from the original model:
* We trained our model for 100K steps instead of 500K.
* We only used Wikipedia and the Book Corpus, which are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_34', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_33 | yanaiela | 2022-07-29T22:51:06Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_33",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-28T17:15:37Z | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_33
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 33
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_33.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data, and training procedure for the fully trained model are similar
to those of [RoBERTa-base](https://huggingface.co/roberta-base). There are two major
differences from the original model:
* We trained our model for 100K steps instead of 500K.
* We only used Wikipedia and the Book Corpus, which are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_33', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|