modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
abdouaziiz/pretraining-wav2vec2.0 | 0c62dbf035da18daf798461507306cde867ce753 | 2022-07-25T16:35:20.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
]
| automatic-speech-recognition | false | abdouaziiz | null | abdouaziiz/pretraining-wav2vec2.0 | 7 | null | transformers | 14,700 | Entry not found |
sdadas/st-polish-paraphrase-from-mpnet | 091936fc301009b86fcde596fe86707c84587e8c | 2022-07-25T19:30:56.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
]
| sentence-similarity | false | sdadas | null | sdadas/st-polish-paraphrase-from-mpnet | 7 | null | sentence-transformers | 14,701 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sdadas/st-polish-paraphrase-from-mpnet
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sdadas/st-polish-paraphrase-from-mpnet')
embeddings = model.encode(sentences)
print(embeddings)
```
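For paraphrase detection, the resulting embeddings can be compared with cosine similarity. A minimal sketch (the Polish example sentences are illustrative, not from the model authors):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sdadas/st-polish-paraphrase-from-mpnet')

# Two Polish sentences with the same meaning (illustrative examples)
embeddings = model.encode(["Kot śpi na kanapie.", "Na kanapie śpi kot."], convert_to_tensor=True)

# Cosine similarity close to 1 suggests the sentences are paraphrases
print(util.cos_sim(embeddings[0], embeddings[1]))
```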
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sdadas/st-polish-paraphrase-from-mpnet')
model = AutoModel.from_pretrained('sdadas/st-polish-paraphrase-from-mpnet')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sdadas/st-polish-paraphrase-from-mpnet)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
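Loading the model by name with `SentenceTransformer('sdadas/st-polish-paraphrase-from-mpnet')` reconstructs this stack automatically; as a sketch of what the printed configuration corresponds to, the same architecture can be assembled explicitly from `sentence-transformers` modules:
```python
from sentence_transformers import SentenceTransformer, models

# Transformer module wrapping the underlying RobertaModel (max_seq_length as printed above)
word_embedding_model = models.Transformer('sdadas/st-polish-paraphrase-from-mpnet', max_seq_length=512)

# Mean pooling over the 768-dimensional token embeddings, matching the Pooling configuration above
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),
    pooling_mode_mean_tokens=True,
    pooling_mode_cls_token=False,
    pooling_mode_max_tokens=False,
)

model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
```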
## Citing & Authors
<!--- Describe where people can find more information --> |
huggingtweets/fireship_dev-hacksultan-prathkum | b38265a861d9ea2f2b1568eb6ab6c14b1a3336fd | 2022-07-25T23:02:30.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/fireship_dev-hacksultan-prathkum | 7 | null | transformers | 14,702 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1547203581366874113/OW-xVizu_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1451624172266868739/lpi5wPb4_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1436819851566219267/HEffZjvP_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Pratham & Name cannot be blank & Fireship</div>
<div style="text-align: center; font-size: 14px;">@fireship_dev-hacksultan-prathkum</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Pratham & Name cannot be blank & Fireship.
| Data | Pratham | Name cannot be blank | Fireship |
| --- | --- | --- | --- |
| Tweets downloaded | 3247 | 3242 | 2081 |
| Retweets | 650 | 598 | 721 |
| Short tweets | 252 | 605 | 114 |
| Tweets kept | 2345 | 2039 | 1246 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3rmu05er/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @fireship_dev-hacksultan-prathkum's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2qzxq4v7) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2qzxq4v7/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/fireship_dev-hacksultan-prathkum')
generator("My dream is", num_return_sequences=5)
```
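Generation can also be tuned through the usual sampling arguments of the `text-generation` pipeline; a minimal sketch (the parameter values below are arbitrary illustrations, not settings recommended by the author):
```python
from transformers import pipeline

generator = pipeline('text-generation',
                     model='huggingtweets/fireship_dev-hacksultan-prathkum')

# Sample five continuations with nucleus sampling and a length cap
generator("My dream is",
          num_return_sequences=5,
          do_sample=True,
          top_p=0.95,
          temperature=0.9,
          max_new_tokens=40)
```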
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
enoriega/rule_learning_1mm_many_negatives_spanpred_margin_avg | fd357316696b483c776356d76aa23556e5e6f9b8 | 2022-07-27T14:45:37.000Z | [
"pytorch",
"tensorboard",
"bert",
"dataset:enoriega/odinsynth_dataset",
"transformers",
"generated_from_trainer",
"model-index"
]
| null | false | enoriega | null | enoriega/rule_learning_1mm_many_negatives_spanpred_margin_avg | 7 | null | transformers | 14,703 | ---
tags:
- generated_from_trainer
datasets:
- enoriega/odinsynth_dataset
model-index:
- name: rule_learning_1mm_many_negatives_spanpred_margin_avg
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rule_learning_1mm_many_negatives_spanpred_margin_avg
This model is a fine-tuned version of [enoriega/rule_softmatching](https://huggingface.co/enoriega/rule_softmatching) on the enoriega/odinsynth_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2421
- Margin Accuracy: 0.8897
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2000
- total_train_batch_size: 8000
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
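As a rough sketch, these settings could be expressed with `transformers.TrainingArguments` roughly as follows (illustrative only; the actual training script is not part of this card, and `output_dir` is a placeholder):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="rule_learning_1mm_many_negatives_spanpred_margin_avg",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=2000,  # with a per-device batch size of 4, this gives the total train batch size of 8000
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    fp16=True,                         # "Native AMP" mixed precision
)
```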
### Training results
| Training Loss | Epoch | Step | Validation Loss | Margin Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------------:|
| 0.3867 | 0.16 | 20 | 0.4023 | 0.8187 |
| 0.3506 | 0.32 | 40 | 0.3381 | 0.8523 |
| 0.3195 | 0.48 | 60 | 0.3096 | 0.8613 |
| 0.3052 | 0.64 | 80 | 0.2957 | 0.8640 |
| 0.2859 | 0.8 | 100 | 0.2922 | 0.8679 |
| 0.297 | 0.96 | 120 | 0.2871 | 0.8688 |
| 0.2717 | 1.12 | 140 | 0.2761 | 0.8732 |
| 0.2671 | 1.28 | 160 | 0.2751 | 0.8743 |
| 0.2677 | 1.44 | 180 | 0.2678 | 0.8757 |
| 0.2693 | 1.6 | 200 | 0.2627 | 0.8771 |
| 0.2675 | 1.76 | 220 | 0.2573 | 0.8813 |
| 0.2732 | 1.92 | 240 | 0.2546 | 0.8858 |
| 0.246 | 2.08 | 260 | 0.2478 | 0.8869 |
| 0.2355 | 2.24 | 280 | 0.2463 | 0.8871 |
| 0.2528 | 2.4 | 300 | 0.2449 | 0.8886 |
| 0.2512 | 2.56 | 320 | 0.2443 | 0.8892 |
| 0.2527 | 2.72 | 340 | 0.2441 | 0.8893 |
| 0.2346 | 2.88 | 360 | 0.2424 | 0.8895 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
JoAmps/distilbert-base-uncased-finetuned-emotion | c23e00c5e2e761903138fbfdaf53e9cc881cc8e5 | 2022-07-26T11:34:39.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | JoAmps | null | JoAmps/distilbert-base-uncased-finetuned-emotion | 7 | null | transformers | 14,704 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.9243485103663568
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2256
- Accuracy: 0.9245
- F1: 0.9243
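A minimal inference sketch using the pipeline API (the input sentence is an illustrative example, not from the training data):
```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="JoAmps/distilbert-base-uncased-finetuned-emotion")

# Returns the predicted emotion label and its score
print(classifier("I can't believe how well this turned out, I'm thrilled!"))
```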
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8577 | 1.0 | 250 | 0.3235 | 0.9095 | 0.9064 |
| 0.2562 | 2.0 | 500 | 0.2256 | 0.9245 | 0.9243 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0
- Datasets 2.4.0
- Tokenizers 0.12.1
|
sysresearch101/t5-large-finetuned-xsum | 7dc62521b8aa4cc5d1e6f49b203e6858e6c69319 | 2022-07-27T04:53:53.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | sysresearch101 | null | sysresearch101/t5-large-finetuned-xsum | 7 | null | transformers | 14,705 | Entry not found |
affahrizain/distilbert-base-uncased-mlm-finetuned-imdb | 1264b3ff4b4918e524cbf35c0ef0aa4865b0fbc4 | 2022-07-26T20:44:44.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| fill-mask | false | affahrizain | null | affahrizain/distilbert-base-uncased-mlm-finetuned-imdb | 7 | null | transformers | 14,706 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-mlm-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-mlm-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6271
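A minimal fill-mask sketch (the masked movie-review sentence is an illustrative example):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask",
                     model="affahrizain/distilbert-base-uncased-mlm-finetuned-imdb")

# DistilBERT uses the [MASK] token; the pipeline returns the top-scoring completions
print(fill_mask("This movie was absolutely [MASK]."))
```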
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7029 | 1.0 | 391 | 0.6418 |
| 0.6557 | 2.0 | 782 | 0.6315 |
| 0.6482 | 3.0 | 1173 | 0.6268 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
olemeyer/zero_shot_issue_classification_bart-large-32-a | db0122fb5c1e91571850b1dc1bf7b4f46af59fe9 | 2022-07-26T23:46:56.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | olemeyer | null | olemeyer/zero_shot_issue_classification_bart-large-32-a | 7 | null | transformers | 14,707 | Entry not found |
eclat12450/fine-tuned-NSPKcBert-v2-10 | c4389b5ab6ca8176e8f75c64c5f46377213be754 | 2022-07-29T03:33:50.000Z | [
"pytorch",
"bert",
"next-sentence-prediction",
"transformers"
]
| null | false | eclat12450 | null | eclat12450/fine-tuned-NSPKcBert-v2-10 | 7 | null | transformers | 14,708 | Entry not found |
nguyenbh/vit-demo | c2367c5bcc87ad9082621208b784294a771e115e | 2022-07-27T23:34:32.000Z | [
"pytorch",
"vit",
"image-classification",
"transformers"
]
| image-classification | false | nguyenbh | null | nguyenbh/vit-demo | 7 | null | transformers | 14,709 | Entry not found |
Junmai/KR-Roberta-large-v1 | 09f85c8b909629a3ab017cc81c3e36264e7acfc8 | 2022-07-28T09:28:17.000Z | [
"pytorch",
"roberta",
"multiple-choice",
"transformers"
]
| multiple-choice | false | Junmai | null | Junmai/KR-Roberta-large-v1 | 7 | null | transformers | 14,710 | Entry not found |
mayank-01/finetuning-sentiment-model-3000-samples | e609afe3ddd0c79a1621c0197172544d9f91cc96 | 2022-07-28T11:10:47.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | mayank-01 | null | mayank-01/finetuning-sentiment-model-3000-samples | 7 | null | transformers | 14,711 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.88
- name: F1
type: f1
value: 0.8831168831168831
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3045
- Accuracy: 0.88
- F1: 0.8831
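A minimal inference sketch using the pipeline API (the review text is an illustrative example):
```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="mayank-01/finetuning-sentiment-model-3000-samples")

# Returns the predicted sentiment label and its score
print(classifier("A surprisingly touching film with great performances."))
```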
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
yanaiela/roberta-base-epoch_1 | 8dd1326b7449aa3c6ae14ee74f1d8a90a996dae3 | 2022-07-29T22:41:07.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_1",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_1 | 7 | null | transformers | 14,712 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_1
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 1
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable studying the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_1.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
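The checkpoint can also be loaded with the Auto classes when the raw logits are needed instead of the pipeline output; a minimal sketch (the top-k decoding at the end is our own illustration):
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained('yanaiela/roberta-base-epoch_1')
model = AutoModelForMaskedLM.from_pretrained('yanaiela/roberta-base-epoch_1')

inputs = tokenizer("Hello, I'm the <mask> RoBERTa-base language model", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Top-10 token predictions for the masked position
mask_index = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top10 = torch.topk(logits[0, mask_index], k=10, dim=-1).indices[0]
print(tokenizer.convert_ids_to_tokens(top10.tolist()))
```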
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_2 | 66731b9551d5b34dcbb9a96f868c1a060aafd29b | 2022-07-29T22:41:25.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_2",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_2 | 7 | null | transformers | 14,713 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_2
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 2
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable studying the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_2.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_3 | 355e6f8bab81916249e079be7f55f89d5cfaed65 | 2022-07-29T22:41:43.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_3",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_3 | 7 | null | transformers | 14,714 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_3
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 3
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable studying the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_3.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_4 | 57c8dca5a2748adc371bda426f04d28fcd3fb00f | 2022-07-29T22:42:03.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_4",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_4 | 7 | null | transformers | 14,715 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_4
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 4
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable studying the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_4.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_5 | 5021242ed12778e6ed58fb1ef32b1335088dc3df | 2022-07-29T22:42:26.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_5",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_5 | 7 | null | transformers | 14,716 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_5
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 5
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable studying the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_5.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_6 | c1e332286992744dc880fd3c2326e2b0fc1e43f4 | 2022-07-29T22:42:43.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_6",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_6 | 7 | null | transformers | 14,717 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_6
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 6
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable studying the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_6.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_10 | cb4be1c7af712dcb595f02a39e6f3a0f554ee2d6 | 2022-07-29T22:43:59.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_10",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_10 | 7 | null | transformers | 14,718 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_10
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 10
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable studying the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_10.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_11 | 6e445228d52beb07a1bb8d811477d0a5c6526a49 | 2022-07-29T22:44:17.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_11",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_11 | 7 | null | transformers | 14,719 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_11
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 11
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable studying the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_11.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_14 | 8e81a939250661666b576b815426f40f4ad25624 | 2022-07-29T22:45:11.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_14",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_14 | 7 | null | transformers | 14,720 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_14
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 14
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable studying the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_14.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_15 | 88bd78ed1c4aa49874ee9c1b318484e5c5b92407 | 2022-07-29T22:45:30.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_15",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_15 | 7 | null | transformers | 14,721 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_15
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 15
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable studying the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_15.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_16 | 0f759040d18260a1efa632220ca8a3cca9017408 | 2022-07-29T22:45:49.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_16",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_16 | 7 | null | transformers | 14,722 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_16
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 16
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable studying the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_16.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_17 | 3c06592362e380e31a77031134b46cf514e720af | 2022-07-29T22:46:08.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_17",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_17 | 7 | null | transformers | 14,723 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_17
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 17
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable studying the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_17.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_18 | e13ee72ba99904ad73cc6757ad57786f5a92243e | 2022-07-29T22:46:26.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_18",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_18 | 7 | null | transformers | 14,724 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_18
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 18
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable studying the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_18.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_19 | 07ce83a380660996b04a7a46fdfec28ae06a43b0 | 2022-07-29T22:46:46.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_19",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_19 | 7 | null | transformers | 14,725 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_19
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 19
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable studying the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_19.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_20 | 955181e8ca4200bbe6d5fd278c61eb0d54d044db | 2022-07-29T22:47:06.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_20",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_20 | 7 | null | transformers | 14,726 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_20
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 20
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable studying the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_20.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_20', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
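Since all checkpoints share the same architecture and tokenizer, one way to study training dynamics is to score the same prompt with several epochs and compare the top predictions. The snippet below is a minimal sketch; the chosen epochs are an arbitrary subset of the released checkpoints:
```python
from transformers import pipeline

prompt = "Hello, I'm the <mask> RoBERTa-base language model"

# Compare fill-mask predictions across a few of the released checkpoints.
for epoch in (20, 46, 83):
    unmasker = pipeline(
        "fill-mask",
        model=f"yanaiela/roberta-base-epoch_{epoch}",
        device=-1,
        top_k=3,
    )
    top_tokens = [prediction["token_str"] for prediction in unmasker(prompt)]
    print(f"epoch {epoch}: {top_tokens}")
```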
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_24 | c3e82803379c65f7ead75df3f7727a72b63ab81e | 2022-07-29T22:48:18.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_24",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_24 | 7 | null | transformers | 14,727 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_24
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 24
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, and other possible use cases, can be studied.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_24.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as these corpora are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_24', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_27 | d257b2730ca267be90b008f26e1e43350b550c77 | 2022-07-29T22:49:16.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_27",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_27 | 7 | null | transformers | 14,728 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_27
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 27
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, and other possible use cases, can be studied.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_27.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as these corpora are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_27', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_28 | 53d4cd2df4930ada4e5221cb4f57570bf72704a0 | 2022-07-29T22:49:33.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_28",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_28 | 7 | null | transformers | 14,729 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_28
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 28
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, and other possible use cases, can be studied.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_28.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as these corpora are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_28', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_30 | c09f9481fbe5db416d890b9abe1670dd4ac70472 | 2022-07-29T22:50:11.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_30",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_30 | 7 | null | transformers | 14,730 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_30
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 30
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, and other possible use cases, can be studied.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_30.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as these corpora are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_30', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_36 | 6ae65571f12df5c22e6240eb35e1e6554e07b647 | 2022-07-29T22:52:02.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_36",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_36 | 7 | null | transformers | 14,731 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_36
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 36
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, and other possible use cases, can be studied.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_36.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as these corpora are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_36', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_37 | b686192ce690730bed2dc3b94c567cf7cd056b05 | 2022-07-29T22:52:26.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_37",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_37 | 7 | null | transformers | 14,732 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_37
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 37
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, and other possible use cases, can be studied.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_37.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as these corpora are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_37', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_40 | f519fef8374477cc374b879526e3b552667c2ab3 | 2022-07-29T22:53:26.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_40",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_40 | 7 | null | transformers | 14,733 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_40
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 40
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, and other possible use cases, can be studied.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_40.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as these corpora are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_40', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_46 | f4fa41e5fc26add2b51e4a0b29a178ab683e8e3c | 2022-07-29T22:55:54.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_46",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_46 | 7 | null | transformers | 14,734 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_46
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 46
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, and other possible use cases, can be studied.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_46.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as these corpora are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_46', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_52 | 0dcdfb9177d0e3e3fe737310ec11342ee2f37682 | 2022-07-29T22:58:22.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_52",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_52 | 7 | null | transformers | 14,735 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_52
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 52
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, and other possible use cases, can be studied.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_52.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as these corpora are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_52', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_54 | de72ccb4af0d5ad0d62a61addb4d528b5f929290 | 2022-07-29T22:59:09.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_54",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_54 | 7 | null | transformers | 14,736 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_54
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 54
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, and other possible use cases, can be studied.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_54.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as these corpora are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_54', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_65 | c4692122bca4c09a067688e3f75711aef03ee877 | 2022-07-29T23:03:14.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_65",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_65 | 7 | null | transformers | 14,737 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_65
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 65
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, and other possible use cases, can be studied.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_65.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as these corpora are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_65', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_71 | 91cf97511a003e4eb56fd96239fd3d767eb6aaa3 | 2022-07-29T23:05:36.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_71",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_71 | 7 | null | transformers | 14,738 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_71
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 71
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, and other possible use cases, can be studied.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_71.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as these corpora are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_71', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_72 | 23257d36c0870a9a20db58f6247f2b9aa039d01b | 2022-07-29T23:05:57.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_72",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_72 | 7 | null | transformers | 14,739 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_72
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 72
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, and other possible use cases, can be studied.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_72.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as these corpora are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_72', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
huggingtweets/ottorothmund | 94fe2feed89d4e7afc1d4e33e7726d1b160b69e2 | 2022-07-29T21:17:19.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/ottorothmund | 7 | null | transformers | 14,740 | ---
language: en
thumbnail: http://www.huggingtweets.com/ottorothmund/1659129434156/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1538410659125264386/l3bzndr9_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Otto☀︎Rothmund</div>
<div style="text-align: center; font-size: 14px;">@ottorothmund</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Otto☀︎Rothmund.
| Data | Otto☀︎Rothmund |
| --- | --- |
| Tweets downloaded | 3196 |
| Retweets | 643 |
| Short tweets | 818 |
| Tweets kept | 1735 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/799fj72y/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ottorothmund's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3auf5ulc) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3auf5ulc/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ottorothmund')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
arvkevi/output | 2db7d01dabab9ebd5d26b8812244fa825b696aab | 2022-07-29T02:49:09.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-generation | false | arvkevi | null | arvkevi/output | 7 | null | transformers | 14,741 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
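For illustration, the listed settings map roughly onto the following `TrainingArguments`. This is a hedged reconstruction, not the actual training script; the output directory name and anything not listed above are assumptions or library defaults:
```python
from transformers import TrainingArguments

# Approximate reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="output",                 # assumed from the model name
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```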
### Training results
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
sundaresanms96/finetuning-sentiment-bert-base-cased | 2cce7cba123c90b3dceda932c1540f60a46d89e0 | 2022-07-29T04:27:23.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | sundaresanms96 | null | sundaresanms96/finetuning-sentiment-bert-base-cased | 7 | null | transformers | 14,742 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-bert-base-cased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-bert-base-cased
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3414
- Accuracy: 0.8846
- F1: 0.9176
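For quick inference, the checkpoint can be loaded with the standard pipeline API. This is a minimal sketch; the meaning of the returned labels depends on the (undocumented) training configuration:
```python
from transformers import pipeline

# Load the fine-tuned sentiment classifier from the Hub.
classifier = pipeline(
    "text-classification",
    model="sundaresanms96/finetuning-sentiment-bert-base-cased",
)

# Label names (e.g. LABEL_0 / LABEL_1) are not documented in this card.
print(classifier("The movie was surprisingly good."))
```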
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|
abdouaziiz/pretraining2 | 4d5081b35a28b4d5a29fc010ef4532e6e92bba69 | 2022-07-29T12:11:32.000Z | [
"pytorch",
"wav2vec2",
"feature-extraction",
"transformers"
]
| feature-extraction | false | abdouaziiz | null | abdouaziiz/pretraining2 | 7 | null | transformers | 14,743 | Entry not found |
alfredcs/vit-cifar-10-2 | efba011fa8fe039476a975a7db723ceb13566db2 | 2022-07-29T21:44:22.000Z | [
"pytorch",
"vit",
"image-classification",
"transformers",
"license:apache-2.0"
]
| image-classification | false | alfredcs | null | alfredcs/vit-cifar-10-2 | 7 | null | transformers | 14,744 | ---
license: apache-2.0
---
|
AIDA-UPM/MSTSb_paraphrase-multilingual-MiniLM-L12-v2 | 28fc8ed60064fd7984e0feebafec601426ce14cc | 2021-07-21T18:02:53.000Z | [
"pytorch",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"transformers"
]
| sentence-similarity | false | AIDA-UPM | null | AIDA-UPM/MSTSb_paraphrase-multilingual-MiniLM-L12-v2 | 6 | null | sentence-transformers | 14,745 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# AIDA-UPM/MSTSb_paraphrase-multilingual-MiniLM-L12-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('AIDA-UPM/MSTSb_paraphrase-multilingual-MiniLM-L12-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('AIDA-UPM/MSTSb_paraphrase-multilingual-MiniLM-L12-v2')
model = AutoModel.from_pretrained('AIDA-UPM/MSTSb_paraphrase-multilingual-MiniLM-L12-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=AIDA-UPM/MSTSb_paraphrase-multilingual-MiniLM-L12-v2)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1438 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 2,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 288,
"weight_decay": 0.05
}
```
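Read back into code, the parameters above correspond roughly to the following sentence-transformers training call. This is a hedged sketch for illustration only: the starting checkpoint, the multilingual STS-B data loading, and the dummy examples below are assumptions that are not documented in this card.
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses, evaluation

# Dummy examples standing in for the (undocumented) multilingual STS-B training data.
train_samples = [
    InputExample(texts=["A man is playing a guitar.", "A person plays guitar."], label=0.9),
    InputExample(texts=["A dog runs in the park.", "The stock market fell today."], label=0.1),
]
dev_samples = train_samples  # placeholder only

# Assumption: fine-tuning started from the public multilingual MiniLM checkpoint.
model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")

train_dataloader = DataLoader(train_samples, shuffle=True, batch_size=64)
train_loss = losses.CosineSimilarityLoss(model)
evaluator = evaluation.EmbeddingSimilarityEvaluator.from_input_examples(dev_samples, name="sts-dev")

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    evaluator=evaluator,
    epochs=2,
    evaluation_steps=1000,
    warmup_steps=288,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.05,
    max_grad_norm=1,
)
```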
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
AIDA-UPM/MSTSb_paraphrase-xlm-r-multilingual-v1 | fc4842f09face5ffc4fca4fa56f2651991023b1e | 2021-11-07T08:25:22.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
]
| sentence-similarity | false | AIDA-UPM | null | AIDA-UPM/MSTSb_paraphrase-xlm-r-multilingual-v1 | 6 | null | sentence-transformers | 14,746 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# AIDA-UPM/MSTSb_paraphrase-xlm-r-multilingual-v1
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('AIDA-UPM/MSTSb_paraphrase-xlm-r-multilingual-v1')
embeddings = model.encode(sentences)
print(embeddings)
```
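For semantic search or clustering, the resulting embeddings are usually compared with cosine similarity; a short illustrative follow-up:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('AIDA-UPM/MSTSb_paraphrase-xlm-r-multilingual-v1')

embeddings = model.encode(
    ["This is an example sentence", "Each sentence is converted"],
    convert_to_tensor=True,
)
# Cosine similarity between the two sentence embeddings.
print(util.pytorch_cos_sim(embeddings[0], embeddings[1]))
```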
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('AIDA-UPM/MSTSb_paraphrase-xlm-r-multilingual-v1')
model = AutoModel.from_pretrained('AIDA-UPM/MSTSb_paraphrase-xlm-r-multilingual-v1')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=AIDA-UPM/MSTSb_paraphrase-xlm-r-multilingual-v1)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1438 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 2,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 4e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 288,
"weight_decay": 0.1
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Aimendo/autonlp-triage-35248482 | 528497d7ac1681046865517f84c8792e878da274 | 2021-11-23T08:03:14.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:Aimendo/autonlp-data-triage",
"transformers",
"autonlp",
"co2_eq_emissions"
]
| text-classification | false | Aimendo | null | Aimendo/autonlp-triage-35248482 | 6 | null | transformers | 14,747 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- Aimendo/autonlp-data-triage
co2_eq_emissions: 7.989144645413398
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 35248482
- CO2 Emissions (in grams): 7.989144645413398
## Validation Metrics
- Loss: 0.13783401250839233
- Accuracy: 0.9728654124457308
- Macro F1: 0.949537871674076
- Micro F1: 0.9728654124457308
- Weighted F1: 0.9732422812610365
- Macro Precision: 0.9380372699332605
- Micro Precision: 0.9728654124457308
- Weighted Precision: 0.974548513256663
- Macro Recall: 0.9689346153591594
- Micro Recall: 0.9728654124457308
- Weighted Recall: 0.9728654124457308
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Aimendo/autonlp-triage-35248482
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Aimendo/autonlp-triage-35248482", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Aimendo/autonlp-triage-35248482", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
Aleksandar/electra-srb-ner | 4b51dbcba35be4d5c1eac751f977d4985832c1b7 | 2021-09-07T22:28:30.000Z | [
"pytorch",
"electra",
"token-classification",
"dataset:wikiann",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| token-classification | false | Aleksandar | null | Aleksandar/electra-srb-ner | 6 | null | transformers | 14,748 | ---
tags:
- generated_from_trainer
datasets:
- wikiann
metrics:
- precision
- recall
- f1
- accuracy
model_index:
- name: electra-srb-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wikiann
type: wikiann
args: sr
metric:
name: Accuracy
type: accuracy
value: 0.9568394937134688
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-srb-ner
This model was trained from scratch on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3406
- Precision: 0.8934
- Recall: 0.9087
- F1: 0.9010
- Accuracy: 0.9568
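For inference, the fine-tuned checkpoint can be used with the token-classification pipeline. A minimal sketch (the example sentence is Serbian, matching the `sr` split of WikiANN; entity labels follow the WikiANN tag set):
```python
from transformers import pipeline

# Group sub-word tokens into whole entity spans.
ner = pipeline(
    "token-classification",
    model="Aleksandar/electra-srb-ner",
    aggregation_strategy="simple",
)

print(ner("Novak Đoković je rođen u Beogradu."))
```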
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3686 | 1.0 | 625 | 0.2108 | 0.8326 | 0.8494 | 0.8409 | 0.9335 |
| 0.1886 | 2.0 | 1250 | 0.1784 | 0.8737 | 0.8713 | 0.8725 | 0.9456 |
| 0.1323 | 3.0 | 1875 | 0.1805 | 0.8654 | 0.8870 | 0.8760 | 0.9468 |
| 0.0675 | 4.0 | 2500 | 0.2018 | 0.8736 | 0.8880 | 0.8807 | 0.9502 |
| 0.0425 | 5.0 | 3125 | 0.2162 | 0.8818 | 0.8945 | 0.8881 | 0.9512 |
| 0.0343 | 6.0 | 3750 | 0.2492 | 0.8790 | 0.8928 | 0.8859 | 0.9513 |
| 0.0253 | 7.0 | 4375 | 0.2562 | 0.8821 | 0.9006 | 0.8912 | 0.9525 |
| 0.0142 | 8.0 | 5000 | 0.2788 | 0.8807 | 0.9013 | 0.8909 | 0.9524 |
| 0.0114 | 9.0 | 5625 | 0.2793 | 0.8861 | 0.9002 | 0.8931 | 0.9534 |
| 0.0095 | 10.0 | 6250 | 0.2967 | 0.8887 | 0.9034 | 0.8960 | 0.9550 |
| 0.008 | 11.0 | 6875 | 0.2993 | 0.8899 | 0.9067 | 0.8982 | 0.9556 |
| 0.0048 | 12.0 | 7500 | 0.3215 | 0.8887 | 0.9038 | 0.8962 | 0.9545 |
| 0.0034 | 13.0 | 8125 | 0.3242 | 0.8897 | 0.9068 | 0.8982 | 0.9554 |
| 0.003 | 14.0 | 8750 | 0.3311 | 0.8884 | 0.9085 | 0.8983 | 0.9559 |
| 0.0025 | 15.0 | 9375 | 0.3383 | 0.8943 | 0.9062 | 0.9002 | 0.9562 |
| 0.0011 | 16.0 | 10000 | 0.3346 | 0.8941 | 0.9112 | 0.9026 | 0.9574 |
| 0.0015 | 17.0 | 10625 | 0.3362 | 0.8944 | 0.9081 | 0.9012 | 0.9567 |
| 0.001 | 18.0 | 11250 | 0.3464 | 0.8877 | 0.9100 | 0.8987 | 0.9559 |
| 0.0012 | 19.0 | 11875 | 0.3415 | 0.8944 | 0.9089 | 0.9016 | 0.9568 |
| 0.0005 | 20.0 | 12500 | 0.3406 | 0.8934 | 0.9087 | 0.9010 | 0.9568 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0
- Datasets 1.11.0
- Tokenizers 0.10.1
|
Alireza1044/albert-base-v2-qnli | 69a25402725c8a1bfac50d24d3cbd0e779918061 | 2021-07-27T15:39:56.000Z | [
"pytorch",
"tensorboard",
"albert",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
]
| text-classification | false | Alireza1044 | null | Alireza1044/albert-base-v2-qnli | 6 | null | transformers | 14,749 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model_index:
- name: qnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QNLI
type: glue
args: qnli
metric:
name: Accuracy
type: accuracy
value: 0.9137836353651839
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qnli
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3608
- Accuracy: 0.9138
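For inference, QNLI examples are question–sentence pairs. Below is a hedged sketch of scoring one pair; the label mapping follows whatever is stored in the model config and is not documented in this card:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Alireza1044/albert-base-v2-qnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

question = "Where were the first modern Olympic Games held?"
sentence = "The first modern Olympics took place in Athens in 1896."

inputs = tokenizer(question, sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# May print generic names such as LABEL_0 / LABEL_1 if id2label was not customised.
print(model.config.id2label[logits.argmax(dim=-1).item()])
```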
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Training results
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
|
AlirezaBaneshi/testPersianQA | b9da3f9d0312318991945208d445a20fce7a4ccd | 2022-02-01T13:39:46.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | AlirezaBaneshi | null | AlirezaBaneshi/testPersianQA | 6 | null | transformers | 14,750 | Entry not found |
AnjanBiswas/distilbert-base-uncased-finetuned-emotion | 33e597c5cb4a1c2046b7741056a9497105685e92 | 2022-01-31T23:42:22.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | AnjanBiswas | null | AnjanBiswas/distilbert-base-uncased-finetuned-emotion | 6 | null | transformers | 14,751 | Entry not found |
AnonymousSub/bert-base-uncased_wikiqa | 43a114c09759a600368db50d5094ea587c3c68d9 | 2022-01-22T20:38:57.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | AnonymousSub | null | AnonymousSub/bert-base-uncased_wikiqa | 6 | null | transformers | 14,752 | Entry not found |
AnonymousSub/consert-s10-AR | 3c2b8a30b19b8e74e1858fed20ced000c42e3770 | 2021-10-03T05:06:10.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | AnonymousSub | null | AnonymousSub/consert-s10-AR | 6 | null | transformers | 14,753 | Entry not found |
AnonymousSub/dummy_2 | 38af22d8f06ff98662dc03dedc611f445857bfa7 | 2021-11-03T07:34:41.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | AnonymousSub | null | AnonymousSub/dummy_2 | 6 | null | transformers | 14,754 | Entry not found |
AnonymousSub/hier_triplet_epochs_1_shard_1 | 2a60119e45b3a6e5cd3ea5e03590e9000b6c8db6 | 2021-12-22T16:56:47.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | false | AnonymousSub | null | AnonymousSub/hier_triplet_epochs_1_shard_1 | 6 | null | transformers | 14,755 | Entry not found |
AnonymousSub/rule_based_hier_triplet_epochs_1_shard_10 | 34dd9cc580038f48ee95bebe45ec7d6d8bc0ce4e | 2022-01-04T08:18:54.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_hier_triplet_epochs_1_shard_10 | 6 | null | transformers | 14,756 | Entry not found |
AnonymousSub/rule_based_roberta_twostage_quadruplet_epochs_1_shard_1_wikiqa | e1f6c2991af087ba5a3285b7c49961781447c26c | 2022-01-23T08:46:33.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_twostage_quadruplet_epochs_1_shard_1_wikiqa | 6 | null | transformers | 14,757 | Entry not found |
AnonymousSub/specter-bert-model_copy_wikiqa | dad9825780bf7781cb6fb67ebacf92feaa1d92cc | 2022-01-23T06:47:13.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | AnonymousSub | null | AnonymousSub/specter-bert-model_copy_wikiqa | 6 | null | transformers | 14,758 | Entry not found |
Anthos23/my-awesome-model | 596590fd6afbef20c4a6c557c8f372d3f6eb17f3 | 2022-02-23T11:09:10.000Z | [
"pytorch",
"tf",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | Anthos23 | null | Anthos23/my-awesome-model | 6 | null | transformers | 14,759 | Entry not found |
Apoorva/k2t-test | cb1c4e229319e7474f06eec357e8c7e392ae7293 | 2022-01-06T19:38:56.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"transformers",
"keytotext",
"k2t",
"Keywords to Sentences",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | Apoorva | null | Apoorva/k2t-test | 6 | null | transformers | 14,760 | ---
language: "en"
thumbnail: "Keywords to Sentences"
tags:
- keytotext
- k2t
- Keywords to Sentences
model-index:
- name: k2t-test
---
The idea is to build a model that takes keywords as inputs and generates sentences as outputs.
Potential use cases include:
- Marketing
- Search Engine Optimization
- Topic generation etc.
- Fine tuning of topic modeling models |
ArtemisZealot/DialoGTP-small-Qkarin | a56be3b66c7c316dd9415223453bf105dfb82cc7 | 2021-09-05T17:58:42.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | ArtemisZealot | null | ArtemisZealot/DialoGTP-small-Qkarin | 6 | null | transformers | 14,761 | ---
tags:
- conversational
---
#Okarin Bot |
Ayran/DialoGPT-small-gandalf | 736ddce565170d127d421e6eccaf232d71b30aa2 | 2021-09-10T17:27:16.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | Ayran | null | Ayran/DialoGPT-small-gandalf | 6 | null | transformers | 14,762 | ---
tags:
- conversational
---
# Gandalf DialoGPT Model |
Azaghast/GPT2-SCP-ContainmentProcedures | facbe5cc1a5b12f3b4ae59474c3a026ab7a91dc9 | 2022-06-08T23:37:16.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | Azaghast | null | Azaghast/GPT2-SCP-ContainmentProcedures | 6 | null | transformers | 14,763 | Entry not found |
BigSalmon/ParaphraseParentheses | 710ccb1964704511e341e033b71f10502d94c75e | 2021-12-13T00:11:39.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | BigSalmon | null | BigSalmon/ParaphraseParentheses | 6 | null | transformers | 14,764 | This can be used to paraphrase. I recommend using the code I have attached below. You can generate it without using LogProbs, but you are likely to be best served by manually examining the most likely outputs.
If this interests you, check out https://huggingface.co/BigSalmon/MrLincoln12 or my other MrLincoln repos.
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/Parentheses")
```
Example Prompt:
```
***
The Milwaukee Bucks are for sure in title contention.
The Milwaukee Bucks are ( virtually assured / all but certain to / on the cusp of / well positioned for ) title contention.
***
Discord is an up-and-coming platform, attracting people from all walks of life.
Discord is ( an / a ) ( up-and-coming platform / platform in the ascendant / medium on the rise ), ( drawing in / wooing / winning over ) ( people / individuals / consumers / audiences ) from all ( walks of life / corners of the universe / horizons )...
***
HuggingFace is an amazing company.
HuggingFace is an (
```
```
import torch

# Pick a device and move the model to it (the prompt tensor is moved to the same device below).
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

prompt = "Insert Your Prompt Here. It is Best To Have a Few Examples Before Like The Example Prompt Shows."
text = tokenizer.encode(prompt)
myinput, past_key_values = torch.tensor([text]).to(device), None
logits, past_key_values = model(myinput, past_key_values=past_key_values, return_dict=False)
logits = logits[0, -1]
probabilities = torch.nn.functional.softmax(logits, dim=-1)
best_logits, best_indices = logits.topk(500)
best_words = [tokenizer.decode([idx.item()]) for idx in best_indices]
text.append(best_indices[0].item())
best_probabilities = probabilities[best_indices].tolist()
words = []
for i in range(500):
    m = str([best_words[i]])
    m = m.replace("[' ", "").replace("']", "")
    print(m)
``` |
Blaine-Mason/hackMIT-finetuned-sst2 | 9aeeeb2b57a12d0277cb5fdf4825791caf893886 | 2021-08-25T00:31:45.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer"
]
| text-classification | false | Blaine-Mason | null | Blaine-Mason/hackMIT-finetuned-sst2 | 6 | null | transformers | 14,765 | ---
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model_index:
- name: hackMIT-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metric:
name: Accuracy
type: accuracy
value: 0.8027522935779816
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hackMIT-finetuned-sst2
This model is a fine-tuned version of [Blaine-Mason/hackMIT-finetuned-sst2](https://huggingface.co/Blaine-Mason/hackMIT-finetuned-sst2) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1086
- Accuracy: 0.8028
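A minimal inference sketch, assuming a recent version of the Transformers pipeline API (`top_k=None` replaces the older `return_all_scores=True`); label names depend on the checkpoint config:
```python
from transformers import pipeline

# Hypothetical usage sketch; returns scores for both SST-2 classes.
classifier = pipeline("text-classification", model="Blaine-Mason/hackMIT-finetuned-sst2", top_k=None)
print(classifier("This movie was surprisingly good."))
```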
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.033238621168611e-06
- train_batch_size: 16
- eval_batch_size: 8
- seed: 30
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0674 | 1.0 | 4210 | 1.1086 | 0.8028 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
CLAck/en-vi | e0bcfb55e1c8489d0e11fdfe4208654dd767e64e | 2022-02-15T11:28:50.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"vi",
"dataset:ALT",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | CLAck | null | CLAck/en-vi | 6 | null | transformers | 14,766 | ---
language:
- en
- vi
tags:
- translation
license: apache-2.0
datasets:
- ALT
metrics:
- sacrebleu
---
This is a fine-tuned version of a MarianMT model pretrained on English-Chinese. The target language pair is English-Vietnamese.
The first phase of training (mixed) is performed on a dataset containing both English-Chinese and English-Vietnamese sentences.
The second phase of training (pure) is performed on a dataset containing only English-Vietnamese sentences.
### Example
```
%%capture
!pip install transformers transformers[sentencepiece]
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
# Download the pretrained model for English-Vietnamese available on the hub
model = AutoModelForSeq2SeqLM.from_pretrained("CLAck/en-vi")
tokenizer = AutoTokenizer.from_pretrained("CLAck/en-vi")
# Download a tokenizer that can tokenize English since the model Tokenizer doesn't know anymore how to do it
# We used the one coming from the initial model
# This tokenizer is used to tokenize the input sentence
tokenizer_en = AutoTokenizer.from_pretrained('Helsinki-NLP/opus-mt-en-zh')
# These special tokens are needed to reproduce the original tokenizer
tokenizer_en.add_tokens(["<2zh>", "<2vi>"], special_tokens=True)
sentence = "The cat is on the table"
# This token is needed to identify the target language
input_sentence = "<2vi> " + sentence
translated = model.generate(**tokenizer_en(input_sentence, return_tensors="pt", padding=True))
output_sentence = [tokenizer.decode(t, skip_special_tokens=True) for t in translated]
```
### Training results
MIXED
| Epoch | Bleu |
|:-----:|:-------:|
| 1.0 | 26.2407 |
| 2.0 | 32.6016 |
| 3.0 | 35.4060 |
| 4.0 | 36.6737 |
| 5.0 | 37.3774 |
PURE
| Epoch | Bleu |
|:-----:|:-------:|
| 1.0 | 37.3169 |
| 2.0 | 37.4407 |
| 3.0 | 37.6696 |
| 4.0 | 37.8765 |
| 5.0 | 38.0105 |
|
CLTL/icf-levels-fac | ae01ad6e59c75306d3540c9e5801d90f2fd19f26 | 2021-11-08T12:13:55.000Z | [
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
]
| text-classification | false | CLTL | null | CLTL/icf-levels-fac | 6 | 1 | transformers | 14,767 | ---
language: nl
license: mit
pipeline_tag: text-classification
inference: false
---
# Regression Model for Walking Functioning Levels (ICF d550)
## Description
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing walking functions. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about walking functions in clinical text in Dutch, use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model.
## Functioning levels
Level | Meaning
---|---
5 | Patient can walk independently anywhere: level surface, uneven surface, slopes, stairs.
4 | Patient can walk independently on level surface but requires help on stairs, inclines, uneven surface; or, patient can walk independently, but the walking is not fully normal.
3 | Patient requires verbal supervision for walking, without physical contact.
2 | Patient needs continuous or intermittent support of one person to help with balance and coordination.
1 | Patient needs firm continuous support from one person who helps carrying weight and with balance.
0 | Patient cannot walk or needs help from two or more people; or, patient walks on a treadmill.
The predictions generated by the model might sometimes be outside of the scale (e.g. 5.2); this is normal in a regression model.
## Intended uses and limitations
- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
## How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```
import numpy as np
from simpletransformers.classification import ClassificationModel
model = ClassificationModel(
'roberta',
'CLTL/icf-levels-fac',
use_cuda=False,
)
example = 'kan nog goed traplopen, maar flink ingeleverd aan conditie na Corona'
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
```
The prediction on the example is:
```
4.2
```
The raw outputs look like this:
```
[[4.20903111]]
```
## Training data
- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).
## Training procedure
The default training parameters of Simple Transformers were used, including:
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8
## Evaluation results
The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals).
| | Sentence-level | Note-level
|---|---|---
mean absolute error | 0.70 | 0.66
mean squared error | 0.91 | 0.93
root mean squared error | 0.95 | 0.96
## Authors and references
### Authors
Jenia Kim, Piek Vossen
### References
TBD
|
CLTL/icf-levels-stm | a7a43451a2f7f1eddf758f7d98ec36d853a7eaed | 2021-11-08T12:26:53.000Z | [
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
]
| text-classification | false | CLTL | null | CLTL/icf-levels-stm | 6 | 1 | transformers | 14,768 | ---
language: nl
license: mit
pipeline_tag: text-classification
inference: false
---
# Regression Model for Emotional Functioning Levels (ICF b152)
## Description
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing emotional functions. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about emotional functions in clinical text in Dutch, use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model.
## Functioning levels
Level | Meaning
---|---
4 | No problem with emotional functioning: emotions are appropriate, well regulated, etc.
3 | Slight problem with emotional functioning: irritable, gloomy, etc.
2 | Moderate problem with emotional functioning: negative emotions, such as fear, anger, sadness, etc.
1 | Severe problem with emotional functioning: intense negative emotions, such as fear, anger, sadness, etc.
0 | Flat affect, apathy, unstable, inappropriate emotions.
The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model.
## Intended uses and limitations
- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
## How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```
import numpy as np
from simpletransformers.classification import ClassificationModel
model = ClassificationModel(
'roberta',
'CLTL/icf-levels-stm',
use_cuda=False,
)
example = 'Naarmate het somatische beeld een herstellende trend laat zien, valt op dat patient zich depressief en suicidaal uit.'
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
```
The prediction on the example is:
```
1.60
```
The raw outputs look like this:
```
[[1.60418844]]
```
## Training data
- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).
## Training procedure
The default training parameters of Simple Transformers were used, including:
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8
## Evaluation results
The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals).
| | Sentence-level | Note-level
|---|---|---
mean absolute error | 0.76 | 0.68
mean squared error | 1.03 | 0.87
root mean squared error | 1.01 | 0.93
## Authors and references
### Authors
Jenia Kim, Piek Vossen
### References
TBD
|
Cameron/BERT-eec-emotion | 2fc079c57da8b927913e56c6203428f8040d8fc0 | 2021-05-18T17:25:51.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Cameron | null | Cameron/BERT-eec-emotion | 6 | null | transformers | 14,769 | Entry not found |
CenIA/albert-base-spanish-finetuned-pawsx | 21a13b010f5723a8b05f42f74fa86e03a6a108e7 | 2022-01-02T00:14:18.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | false | CenIA | null | CenIA/albert-base-spanish-finetuned-pawsx | 6 | null | transformers | 14,770 | Entry not found |
CenIA/albert-tiny-spanish-finetuned-pawsx | 6d127499bc95d590196130e706fa9e5bf9544336 | 2022-01-01T18:21:54.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | false | CenIA | null | CenIA/albert-tiny-spanish-finetuned-pawsx | 6 | null | transformers | 14,771 | Entry not found |
CenIA/bert-base-spanish-wwm-cased-finetuned-mldoc | e73c9975ad2e376ad952a967eb9897aeb94b6da7 | 2022-01-10T21:30:12.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | CenIA | null | CenIA/bert-base-spanish-wwm-cased-finetuned-mldoc | 6 | null | transformers | 14,772 | Entry not found |
CenIA/bert-base-spanish-wwm-cased-finetuned-xnli | bfc5be44427d4bfb45e6d0a169dde6e4d9a512af | 2021-12-08T21:52:32.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | CenIA | null | CenIA/bert-base-spanish-wwm-cased-finetuned-xnli | 6 | null | transformers | 14,773 | Entry not found |
CenIA/bert-base-spanish-wwm-uncased-finetuned-mldoc | 08d2dcda7bee9c05ab74e236209aad111e7f1c21 | 2022-01-11T03:27:07.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | CenIA | null | CenIA/bert-base-spanish-wwm-uncased-finetuned-mldoc | 6 | null | transformers | 14,774 | Entry not found |
CenIA/bert-base-spanish-wwm-uncased-finetuned-xnli | d81d8f24d0d3a199f2452d4f5ad2f04d8b2c6f71 | 2021-12-08T22:03:22.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | CenIA | null | CenIA/bert-base-spanish-wwm-uncased-finetuned-xnli | 6 | null | transformers | 14,775 | Entry not found |
CenIA/distillbert-base-spanish-uncased-finetuned-mldoc | 17f1dd80d1041b0d8d04f86f86d19957f3764c75 | 2022-01-11T04:07:40.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | CenIA | null | CenIA/distillbert-base-spanish-uncased-finetuned-mldoc | 6 | null | transformers | 14,776 | Entry not found |
CenIA/distillbert-base-spanish-uncased-finetuned-pos | 9a86d59ec9dbb76a25920aa6a6947ee3856e01a2 | 2021-12-18T00:48:24.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | CenIA | null | CenIA/distillbert-base-spanish-uncased-finetuned-pos | 6 | null | transformers | 14,777 | Entry not found |
Chakita/KannadaBERT | d6cbebccfd265cd7ed3cd2be0c85c36cd10e4d0e | 2021-07-17T05:37:44.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"masked-lm",
"fill-in-the-blanks",
"autotrain_compatible"
]
| fill-mask | false | Chakita | null | Chakita/KannadaBERT | 6 | null | transformers | 14,778 | ---
tags:
- masked-lm
- fill-in-the-blanks
---
RoBERTa model trained on OSCAR Kannada corpus. |
Chun/w-en2zh-otm | 7e4aa3276f77d80b9d572222e1ced3d28cf55137 | 2021-08-25T02:01:07.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Chun | null | Chun/w-en2zh-otm | 6 | null | transformers | 14,779 | Entry not found |
CleveGreen/FieldClassifier | d0b28b2b3d74a334a249a9454610a6520141a0c3 | 2021-08-17T17:43:35.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | CleveGreen | null | CleveGreen/FieldClassifier | 6 | null | transformers | 14,780 | Entry not found |
CodeNinja1126/xlm-roberta-large-kor-mrc | d5fbc13e9f8f4273002afe4d3159c641158b1153 | 2021-05-19T06:11:31.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | CodeNinja1126 | null | CodeNinja1126/xlm-roberta-large-kor-mrc | 6 | null | transformers | 14,781 | Entry not found |
Corvus/DialoGPT-medium-CaptainPrice | d26bc9fa8969efd15f3af399c94d2006b38d20f7 | 2021-09-20T10:17:14.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | Corvus | null | Corvus/DialoGPT-medium-CaptainPrice | 6 | null | transformers | 14,782 | ---
tags:
- conversational
---
# Captain Price DialoGPT Model |
CouchCat/ma_ner_v6_distil | f617320068e97c211f70b94a53c9235c2cc1b6eb | 2021-02-15T23:32:46.000Z | [
"pytorch",
"distilbert",
"token-classification",
"en",
"transformers",
"ner",
"license:mit",
"autotrain_compatible"
]
| token-classification | false | CouchCat | null | CouchCat/ma_ner_v6_distil | 6 | null | transformers | 14,783 | ---
language: en
license: mit
tags:
- ner
widget:
- text: "These shoes from Adidas fit quite well"
---
### Description
A Named Entity Recognition model trained on customer feedback data using DistilBERT.
Possible labels are:
- PRD: for certain products
- BRND: for brands
### Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("CouchCat/ma_ner_v6_distil")
model = AutoModelForTokenClassification.from_pretrained("CouchCat/ma_ner_v6_distil")
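
# A minimal inference sketch, assuming a reasonably recent transformers version
# (for `aggregation_strategy`); the entity labels (PRD, BRND) are the ones listed above.
from transformers import pipeline
ner = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(ner("These shoes from Adidas fit quite well"))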
``` |
CracklesCreeper/Piglin-Talks-Harry-Potter | cf2991534b22e9b502d43261e7a674546170a3c4 | 2021-08-28T08:46:58.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | CracklesCreeper | null | CracklesCreeper/Piglin-Talks-Harry-Potter | 6 | null | transformers | 14,784 | ---
tags:
- conversational
---
@Piglin Talks Harry Potter |
Craig/paraphrase-MiniLM-L6-v2 | 7f1c6a384b3f870bdefcb13114cd8a9d2fd06dd1 | 2021-10-13T15:01:15.000Z | [
"pytorch",
"bert",
"feature-extraction",
"arxiv:1908.10084",
"sentence-transformers",
"sentence-similarity",
"transformers",
"license:apache-2.0"
]
| feature-extraction | false | Craig | null | Craig/paraphrase-MiniLM-L6-v2 | 6 | null | sentence-transformers | 14,785 | ---
pipeline_tag: feature-extraction
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sentence-transformers/paraphrase-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
This is a clone of the original model, with `pipeline_tag` metadata changed to `feature-extraction`, so it can just return the embedded vector. Otherwise it is unchanged.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/paraphrase-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-MiniLM-L6-v2')
model = AutoModel.from_pretrained('sentence-transformers/paraphrase-MiniLM-L6-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/paraphrase-MiniLM-L6-v2)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
Cthyllax/DialoGPT-medium-PaladinDanse | a619aacbda5d86066008cb36a508938765dbce7e | 2021-09-11T20:48:06.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | Cthyllax | null | Cthyllax/DialoGPT-medium-PaladinDanse | 6 | null | transformers | 14,786 | ---
tags:
- conversational
---
# Paladin Danse DialoGPT Model |
Davlan/mt5_base_yor_eng_mt | 82538184f9fc49759f3e0e6dd8f1f8cdde2909b9 | 2021-06-23T14:51:23.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"yo",
"en",
"dataset:JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Davlan | null | Davlan/mt5_base_yor_eng_mt | 6 | null | transformers | 14,787 | Hugging Face's logo
---
language:
- yo
- en
datasets:
- JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
---
# mT5_base_yor_eng_mt
## Model description
**mT5_base_yor_eng_mt** is a **machine translation** model from the Yorùbá language to English, based on a fine-tuned mT5-base model. It establishes a **strong baseline** for automatically translating texts from Yorùbá to English.
Specifically, this model is an *mT5_base* model that was fine-tuned on the JW300 Yorùbá corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt).
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for MT.
```python
from transformers import MT5ForConditionalGeneration, T5Tokenizer
model = MT5ForConditionalGeneration.from_pretrained("Davlan/mt5_base_yor_eng_mt")
tokenizer = T5Tokenizer.from_pretrained("google/mt5-base")
input_string = "Akọni ajìjàgbara obìnrin tó sun àtìmalé torí owó orí"
inputs = tokenizer.encode(input_string, return_tensors="pt")
generated_tokens = model.generate(inputs)
results = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(results)
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on the JW300 Yorùbá corpus and the [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset.
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (BLEU score)
15.57 BLEU on [Menyo-20k test set](https://arxiv.org/abs/2103.08647)
### BibTeX entry and citation info
By David Adelani
```
```
|
DeepPavlov/marianmt-tatoeba-ruen | d9754a73a104462dc1345e0f26b75ed1e82ec041 | 2021-12-20T11:56:37.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | DeepPavlov | null | DeepPavlov/marianmt-tatoeba-ruen | 6 | null | transformers | 14,788 | Entry not found |
DongHyoungLee/distilbert-base-uncased-finetuned-cola | 3871872331915aa2d65391d097753a16de3e19cb | 2021-10-16T11:30:42.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | DongHyoungLee | null | DongHyoungLee/distilbert-base-uncased-finetuned-cola | 6 | null | transformers | 14,789 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.535587402888147
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7335
- Matthews Correlation: 0.5356
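A minimal inference sketch, assuming the usual single-sentence CoLA setup in which the two output classes correspond to unacceptable/acceptable (check the checkpoint config for the label order):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical usage sketch for the checkpoint described above.
model_id = "DongHyoungLee/distilbert-base-uncased-finetuned-cola"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("The book was read by the whole class.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)
print(probs)
```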
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5309 | 1.0 | 535 | 0.5070 | 0.4239 |
| 0.3568 | 2.0 | 1070 | 0.5132 | 0.4913 |
| 0.24 | 3.0 | 1605 | 0.6081 | 0.4990 |
| 0.1781 | 4.0 | 2140 | 0.7335 | 0.5356 |
| 0.1243 | 5.0 | 2675 | 0.8705 | 0.5242 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
Doogie/Waynehills-STT-doogie-server | 817db5ade1395b4e7b4e0fadf43cbf41ac1006c0 | 2022-01-06T17:18:49.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | Doogie | null | Doogie/Waynehills-STT-doogie-server | 6 | null | transformers | 14,790 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
name: Waynehills-STT-doogie-server
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Waynehills-STT-doogie-server
This model is a fine-tuned version of [Doogie/Waynehills-STT-doogie-server](https://huggingface.co/Doogie/Waynehills-STT-doogie-server) on an unknown dataset.
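A minimal inference sketch, assuming the checkpoint is a standard Wav2Vec2 CTC model and that the input is a 16 kHz mono audio file; the path below is a placeholder:
```python
from transformers import pipeline

# Hypothetical usage sketch; decoding an audio file this way requires ffmpeg to be installed.
asr = pipeline("automatic-speech-recognition", model="Doogie/Waynehills-STT-doogie-server")
print(asr("sample.wav")["text"])
```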
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 60
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
|
DoyyingFace/bert-COVID-HATE-finetuned-test | 4880150711c2b46757ba621344de3e6991300ab5 | 2022-02-10T02:42:21.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | DoyyingFace | null | DoyyingFace/bert-COVID-HATE-finetuned-test | 6 | null | transformers | 14,791 | Entry not found |
DoyyingFace/bert-cola-finetuned | bf99da87b705f17ecab36f87f94b0f708083eafd | 2022-01-22T03:26:38.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | DoyyingFace | null | DoyyingFace/bert-cola-finetuned | 6 | null | transformers | 14,792 | Entry not found |
DoyyingFace/bert-wiki-comments-finetuned | 1a297ae6489682c5eded951744afbeb57fe904f8 | 2022-01-23T17:26:54.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | DoyyingFace | null | DoyyingFace/bert-wiki-comments-finetuned | 6 | null | transformers | 14,793 | Entry not found |
ELiRF/NASCA | eaf1361bb43fc73b21d40feb13085efb0d897871 | 2022-04-29T09:18:12.000Z | [
"pytorch",
"bart",
"text2text-generation",
"ca",
"transformers",
"summarization",
"autotrain_compatible"
]
| summarization | false | ELiRF | null | ELiRF/NASCA | 6 | null | transformers | 14,794 | ---
language: ca
tags:
- summarization
widget:
- text: "La Universitat Politècnica de València (UPV), a través del projecte Atenea “plataforma de dones, art i tecnologia” i en col·laboració amb les companyies tecnològiques Metric Salad i Zetalab, ha digitalitzat i modelat en 3D per a la 35a edició del Festival Dansa València, que se celebra del 2 al 10 d'abril, la primera peça de dansa en un metaverso específic. La peça No és amor, dirigida per Lara Misó, forma part de la programació d'aquesta edició del Festival Dansa València i explora la figura geomètrica del cercle des de totes les seues perspectives: espacial, corporal i compositiva. No és amor està inspirada en el treball de l'artista japonesa Yayoi Kusama i mira de prop les diferents facetes d'una obsessió. Així dona cabuda a la insistència, la repetició, el trastorn, la hipnosi i l'alliberament. El procés de digitalització, materialitzat per Metric Salad i ZetaLab, ha sigut complex respecte a uns altres ja realitzats a causa de l'enorme desafiament que comporta el modelatge en 3D de cossos en moviment al ritme de la composició de l'obra. L'objectiu era generar una experiència el més realista possible i fidedigna de l'original perquè el resultat final fora un procés absolutament immersiu.Així, el metaverso està compost per figures modelades en 3D al costat de quatre projeccions digitalitzades en pantalles flotants amb les quals l'usuari podrà interactuar segons es vaja acostant, bé mitjançant els comandaments de l'ordinador, bé a través d'ulleres de realitat virtual. L'objectiu és que quan l'usuari s'acoste a cadascuna de les projeccions tinga la sensació d'una immersió quasi completa en fondre's amb el contingut audiovisual que li genere una experiència intimista i molt real."
---
**IMPORTANT:** On the 5th of April 2022, we detected a mistake in the configuration file; thus, the model was not generating the summaries correctly, and it was underperforming in all scenarios. For this reason, if you had used the model until that day, we would be glad if you would re-evaluate the model if you are publishing some results with it. We apologize for the inconvenience and thank you for your understanding.
# NASca and NASes: Two Monolingual Pre-Trained Models for Abstractive Summarization in Catalan and Spanish
Most of the models proposed in the literature for abstractive summarization are generally suitable for the English language but not for other languages. Multilingual models were introduced to address that language constraint, but despite their applicability being broader than that of the monolingual models, their performance is typically lower, especially for minority languages like Catalan. In this paper, we present a monolingual model for abstractive summarization of textual content in the Catalan language. The model is a Transformer encoder-decoder which is pretrained and fine-tuned specifically for the Catalan language using a corpus of newspaper articles. In the pretraining phase, we introduced several self-supervised tasks to specialize the model on the summarization task and to increase the abstractivity of the generated summaries. To study the performance of our proposal in languages with higher resources than Catalan, we replicate the model and the experimentation for the Spanish language. The usual evaluation metrics, not only the most used ROUGE measure but also other more semantic ones such as BertScore, do not allow to correctly evaluate the abstractivity of the generated summaries. In this work, we also present a new metric, called content reordering, to evaluate one of the most common characteristics of abstractive summaries, the rearrangement of the original content. We carried out an exhaustive experimentation to compare the performance of the monolingual models proposed in this work with two of the most widely used multilingual models in text summarization, mBART and mT5. The experimentation results support the quality of our monolingual models, especially considering that the multilingual models were pretrained with many more resources than those used in our models. Likewise, it is shown that the pretraining tasks helped to increase the degree of abstractivity of the generated summaries. To our knowledge, this is the first work that explores a monolingual approach for abstractive summarization both in Catalan and Spanish.
# The NASca model
News Abstractive Summarization for Catalan (NASca) is a Transformer encoder-decoder model, with the same hyper-parameters than BART, to perform summarization of Catalan news articles. It is pre-trained on a combination of several self-supervised tasks that help to increase the abstractivity of the generated summaries. Four pre-training tasks have been combined: sentence permutation, text infilling, Gap Sentence Generation, and Next Segment Generation. Catalan newspapers, the Catalan subset of the OSCAR corpus and Wikipedia articles in Catalan were used for pre-training the model (9.3GB of raw text -2.5 millions of documents-).
NASca is finetuned for the summarization task on 636.596 (document, summary) pairs from the Dataset for Automatic summarization of Catalan and Spanish newspaper Articles (DACSA).
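A minimal usage sketch, assuming the checkpoint behaves as a standard encoder-decoder summarization model (consistent with the BART architecture in the tags); the generation parameters are illustrative, not values from the paper:
```python
from transformers import pipeline

# Hypothetical usage sketch for Catalan news summarization.
summarizer = pipeline("summarization", model="ELiRF/NASCA", tokenizer="ELiRF/NASCA")
article = "La Universitat Politècnica de València (UPV) ha digitalitzat en 3D la primera peça de dansa en un metavers per al Festival Dansa València."
print(summarizer(article, max_length=128, min_length=20)[0]["summary_text"])
```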
### BibTeX entry
```bibtex
@Article{app11219872,
AUTHOR = {Ahuir, Vicent and Hurtado, Lluís-F. and González, José Ángel and Segarra, Encarna},
TITLE = {NASca and NASes: Two Monolingual Pre-Trained Models for Abstractive Summarization in Catalan and Spanish},
JOURNAL = {Applied Sciences},
VOLUME = {11},
YEAR = {2021},
NUMBER = {21},
ARTICLE-NUMBER = {9872},
URL = {https://www.mdpi.com/2076-3417/11/21/9872},
ISSN = {2076-3417},
DOI = {10.3390/app11219872}
}
``` |
EMBEDDIA/bertic-tweetsentiment | 98551938159c6e7d0776c20fdd7baf79427253c1 | 2021-07-09T14:34:25.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | false | EMBEDDIA | null | EMBEDDIA/bertic-tweetsentiment | 6 | null | transformers | 14,795 | Entry not found |
EMBO/sd-panelization | c476892a1212dd44da88d3b3126a933205b7ab11 | 2022-03-27T13:21:35.000Z | [
"pytorch",
"jax",
"roberta",
"token-classification",
"english",
"dataset:EMBO/sd-nlp",
"transformers",
"license:agpl-3.0",
"autotrain_compatible"
]
| token-classification | false | EMBO | null | EMBO/sd-panelization | 6 | null | transformers | 14,796 | ---
language:
- english
thumbnail:
tags:
- token classification
-
license: agpl-3.0
datasets:
- EMBO/sd-nlp
metrics:
-
---
# sd-panelization
## Model description
This model is a [RoBERTa base model](https://huggingface.co/roberta-base) that was further trained using a masked language modeling task on a compendium of english scientific textual examples from the life sciences using the [BioLang dataset](https://huggingface.co/datasets/EMBO/biolang). It was then fine-tuned for token classification on the SourceData [sd-nlp](https://huggingface.co/datasets/EMBO/sd-nlp) dataset with the `PANELIZATION` task to perform 'parsing' or 'segmentation' of figure legends into fragments corresponding to sub-panels.
Figures are usually composite representations of results obtained with heterogeneous experimental approaches and systems. Breaking figures into panels allows identifying more coherent descriptions of individual scientific experiments.
## Intended uses & limitations
#### How to use
The intended use of this model is for 'parsing' figure legends into sub-fragments corresponding to individual panels as used in SourceData annotations (https://sourcedata.embo.org).
To have a quick check of the model:
```python
from transformers import pipeline, RobertaTokenizerFast, RobertaForTokenClassification
example = """Fig 4. a, Volume density of early (Avi) and late (Avd) autophagic vacuoles.a, Volume density of early (Avi) and late (Avd) autophagic vacuoles from four independent cultures. Examples of Avi and Avd are shown in b and c, respectively. Bars represent 0.4 µm. d, Labelling density of cathepsin-D as estimated in two independent experiments. e, Labelling density of LAMP-1."""
tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', max_len=512)
model = RobertaForTokenClassification.from_pretrained('EMBO/sd-panelization')
ner = pipeline('ner', model, tokenizer=tokenizer)
res = ner(example)
for r in res: print(r['word'], r['entity'])
```
#### Limitations and bias
The model must be used with the `roberta-base` tokenizer.
## Training data
The model was trained for token classification using the [`EMBO/sd-nlp PANELIZATION`](https://huggingface.co/datasets/EMBO/sd-nlp) dataset which includes manually annotated examples.
## Training procedure
The training was run on an NVIDIA DGX Station with 4XTesla V100 GPUs.
Training code is available at https://github.com/source-data/soda-roberta
- Model fine-tuned: EMBO/bio-lm
- Tokenizer vocab size: 50265
- Training data: EMBO/sd-nlp
- Dataset configuration: PANELIZATION
- Training with 2175 examples.
- Evaluating on 622 examples.
- Training on 2 features: `O`, `B-PANEL_START`
- Epochs: 1.3
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 0.0001
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
## Eval results
Testing on 1802 examples from test set with `sklearn.metrics`:
```
precision recall f1-score support
PANEL_START 0.89 0.95 0.92 5427
micro avg 0.89 0.95 0.92 5427
macro avg 0.89 0.95 0.92 5427
weighted avg 0.89 0.95 0.92 5427
```
|
EhsanAghazadeh/bert-based-uncased-sst2-e3 | 2d14d69cd498812bebca5ee27b7da01dea6f21fb | 2022-01-02T11:26:35.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | EhsanAghazadeh | null | EhsanAghazadeh/bert-based-uncased-sst2-e3 | 6 | null | transformers | 14,797 | Entry not found |
EhsanAghazadeh/bert-based-uncased-sst2-e5 | 9e8d7330603a7e972d20d79559ee660fd9d1dff7 | 2022-01-02T14:19:20.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | EhsanAghazadeh | null | EhsanAghazadeh/bert-based-uncased-sst2-e5 | 6 | null | transformers | 14,798 | Entry not found |
EthanChen0418/domain-cls-nine-classes | 79acb0f68d410dc5e495ae9c3350c970802c1fb0 | 2021-09-27T04:35:15.000Z | [
"pytorch",
"bart",
"text-classification",
"transformers"
]
| text-classification | false | EthanChen0418 | null | EthanChen0418/domain-cls-nine-classes | 6 | null | transformers | 14,799 | Entry not found |