| modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
paulopirozelli/modelo-teste | d29bc570f84514271dbfe32b28f6ae16484ee515 | 2022-05-30T17:05:38.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:yelp_review_full",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | paulopirozelli | null | paulopirozelli/modelo-teste | 3 | null | transformers | 22,500 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- yelp_review_full
model-index:
- name: modelo-teste
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# modelo-teste
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the yelp_review_full dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
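For reference, these settings map roughly onto `TrainingArguments` as sketched below; the `output_dir` is a placeholder and the Adam betas/epsilon above are the library defaults, since the original training script is not part of this card.
```python
from transformers import TrainingArguments

# Sketch only: output_dir is a placeholder; adam_beta1/beta2/epsilon keep the defaults listed above.
training_args = TrainingArguments(
    output_dir="modelo-teste",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```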
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 1.1553 | 0.57 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
UBC-NLP/prags1 | 28205491df8e257d47bc84c617c4d77e997ad440 | 2022-06-02T22:53:46.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"license:cc-by-nc-3.0",
"autotrain_compatible"
] | fill-mask | false | UBC-NLP | null | UBC-NLP/prags1 | 3 | null | transformers | 22,501 | ---
license: cc-by-nc-3.0
---
PragS1: Pragmatic Masked Language Modeling with Hashtag_end dataset followed by Emoji-Based Surrogate Fine-Tuning
You can load this model and use it for downstream fine-tuning. For example (set `label_size` to the number of labels in your downstream task):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
label_size = 2  # placeholder: set to the number of labels in your downstream task
tokenizer = AutoTokenizer.from_pretrained('UBC-NLP/prags1', use_fast=True)
model = AutoModelForSequenceClassification.from_pretrained('UBC-NLP/prags1', num_labels=label_size)
```
More details are in our paper:
```
@inproceedings{zhang-abdul-mageed-2022-improving,
title = "Improving Social Meaning Detection with Pragmatic Masking and Surrogate Fine-Tuning",
author = "Zhang, Chiyu and
Abdul-Mageed, Muhammad",
booktitle = "Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment {\&} Social Media Analysis",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.wassa-1.14",
pages = "141--156",
}
``` |
UBC-NLP/prags2 | 95c13300979256cc9e75aa4995b620e226fac406 | 2022-06-02T22:52:49.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"license:cc-by-nc-3.0",
"autotrain_compatible"
] | fill-mask | false | UBC-NLP | null | UBC-NLP/prags2 | 3 | null | transformers | 22,502 | ---
license: cc-by-nc-3.0
---
PragS2: Pragmatic Masked Language Modeling with Emoji_any dataset followed by Hashtag-Based Surrogate Fine-Tuning
You can load this model and use it for downstream fine-tuning. For example (set `label_size` to the number of labels in your downstream task):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
label_size = 2  # placeholder: set to the number of labels in your downstream task
tokenizer = AutoTokenizer.from_pretrained('UBC-NLP/prags2', use_fast=True)
model = AutoModelForSequenceClassification.from_pretrained('UBC-NLP/prags2', num_labels=label_size)
```
More details are in our paper:
```
@inproceedings{zhang-abdul-mageed-2022-improving,
title = "Improving Social Meaning Detection with Pragmatic Masking and Surrogate Fine-Tuning",
author = "Zhang, Chiyu and
Abdul-Mageed, Muhammad",
booktitle = "Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment {\&} Social Media Analysis",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.wassa-1.14",
pages = "141--156",
}
``` |
Splend1dchan/xtreme_s_xlsr_300m_t5lephone_minds14.en-US_2 | c66f1c88c20f96178ebe6473dd0af1db795dabb9 | 2022-05-31T00:26:22.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"transformers"
] | audio-classification | false | Splend1dchan | null | Splend1dchan/xtreme_s_xlsr_300m_t5lephone_minds14.en-US_2 | 3 | null | transformers | 22,503 | Entry not found |
Splend1dchan/xtreme_s_xlsr_300m_minds14.en-US_2 | 8f33d5aee43affb37c43c0a84c6ed2824c117026 | 2022-05-31T00:59:25.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"en-US",
"dataset:xtreme_s",
"transformers",
"minds14",
"google/xtreme_s",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | audio-classification | false | Splend1dchan | null | Splend1dchan/xtreme_s_xlsr_300m_minds14.en-US_2 | 3 | null | transformers | 22,504 | ---
language:
- en-US
license: apache-2.0
tags:
- minds14
- google/xtreme_s
- generated_from_trainer
datasets:
- xtreme_s
metrics:
- f1
- accuracy
model-index:
- name: xtreme_s_xlsr_300m_minds14.en-US_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xtreme_s_xlsr_300m_minds14.en-US_2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the GOOGLE/XTREME_S - MINDS14.EN-US dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5685
- F1: 0.8747
- Accuracy: 0.8759
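For inference, the fine-tuned checkpoint can be used with the `audio-classification` pipeline. This is a minimal sketch; the file name is a placeholder for a 16 kHz speech recording.
```python
from transformers import pipeline

# Minimal usage sketch; "recording.wav" is a placeholder for a 16 kHz mono speech file.
classifier = pipeline(
    "audio-classification",
    model="Splend1dchan/xtreme_s_xlsr_300m_minds14.en-US_2",
)
print(classifier("recording.wav"))
```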
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| 2.6195 | 3.95 | 20 | 2.6348 | 0.0172 | 0.0816 |
| 2.5925 | 7.95 | 40 | 2.6119 | 0.0352 | 0.0851 |
| 2.1271 | 11.95 | 60 | 2.3066 | 0.1556 | 0.1986 |
| 1.2618 | 15.95 | 80 | 1.3810 | 0.6877 | 0.7128 |
| 0.5455 | 19.95 | 100 | 1.0403 | 0.6992 | 0.7270 |
| 0.2571 | 23.95 | 120 | 0.8423 | 0.8160 | 0.8121 |
| 0.3478 | 27.95 | 140 | 0.6500 | 0.8516 | 0.8440 |
| 0.0732 | 31.95 | 160 | 0.7066 | 0.8123 | 0.8156 |
| 0.1092 | 35.95 | 180 | 0.5878 | 0.8767 | 0.8759 |
| 0.0271 | 39.95 | 200 | 0.5994 | 0.8578 | 0.8617 |
| 0.4664 | 43.95 | 220 | 0.7830 | 0.8403 | 0.8440 |
| 0.0192 | 47.95 | 240 | 0.5685 | 0.8747 | 0.8759 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
GiordanoB/mt5-base-finetuned-summarization-V2 | ae682a4f3e8f62997e189ae0b853252513750024 | 2022-05-31T16:24:46.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | GiordanoB | null | GiordanoB/mt5-base-finetuned-summarization-V2 | 3 | null | transformers | 22,505 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-base-finetuned-summarization-V2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-summarization-V2
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 8.3409
- Rouge1: 6.1259
- Rouge2: 1.4637
- Rougel: 5.3192
- Rougelsum: 5.7739
- Gen Len: 9.9286
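A minimal inference sketch with the `summarization` pipeline is shown below; the input string and length limits are placeholders (the evaluation generations above average about 10 tokens).
```python
from transformers import pipeline

# Minimal usage sketch; the input text and length limits are placeholders.
summarizer = pipeline(
    "summarization",
    model="GiordanoB/mt5-base-finetuned-summarization-V2",
)
print(summarizer("Insert the long document to be summarized here.", max_length=32, min_length=5))
```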
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 15 | 10.0266 | 6.7528 | 2.8064 | 5.9938 | 6.4352 | 10.0 |
| No log | 2.0 | 30 | 8.4159 | 6.1259 | 1.4637 | 5.3192 | 5.7739 | 10.0714 |
| No log | 3.0 | 45 | 8.3409 | 6.1259 | 1.4637 | 5.3192 | 5.7739 | 9.9286 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
hunkim/sentence-transformers-klue-bert-base | 4ae8cfae6a1a0e4a480cbcaff9a5c9f56c0f6cbc | 2022-05-31T06:46:31.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | hunkim | null | hunkim/sentence-transformers-klue-bert-base | 3 | null | sentence-transformers | 22,506 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# hunkim/sentence-transformers-klue-bert-base
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('hunkim/sentence-transformers-klue-bert-base')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('hunkim/sentence-transformers-klue-bert-base')
model = AutoModel.from_pretrained('hunkim/sentence-transformers-klue-bert-base')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=hunkim/sentence-transformers-klue-bert-base)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 365 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 146,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
OneFly/xlm-roberta-base-finetuned-panx-de | 660305eda07c6e57b994be6499d6ce1a959b0365 | 2022-05-31T14:01:40.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | OneFly | null | OneFly/xlm-roberta-base-finetuned-panx-de | 3 | 1 | transformers | 22,507 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8620945214069894
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1372
- F1: 0.8621
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2575 | 1.0 | 525 | 0.1621 | 0.8292 |
| 0.1287 | 2.0 | 1050 | 0.1378 | 0.8526 |
| 0.0831 | 3.0 | 1575 | 0.1372 | 0.8621 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
huggingtweets/hellokitty | 4629aa368b95d831c96fc3fb057e01e2724dfb88 | 2022-05-31T08:42:57.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/hellokitty | 3 | null | transformers | 22,508 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1476611165157355521/-lvlmsRT_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Hello Kitty</div>
<div style="text-align: center; font-size: 14px;">@hellokitty</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Hello Kitty.
| Data | Hello Kitty |
| --- | --- |
| Tweets downloaded | 3218 |
| Retweets | 286 |
| Short tweets | 117 |
| Tweets kept | 2815 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/32b69c39/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @hellokitty's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1npkfvyz) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1npkfvyz/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/hellokitty')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Jorgeutd/distilbart-cnn-12-6-finetuned-xsum | f0d48cafba853789619207c1cd61c17437fde278 | 2022-05-31T14:48:25.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Jorgeutd | null | Jorgeutd/distilbart-cnn-12-6-finetuned-xsum | 3 | null | transformers | 22,509 | Entry not found |
ceggian/sbart_pt_reddit_softmax_32 | b94bffa136d8236141ea213f62539d9da22cfe93 | 2022-06-01T07:41:57.000Z | [
"pytorch",
"bart",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | ceggian | null | ceggian/sbart_pt_reddit_softmax_32 | 3 | null | sentence-transformers | 22,510 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 117759 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.SoftmaxLoss.SoftmaxLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 11775,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: BartModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jonahank/KlimaBERT | 839102649c2670de6d0938e7147aefa308a48f65 | 2022-06-18T11:20:26.000Z | [
"pytorch",
"bert",
"text-classification",
"da",
"danish",
"arxiv:1810.04805",
"transformers",
"climate change",
"climate-classifier",
"political quotes",
"klimabert"
] | text-classification | false | jonahank | null | jonahank/KlimaBERT | 3 | 3 | transformers | 22,511 | ---
language:
- da
- danish
tags:
- climate change
- climate-classifier
- political quotes
- klimabert
---
# Identifying and Analysing political quotes from the Danish Parliament related to climate change using NLP
**KlimaBERT** is a sequence classifier fine-tuned to predict whether political quotes are climate-related. For the positive class 1, "climate-related", the model achieves an F1-score of 0.97, a Precision of 0.97, and a Recall of 0.97. The negative class, 0, is defined as "non-climate-related".
KlimaBERT is fine-tuned from the pre-trained DaBERT-uncased model on a training set of 1,000 manually labelled data points. The training set contains both political quotes and summaries of bills from the [Danish Parliament](https://www.ft.dk/).
The model was created to identify political quotes related to climate change and performs best on official texts from the Danish Parliament.
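A minimal inference sketch with the `text-classification` pipeline is given below; the example quote is invented, and the exact label names returned depend on how the classifier head was configured.
```python
from transformers import pipeline

# Minimal usage sketch; the quote is an invented example of a Danish climate-related statement.
classifier = pipeline("text-classification", model="jonahank/KlimaBERT")
print(classifier("Vi skal reducere CO2-udledningen med 70 procent inden 2030."))
```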
### Fine-tuning
To fine-tune a model similar to KlimaBERT, follow the [fine-tuning notebooks](https://github.com/jonahank/Vote-Prediction-Model/tree/main/climate_classifier)
### References
BERT:
Devlin, J., M.-W. Chang, K. Lee, and K. Toutanova (2018). Bert: Pre-training of deep
bidirectional transformers for language understanding.
https://arxiv.org/abs/1810.04805
DaBERT:
Certainly (2021). Certainly has trained the most advanced danish bert model to date.
https://www.certainly.io/blog/danish-bert-model/.
### Acknowledgements
The resources are created through the work of my Master's thesis, so I would like to thank my supervisors [Leon Derczynski](https://www.derczynski.com/itu/) and [Vedran Sekara](https://vedransekara.github.io/) for the great support throughout the project! And a HUGE thanks to [Gustav Gyrst](https://github.com/Gyrst) for great sparring and co-development of the tools you find in this repo.
### Contact
For any further help, questions, comments, etc., feel free to contact the author Jonathan Kristensen on [LinkedIn](https://www.linkedin.com/in/jonathan-kristensen-444a96104) or by creating a "discussion" on this model's page.
|
chrisvinsen/wav2vec2-19 | ec30153fc7bd9494e36d2f537461502a41110f17 | 2022-06-02T09:03:33.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | chrisvinsen | null | chrisvinsen/wav2vec2-19 | 3 | null | transformers | 22,512 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-19
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6305
- Wer: 0.4499
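For transcription, the model can be loaded with the `automatic-speech-recognition` pipeline; a minimal sketch follows, where the audio path is a placeholder for a 16 kHz recording in the (undocumented) training language.
```python
from transformers import pipeline

# Minimal usage sketch; "audio.wav" is a placeholder for a 16 kHz speech recording.
asr = pipeline("automatic-speech-recognition", model="chrisvinsen/wav2vec2-19")
print(asr("audio.wav"))
```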
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4816 | 2.74 | 400 | 1.0717 | 0.8927 |
| 0.751 | 5.48 | 800 | 0.7155 | 0.7533 |
| 0.517 | 8.22 | 1200 | 0.7039 | 0.6675 |
| 0.3988 | 10.96 | 1600 | 0.5935 | 0.6149 |
| 0.3179 | 13.7 | 2000 | 0.6477 | 0.5999 |
| 0.2755 | 16.44 | 2400 | 0.5549 | 0.5798 |
| 0.2343 | 19.18 | 2800 | 0.6626 | 0.5798 |
| 0.2103 | 21.92 | 3200 | 0.6488 | 0.5674 |
| 0.1877 | 24.66 | 3600 | 0.5874 | 0.5339 |
| 0.1719 | 27.4 | 4000 | 0.6354 | 0.5389 |
| 0.1603 | 30.14 | 4400 | 0.6612 | 0.5210 |
| 0.1401 | 32.88 | 4800 | 0.6676 | 0.5131 |
| 0.1286 | 35.62 | 5200 | 0.6366 | 0.5075 |
| 0.1159 | 38.36 | 5600 | 0.6064 | 0.4977 |
| 0.1084 | 41.1 | 6000 | 0.6530 | 0.4835 |
| 0.0974 | 43.84 | 6400 | 0.6118 | 0.4853 |
| 0.0879 | 46.58 | 6800 | 0.6316 | 0.4770 |
| 0.0815 | 49.32 | 7200 | 0.6125 | 0.4664 |
| 0.0708 | 52.05 | 7600 | 0.6449 | 0.4683 |
| 0.0651 | 54.79 | 8000 | 0.6068 | 0.4571 |
| 0.0555 | 57.53 | 8400 | 0.6305 | 0.4499 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
adnankhawaja/XLNET_RU | 14a74f1202f0af05885b0e91859a40c69c248be8 | 2022-06-04T08:43:51.000Z | [
"pytorch",
"xlnet",
"transformers"
] | null | false | adnankhawaja | null | adnankhawaja/XLNET_RU | 3 | null | transformers | 22,513 | Entry not found |
gianfrancodemarco/distilbert-base-uncased-finetuned-final-nlp | 55eaebbe365366d32935b816a4e9e7da962811db | 2022-06-01T12:46:08.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | gianfrancodemarco | null | gianfrancodemarco/distilbert-base-uncased-finetuned-final-nlp | 3 | null | transformers | 22,514 | Entry not found |
lorenzkuhn/distilbert-base-uncased-finetuned-squad | dc8b0e8f07d93cfadd4e109e02f6e6a74fcdf00b | 2022-06-06T10:52:07.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad_v2",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | lorenzkuhn | null | lorenzkuhn/distilbert-base-uncased-finetuned-squad | 3 | null | transformers | 22,515 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3206
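A minimal inference sketch with the `question-answering` pipeline is shown below; the question/context pair is made up, and since the model was trained on SQuAD v2, `handle_impossible_answer=True` allows no-answer predictions.
```python
from transformers import pipeline

# Minimal usage sketch; the question and context are made-up examples.
qa = pipeline(
    "question-answering",
    model="lorenzkuhn/distilbert-base-uncased-finetuned-squad",
)
result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the squad_v2 dataset.",
    handle_impossible_answer=True,
)
print(result)
```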
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2156 | 1.0 | 8235 | 1.1791 |
| 0.9413 | 2.0 | 16470 | 1.2182 |
| 0.7514 | 3.0 | 24705 | 1.3206 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
devprisha/DialoGPT-small-cassandroid | 10aabf1abafcc3dbaaba20d7bb8adcaf6264a35d | 2022-06-01T17:49:17.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | devprisha | null | devprisha/DialoGPT-small-cassandroid | 3 | null | transformers | 22,516 | Entry not found |
AnonymousSub/rule_based_roberta_hier_triplet_shuffled_paras_epochs_1_shard_1_squad2.0 | 521b58c8a993e5290cd1b1f5d9c2d49826fe74e6 | 2022-06-01T16:45:03.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_hier_triplet_shuffled_paras_epochs_1_shard_1_squad2.0 | 3 | null | transformers | 22,517 | Entry not found |
mohibhameed/wav2vec2-large-xls-r-urdu-colab | 14dddcc1fb2fccaff26e0df6695e3ebe8c866be4 | 2022-06-02T19:45:33.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | mohibhameed | null | mohibhameed/wav2vec2-large-xls-r-urdu-colab | 3 | null | transformers | 22,518 | Entry not found |
lmqg/t5-large-subjqa-restaurants | bb970120430d1e8f6659b746e504c688e5697687 | 2022-06-02T22:06:41.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lmqg | null | lmqg/t5-large-subjqa-restaurants | 3 | null | transformers | 22,519 | Entry not found |
chrisvinsen/wav2vec2-final-1-lm-1 | a87e676a7cf0fa4f04257027de8b785b99741916 | 2022-06-02T11:08:55.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | chrisvinsen | null | chrisvinsen/wav2vec2-final-1-lm-1 | 3 | null | transformers | 22,520 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-19
- WER: 0.283
- WER with 2-gram LM: 0.129
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6305
- Wer: 0.4499
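The 2-gram result above relies on language-model-boosted decoding. The sketch below assumes the repository ships the pyctcdecode/kenlm files expected by `Wav2Vec2ProcessorWithLM`; if it does not, fall back to the plain `Wav2Vec2Processor` with greedy decoding. The silent one-second array stands in for real 16 kHz audio.
```python
import torch
from transformers import AutoModelForCTC, Wav2Vec2ProcessorWithLM

# Sketch of LM-boosted decoding; assumes the repo contains the kenlm/pyctcdecode files.
processor = Wav2Vec2ProcessorWithLM.from_pretrained("chrisvinsen/wav2vec2-final-1-lm-1")
model = AutoModelForCTC.from_pretrained("chrisvinsen/wav2vec2-final-1-lm-1")

speech = [0.0] * 16000  # placeholder: one second of 16 kHz audio samples
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(logits.numpy()).text)
```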
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4816 | 2.74 | 400 | 1.0717 | 0.8927 |
| 0.751 | 5.48 | 800 | 0.7155 | 0.7533 |
| 0.517 | 8.22 | 1200 | 0.7039 | 0.6675 |
| 0.3988 | 10.96 | 1600 | 0.5935 | 0.6149 |
| 0.3179 | 13.7 | 2000 | 0.6477 | 0.5999 |
| 0.2755 | 16.44 | 2400 | 0.5549 | 0.5798 |
| 0.2343 | 19.18 | 2800 | 0.6626 | 0.5798 |
| 0.2103 | 21.92 | 3200 | 0.6488 | 0.5674 |
| 0.1877 | 24.66 | 3600 | 0.5874 | 0.5339 |
| 0.1719 | 27.4 | 4000 | 0.6354 | 0.5389 |
| 0.1603 | 30.14 | 4400 | 0.6612 | 0.5210 |
| 0.1401 | 32.88 | 4800 | 0.6676 | 0.5131 |
| 0.1286 | 35.62 | 5200 | 0.6366 | 0.5075 |
| 0.1159 | 38.36 | 5600 | 0.6064 | 0.4977 |
| 0.1084 | 41.1 | 6000 | 0.6530 | 0.4835 |
| 0.0974 | 43.84 | 6400 | 0.6118 | 0.4853 |
| 0.0879 | 46.58 | 6800 | 0.6316 | 0.4770 |
| 0.0815 | 49.32 | 7200 | 0.6125 | 0.4664 |
| 0.0708 | 52.05 | 7600 | 0.6449 | 0.4683 |
| 0.0651 | 54.79 | 8000 | 0.6068 | 0.4571 |
| 0.0555 | 57.53 | 8400 | 0.6305 | 0.4499 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
serhanciftlikci/improved_adversarial_nli_model | 4da750f11f5178f397a6ebcaefe541e0af3879c3 | 2022-06-02T19:38:38.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"transformers",
"license:mit"
] | text-classification | false | serhanciftlikci | null | serhanciftlikci/improved_adversarial_nli_model | 3 | null | transformers | 22,521 | ---
license: mit
---
|
chrisvinsen/wav2vec2-23 | 7cc38e0dd1e3fc84e3353abb3dcee90c93b85dcb | 2022-06-03T06:15:21.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | chrisvinsen | null | chrisvinsen/wav2vec2-23 | 3 | null | transformers | 22,522 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-23
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-23
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1230
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 4.2642 | 1.37 | 200 | 2.9756 | 1.0 |
| 2.8574 | 2.74 | 400 | 3.1631 | 1.0 |
| 2.8588 | 4.11 | 600 | 3.1208 | 1.0 |
| 2.8613 | 5.48 | 800 | 3.1113 | 1.0 |
| 2.8599 | 6.85 | 1000 | 3.2679 | 1.0 |
| 2.8577 | 8.22 | 1200 | 3.0904 | 1.0 |
| 2.8575 | 9.59 | 1400 | 3.2444 | 1.0 |
| 2.8538 | 10.96 | 1600 | 3.0674 | 1.0 |
| 2.8564 | 12.33 | 1800 | 3.1957 | 1.0 |
| 2.8555 | 13.7 | 2000 | 3.0881 | 1.0 |
| 2.8542 | 15.07 | 2200 | 3.1488 | 1.0 |
| 2.8538 | 16.44 | 2400 | 3.1184 | 1.0 |
| 2.854 | 17.81 | 2600 | 3.1133 | 1.0 |
| 2.8553 | 19.18 | 2800 | 3.1508 | 1.0 |
| 2.8534 | 20.55 | 3000 | 3.0646 | 1.0 |
| 2.8538 | 21.92 | 3200 | 3.1374 | 1.0 |
| 2.8545 | 23.29 | 3400 | 3.1020 | 1.0 |
| 2.8539 | 24.66 | 3600 | 3.1631 | 1.0 |
| 2.8558 | 26.03 | 3800 | 3.1063 | 1.0 |
| 2.8508 | 27.4 | 4000 | 3.1271 | 1.0 |
| 2.8537 | 28.77 | 4200 | 3.1230 | 1.0 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
baru98/distilbert-base-uncased-finetuned-squad | cae0e53d50fc8a68dc17365ac1c9b91340227f7f | 2022-06-03T13:54:01.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | baru98 | null | baru98/distilbert-base-uncased-finetuned-squad | 3 | null | transformers | 22,523 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1274
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2393 | 1.0 | 5475 | 1.1570 |
| 0.9651 | 2.0 | 10950 | 1.0903 |
| 0.7513 | 3.0 | 16425 | 1.1274 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Worldman/pega_570_articles | bc1d1eca5375467ebc4cb5a38c859e50c3d3cccf | 2022-06-03T14:51:50.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Worldman | null | Worldman/pega_570_articles | 3 | null | transformers | 22,524 | ---
tags:
- generated_from_trainer
model-index:
- name: pega_570_articles
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pega_570_articles
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
baru98/bert-base-cased-finetuned-squad | 4885c1ecc3827cc66a8600ad18de1e497c55748e | 2022-06-04T02:53:28.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | baru98 | null | baru98/bert-base-cased-finetuned-squad | 3 | null | transformers | 22,525 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-cased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 5.4212
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 7 | 5.7012 |
| No log | 2.0 | 14 | 5.5021 |
| No log | 3.0 | 21 | 5.4212 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
yanekyuk/convberturk-keyword-extractor | ed8719caae12212ffad155ff7ae236730b3e06e9 | 2022-06-04T11:19:51.000Z | [
"pytorch",
"convbert",
"token-classification",
"tr",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | yanekyuk | null | yanekyuk/convberturk-keyword-extractor | 3 | null | transformers | 22,526 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- accuracy
- f1
language:
- tr
widget:
- text: "İngiltere'de düzenlenen Avrupa Tekvando ve Para Tekvando Şampiyonası’nda millî tekvandocular 5 altın, 2 gümüş ve 4 bronz olmak üzere 11, millî para tekvandocular ise 4 altın, 3 gümüş ve 1 bronz olmak üzere 8 madalya kazanarak takım halinde Avrupa şampiyonu oldu."
- text: "Füme somon dedik ama aslında lox salamuralanmış somon anlamına geliyor, füme etme opsiyonel. Lox bagel, 1930'larda Eggs Benedict furyasında New Yorklu Yahudi cemaati tarafından koşer bir alternatif olarak çıkan bir lezzet. Günümüzde benim hangover yüreğim dâhil dünyanın birçok yerinde enfes bir kahvaltı sandviçi."
- text: "Türkiye'de son aylarda sıklıkla tartışılan konut satışı karşılığında yabancılara vatandaşlık verilmesi konusunu beyin göçü kapsamında ele almak mümkün. Daha önce 250 bin dolar olan vatandaşlık bedeli yükselen tepkiler üzerine 400 bin dolara çıkarılmıştı. Türkiye'den göç eden iyi eğitimli kişilerin , gittikleri ülkelerde 250 bin dolar tutarında yabancı yatırıma denk olduğu göz önüne alındığında nitelikli insan gücünün yabancılara konut karşılığında satılan vatandaşlık bedelin eş olduğunu görüyoruz. Yurt dışına giden her bir vatandaşın yüksek teknolojili katma değer üreten sektörlere yapacağı katkılar göz önünde bulundurulduğunda bu açığın inşaat sektörüyle kapatıldığını da görüyoruz. Beyin göçü konusunda sadece ekonomik perspektiften bakıldığında bile kısa vadeli döviz kaynağı yaratmak için kullanılan vatandaşlık satışı yerine beyin göçünü önleyecek önlemler alınmasının ülkemize çok daha faydalı olacağı sonucunu çıkarıyoruz."
- text: "Türkiye’de resmî verilere göre, 15 ve daha yukarı yaştaki kişilerde mevsim etkisinden arındırılmış işsiz sayısı, bu yılın ilk çeyreğinde bir önceki çeyreğe göre 50 bin kişi artarak 3 milyon 845 bin kişi oldu. Mevsim etkisinden arındırılmış işsizlik oranı ise 0,1 puanlık artışla %11,4 seviyesinde gerçekleşti. İşsizlik oranı, ilk çeyrekte geçen yılın aynı çeyreğine göre 1,7 puan azaldı."
- text: "Boeing’in insansız uzay aracı Starliner, birtakım sorunlara rağmen Uluslararası Uzay İstasyonuna (ISS) ulaşarak ilk kez başarılı bir şekilde kenetlendi. Aracın ISS’te beş gün kalmasını takiben sorunsuz bir şekilde New Mexico’ya inmesi halinde Boeing, sonbaharda astronotları yörüngeye göndermek için Starliner’ı kullanabilir.\n\nNeden önemli? NASA’nın personal aracı üretmeyi durdurmasından kaynaklı olarak görevli astronotlar ve kozmonotlar, ISS’te Rusya’nın ürettiği uzay araçları ile taşınıyordu. Starliner’ın kendini kanıtlaması ise bu konuda Rusya’ya olan bağımlılığın potansiyel olarak ortadan kalkabileceği anlamına geliyor."
model-index:
- name: convberturk-keyword-extractor
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convberturk-keyword-extractor
This model is a fine-tuned version of [dbmdz/convbert-base-turkish-cased](https://huggingface.co/dbmdz/convbert-base-turkish-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4098
- Precision: 0.6742
- Recall: 0.7035
- Accuracy: 0.9175
- F1: 0.6886
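A minimal inference sketch with the `token-classification` pipeline is shown below; `aggregation_strategy="simple"` merges word pieces into keyword spans, and the example sentence is adapted from the widget texts above.
```python
from transformers import pipeline

# Minimal usage sketch; aggregation_strategy="simple" groups word pieces into keyword spans.
extractor = pipeline(
    "token-classification",
    model="yanekyuk/convberturk-keyword-extractor",
    aggregation_strategy="simple",
)
print(extractor("İşsizlik oranı ilk çeyrekte %11,4 seviyesinde gerçekleşti."))
```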
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:--------:|:------:|
| 0.174 | 1.0 | 1875 | 0.1920 | 0.6546 | 0.6869 | 0.9184 | 0.6704 |
| 0.1253 | 2.0 | 3750 | 0.2030 | 0.6527 | 0.7317 | 0.9179 | 0.6900 |
| 0.091 | 3.0 | 5625 | 0.2517 | 0.6499 | 0.7473 | 0.9163 | 0.6952 |
| 0.0684 | 4.0 | 7500 | 0.2828 | 0.6633 | 0.7270 | 0.9167 | 0.6937 |
| 0.0536 | 5.0 | 9375 | 0.3307 | 0.6706 | 0.7194 | 0.9180 | 0.6942 |
| 0.0384 | 6.0 | 11250 | 0.3669 | 0.6655 | 0.7161 | 0.9157 | 0.6898 |
| 0.0316 | 7.0 | 13125 | 0.3870 | 0.6792 | 0.7002 | 0.9176 | 0.6895 |
| 0.0261 | 8.0 | 15000 | 0.4098 | 0.6742 | 0.7035 | 0.9175 | 0.6886 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
mishtert/iec | aca6fe1297bdde608cd04315f9197abd1dcbc08c | 2022-06-04T18:01:26.000Z | [
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"dataset:funsd",
"transformers",
"generated_from_trainer",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | token-classification | false | mishtert | null | mishtert/iec | 3 | null | transformers | 22,527 | ---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
datasets:
- funsd
model_index:
- name: layoutlmv2-finetuned-funsd
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: funsd
type: funsd
args: funsd
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-finetuned-funsd
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on the funsd dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.9.0.dev0
- Pytorch 1.8.0+cu101
- Datasets 1.9.0
- Tokenizers 0.10.3
|
Abdullah010/wav2vec2-urdu-asr-commom-voice-9.0_model_final | f16f5e49cb8ffaf789261b2327ddbe1360e09b31 | 2022-06-05T11:59:48.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Abdullah010 | null | Abdullah010/wav2vec2-urdu-asr-commom-voice-9.0_model_final | 3 | null | transformers | 22,528 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-urdu-asr-commom-voice-9.0_model_final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-urdu-asr-commom-voice-9.0_model_final
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9620
- Wer: 1.0059
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 13.2331 | 1.47 | 300 | 4.0116 | 1.0 |
| 3.3351 | 2.94 | 600 | 3.1680 | 1.0 |
| 3.1149 | 4.41 | 900 | 2.9620 | 1.0059 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
sayanmandal/t5-small_6_3-en-hi_en_LinCE | 61731623c32678116553ef5de7ba031b075cdab6 | 2022-06-05T00:31:38.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"translation",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | translation | false | sayanmandal | null | sayanmandal/t5-small_6_3-en-hi_en_LinCE | 3 | null | transformers | 22,529 | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: t5-small_6_3-en-hi_en_LinCE
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small_6_3-en-hi_en_LinCE
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2034
- Bleu: 7.8135
- Gen Len: 39.5564
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 0.99 | 94 | 3.5424 | 0.9187 | 16.7437 |
| No log | 1.99 | 188 | 3.1434 | 1.2886 | 16.8158 |
| No log | 2.99 | 282 | 2.9494 | 1.4577 | 16.7824 |
| No log | 3.99 | 376 | 2.8233 | 1.4745 | 16.8879 |
| No log | 4.99 | 470 | 2.7300 | 1.7116 | 16.6636 |
| 3.6303 | 5.99 | 564 | 2.6589 | 1.7857 | 16.6302 |
| 3.6303 | 6.99 | 658 | 2.6005 | 1.8572 | 16.4553 |
| 3.6303 | 7.99 | 752 | 2.5456 | 2.139 | 16.3925 |
| 3.6303 | 8.99 | 846 | 2.5023 | 2.3835 | 16.2911 |
| 3.6303 | 9.99 | 940 | 2.4725 | 2.5607 | 16.3271 |
| 2.9087 | 10.99 | 1034 | 2.4272 | 2.6614 | 16.3138 |
| 2.9087 | 11.99 | 1128 | 2.3977 | 2.9623 | 16.3338 |
| 2.9087 | 12.99 | 1222 | 2.3686 | 3.1248 | 16.2443 |
| 2.9087 | 13.99 | 1316 | 2.3438 | 3.3294 | 16.3458 |
| 2.9087 | 14.99 | 1410 | 2.3253 | 3.3885 | 16.3591 |
| 2.6588 | 15.99 | 1504 | 2.3028 | 3.3985 | 16.3124 |
| 2.6588 | 16.99 | 1598 | 2.2839 | 3.3772 | 16.3858 |
| 2.6588 | 17.99 | 1692 | 2.2704 | 3.5804 | 16.3872 |
| 2.6588 | 18.99 | 1786 | 2.2533 | 3.8751 | 16.2697 |
| 2.6588 | 19.99 | 1880 | 2.2378 | 4.0003 | 16.3271 |
| 2.6588 | 20.99 | 1974 | 2.2233 | 4.0271 | 16.3031 |
| 2.5079 | 21.99 | 2068 | 2.2160 | 4.1898 | 16.3057 |
| 2.5079 | 22.99 | 2162 | 2.2010 | 4.1216 | 16.3031 |
| 2.5079 | 23.99 | 2256 | 2.1935 | 4.1311 | 16.2644 |
| 2.5079 | 24.99 | 2350 | 2.1833 | 4.1373 | 16.3138 |
| 2.5079 | 25.99 | 2444 | 2.1725 | 4.3471 | 16.3057 |
| 2.4027 | 26.99 | 2538 | 2.1657 | 4.183 | 16.3298 |
| 2.4027 | 27.99 | 2632 | 2.1611 | 4.2867 | 16.3351 |
| 2.4027 | 28.99 | 2726 | 2.1531 | 4.2689 | 16.2737 |
| 2.4027 | 29.99 | 2820 | 2.1482 | 4.4802 | 16.2644 |
| 2.4027 | 30.99 | 2914 | 2.1443 | 4.469 | 16.231 |
| 2.3251 | 31.99 | 3008 | 2.1375 | 4.5295 | 16.227 |
| 2.3251 | 32.99 | 3102 | 2.1330 | 4.4799 | 16.2243 |
| 2.3251 | 33.99 | 3196 | 2.1307 | 4.7124 | 16.2417 |
| 2.3251 | 34.99 | 3290 | 2.1248 | 4.5954 | 16.3004 |
| 2.3251 | 35.99 | 3384 | 2.1215 | 4.7455 | 16.215 |
| 2.3251 | 36.99 | 3478 | 2.1166 | 4.6233 | 16.2016 |
| 2.2818 | 37.99 | 3572 | 2.1147 | 4.6843 | 16.219 |
| 2.2818 | 38.99 | 3666 | 2.1112 | 4.7068 | 16.2163 |
| 2.2818 | 39.99 | 3760 | 2.1071 | 4.684 | 16.223 |
| 2.2818 | 40.99 | 3854 | 2.1034 | 4.7323 | 16.2523 |
| 2.2818 | 41.99 | 3948 | 2.0998 | 4.6406 | 16.2016 |
| 2.2392 | 42.99 | 4042 | 2.1017 | 4.7609 | 16.1976 |
| 2.2392 | 43.99 | 4136 | 2.1021 | 4.7634 | 16.2069 |
| 2.2392 | 44.99 | 4230 | 2.0994 | 4.7854 | 16.1976 |
| 2.2392 | 45.99 | 4324 | 2.0980 | 4.7562 | 16.2243 |
| 2.2392 | 46.99 | 4418 | 2.0964 | 4.7921 | 16.219 |
| 2.2192 | 47.99 | 4512 | 2.0970 | 4.8029 | 16.2377 |
| 2.2192 | 48.99 | 4606 | 2.0967 | 4.7953 | 16.2176 |
| 2.2192 | 49.99 | 4700 | 2.0968 | 4.819 | 16.2457 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
ITESM/st_demo_3 | 8fb14d6efd7f60396fea33d276740709d57a77bb | 2022-06-05T04:43:41.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | ITESM | null | ITESM/st_demo_3 | 3 | null | transformers | 22,530 | Entry not found |
EmileEsmaili/gpt2-p4k | 05e0d6c24e76ae85c9707e2714655ec50575d55e | 2022-06-09T14:55:23.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | EmileEsmaili | null | EmileEsmaili/gpt2-p4k | 3 | null | transformers | 22,531 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-p4k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-p4k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
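As a starting point, here is a minimal generation sketch (not from the original author; the fine-tuning corpus is not documented here, so the output style is an open question):
```python
from transformers import pipeline

# Hedged sketch: standard GPT-2 text generation with this fine-tuned checkpoint.
generator = pipeline("text-generation", model="EmileEsmaili/gpt2-p4k")
print(generator("The new album is", max_length=40, num_return_sequences=1))
```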
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1
|
erickfm/t5-large-finetuned-bias-v7 | f5392fdd07cccaf638180ac36559c61af7a2d426 | 2022-06-05T18:29:19.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | erickfm | null | erickfm/t5-large-finetuned-bias-v7 | 3 | null | transformers | 22,532 | Entry not found |
Bistolero/german_2EP | 4c361e94454b120157bd842d57316fe359746bfe | 2022-06-05T18:43:48.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Bistolero | null | Bistolero/german_2EP | 3 | null | transformers | 22,533 | Entry not found |
Bistolero/ge_nl_64B_25K | fdcb3dd035131d1681c64eaae8e3c6b23cbedd1f | 2022-06-05T20:42:30.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Bistolero | null | Bistolero/ge_nl_64B_25K | 3 | null | transformers | 22,534 | Entry not found |
chrisvinsen/xlsr-wav2vec2-final-lm | 5efcdd00f411631c007ea251733c02bdda1fbdde | 2022-06-06T01:26:39.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | chrisvinsen | null | chrisvinsen/xlsr-wav2vec2-final-lm | 3 | null | transformers | 22,535 | Entry not found |
anlausch/aq_bert_gaq_mt | 53b539bfb19e1971d4d71e6cc7eef198222f8fb5 | 2022-06-06T08:09:38.000Z | [
"pytorch",
"bert",
"transformers",
"license:mit"
] | null | false | anlausch | null | anlausch/aq_bert_gaq_mt | 3 | null | transformers | 22,536 |
---
license: mit
---
Multi-task learning model (flat architecture) trained on GAQCorpus for 4 epochs with a learning rate of 2e-5 (optimised via grid search), in a similar way to Lauscher et al. 2020 (see below). The original model was TensorFlow-based; this model is a reimplementation with Transformers & PyTorch.
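A minimal loading sketch (not part of the original release; it assumes the checkpoint loads with the generic `AutoModel` classes, and the GAQ-specific multi-task heads may still require the authors' training code):
```python
from transformers import AutoTokenizer, AutoModel

# Hedged sketch: loads the encoder weights only; task-specific scoring heads are an assumption.
tokenizer = AutoTokenizer.from_pretrained("anlausch/aq_bert_gaq_mt")
model = AutoModel.from_pretrained("anlausch/aq_bert_gaq_mt")

inputs = tokenizer("Arguments should be backed by verifiable evidence.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # contextual representations for downstream quality scoring
```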
```
@inproceedings{lauscher-etal-2020-rhetoric,
title = "Rhetoric, Logic, and Dialectic: Advancing Theory-based Argument Quality Assessment in Natural Language Processing",
author = "Lauscher, Anne and
Ng, Lily and
Napoles, Courtney and
Tetreault, Joel",
booktitle = "Proceedings of the 28th International Conference on Computational Linguistics",
month = dec,
year = "2020",
address = "Barcelona, Spain (Online)",
publisher = "International Committee on Computational Linguistics",
url = "https://aclanthology.org/2020.coling-main.402",
doi = "10.18653/v1/2020.coling-main.402",
pages = "4563--4574",
abstract = "Though preceding work in computational argument quality (AQ) mostly focuses on assessing overall AQ, researchers agree that writers would benefit from feedback targeting individual dimensions of argumentation theory. However, a large-scale theory-based corpus and corresponding computational models are missing. We fill this gap by conducting an extensive analysis covering three diverse domains of online argumentative writing and presenting GAQCorpus: the first large-scale English multi-domain (community Q{\&}A forums, debate forums, review forums) corpus annotated with theory-based AQ scores. We then propose the first computational approaches to theory-based assessment, which can serve as strong baselines for future work. We demonstrate the feasibility of large-scale AQ annotation, show that exploiting relations between dimensions yields performance improvements, and explore the synergies between theory-based prediction and practical AQ assessment.",
}
``` |
asahi417/lmqg-mt5-small-koquad | af41199edfb1f3a8ff4dce136d997cb35048d921 | 2022-06-08T22:41:52.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | asahi417 | null | asahi417/lmqg-mt5-small-koquad | 3 | null | transformers | 22,537 | Entry not found |
lewtun/distilroberta-base-finetuned-banking77 | 9a6c87e596835e157e66d051c2d3b753b6941618 | 2022-06-06T12:43:37.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | lewtun | null | lewtun/distilroberta-base-finetuned-banking77 | 3 | null | transformers | 22,538 | Entry not found |
Splend1dchan/xtreme_s_xlsr_300m_minds14 | f09a41d6134e13b8db22ccfb901657d0307bddcf | 2022-06-06T18:51:16.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"transformers"
] | audio-classification | false | Splend1dchan | null | Splend1dchan/xtreme_s_xlsr_300m_minds14 | 3 | null | transformers | 22,539 | Entry not found |
mmillet/rubert-tiny2_finetuned_emotion_experiment_augmented_anger_fear_no_punct | 92e250aa38df1dbc25dbab39534e1adb73971846 | 2022-06-06T18:02:33.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | mmillet | null | mmillet/rubert-tiny2_finetuned_emotion_experiment_augmented_anger_fear_no_punct | 3 | null | transformers | 22,540 | Entry not found |
huggingtweets/dkostanjsak-nonewthing | ffa2cf0841a1c7a5278b1e8f6629cb528f0b6068 | 2022-06-06T14:56:38.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/dkostanjsak-nonewthing | 3 | null | transformers | 22,541 | ---
language: en
thumbnail: http://www.huggingtweets.com/dkostanjsak-nonewthing/1654527393385/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1532336212412977152/TWPqTO8d_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1022510453895974912/Z-B8B9eT_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">AI & Domagoj Kostanjšak</div>
<div style="text-align: center; font-size: 14px;">@dkostanjsak-nonewthing</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from AI & Domagoj Kostanjšak.
| Data | AI | Domagoj Kostanjšak |
| --- | --- | --- |
| Tweets downloaded | 3247 | 3247 |
| Retweets | 100 | 202 |
| Short tweets | 237 | 179 |
| Tweets kept | 2910 | 2866 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/9p2u0a0m/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dkostanjsak-nonewthing's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/gp2198uq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/gp2198uq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dkostanjsak-nonewthing')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
cammy/wav2vec2-xlsr-greek-speech-emotion-recognition | 5e14d306e204bf449f2024ddbd01a575a91e6fbe | 2022-06-06T19:17:25.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | null | false | cammy | null | cammy/wav2vec2-xlsr-greek-speech-emotion-recognition | 3 | null | transformers | 22,542 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wav2vec2-xlsr-greek-speech-emotion-recognition
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-greek-speech-emotion-recognition
This model is a fine-tuned version of [lighteternal/wav2vec2-large-xlsr-53-greek](https://huggingface.co/lighteternal/wav2vec2-large-xlsr-53-greek) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7699
- Accuracy: 0.8168
## Model description
More information needed
## Intended uses & limitations
More information needed
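As a starting point, here is a minimal usage sketch (not from the original author; it assumes the checkpoint exposes a wav2vec2 sequence-classification head, and the emotion label names depend on the unspecified fine-tuning data):
```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="cammy/wav2vec2-xlsr-greek-speech-emotion-recognition",
)
print(classifier("greek_sample.wav", top_k=3))  # placeholder path to a local Greek audio file
```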
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5594 | 0.22 | 100 | 0.7689 | 0.7649 |
| 0.4341 | 0.44 | 200 | 0.6557 | 0.8045 |
| 0.2925 | 0.66 | 300 | 0.7060 | 0.8094 |
| 0.3846 | 0.88 | 400 | 0.7699 | 0.8168 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.3.dev0
- Tokenizers 0.12.1
|
Cole/xlm-roberta-base-finetuned-panx-de | e084742673f76250dce92d820f9314c16da52d17 | 2022-06-08T15:27:30.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | Cole | null | Cole/xlm-roberta-base-finetuned-panx-de | 3 | null | transformers | 22,543 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8662369516855856
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1428
- F1: 0.8662
## Model description
More information needed
## Intended uses & limitations
More information needed
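As a starting point, here is a minimal usage sketch for German NER (PAN-X annotates PER/ORG/LOC entities; the example sentence is illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Cole/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Jeff Dean arbeitet bei Google in Zürich."))
```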
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2499 | 1.0 | 1049 | 0.1916 | 0.8157 |
| 0.1312 | 2.0 | 2098 | 0.1394 | 0.8479 |
| 0.0809 | 3.0 | 3147 | 0.1428 | 0.8662 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
nestoralvaro/mt5-base-finetuned-xsum-mlsum___summary_text_google_mt5_base | 8cacfd6057b42ba014e29f99b66d621e2af85b6c | 2022-06-07T02:18:15.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"dataset:mlsum",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | nestoralvaro | null | nestoralvaro/mt5-base-finetuned-xsum-mlsum___summary_text_google_mt5_base | 3 | null | transformers | 22,544 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mlsum
metrics:
- rouge
model-index:
- name: mt5-base-finetuned-xsum-mlsum___summary_text_google_mt5_base
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: mlsum
type: mlsum
args: es
metrics:
- name: Rouge1
type: rouge
value: 8.9973
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-xsum-mlsum___summary_text_google_mt5_base
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the mlsum dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 8.9973
- Rouge2: 0.9036
- Rougel: 7.6699
- Rougelsum: 7.716
- Gen Len: 10.2326
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.0 | 1.0 | 66592 | nan | 8.9973 | 0.9036 | 7.6699 | 7.716 | 10.2326 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
enoriega/rule_learning_test | 4eec796554c86cb69d0c441d7309a29c8a8138da | 2022-06-07T05:19:20.000Z | [
"pytorch",
"tensorboard",
"bert",
"dataset:enoriega/odinsynth_dataset",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | null | false | enoriega | null | enoriega/rule_learning_test | 3 | null | transformers | 22,545 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- enoriega/odinsynth_dataset
model-index:
- name: rule_learning_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rule_learning_test
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the enoriega/odinsynth_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1255
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 1000
- total_train_batch_size: 8000
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1764 | 0.32 | 20 | 0.2303 |
| 0.145 | 0.64 | 40 | 0.1470 |
| 0.129 | 0.96 | 60 | 0.1321 |
| 0.1256 | 1.29 | 80 | 0.1265 |
| 0.1304 | 1.61 | 100 | 0.1252 |
| 0.1235 | 1.93 | 120 | 0.1260 |
| 0.125 | 2.26 | 140 | 0.1261 |
| 0.1263 | 2.58 | 160 | 0.1262 |
| 0.1244 | 2.9 | 180 | 0.1256 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
Vkt/model-facebookptbrlarge | 067c935f00d2734b00e80266cfd5b2bd0f376c80 | 2022-06-08T15:05:20.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Vkt | null | Vkt/model-facebookptbrlarge | 3 | null | transformers | 22,546 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: model-facebookptbrlarge
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model-facebookptbrlarge
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53-portuguese](https://huggingface.co/facebook/wav2vec2-large-xlsr-53-portuguese) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2206
- Wer: 0.1322
## Model description
More information needed
## Intended uses & limitations
More information needed
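As a starting point, here is a minimal transcription sketch (not from the original author; it assumes a CTC head compatible with the ASR pipeline, and the audio path is a placeholder for a 16 kHz Brazilian Portuguese recording):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Vkt/model-facebookptbrlarge")
print(asr("exemplo_pt_br.wav"))  # placeholder path to a local audio file
```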
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 5.8975 | 0.29 | 400 | 0.4131 | 0.3336 |
| 0.5131 | 0.57 | 800 | 0.4103 | 0.3293 |
| 0.4846 | 0.86 | 1200 | 0.3493 | 0.3028 |
| 0.4174 | 1.14 | 1600 | 0.3055 | 0.2730 |
| 0.4105 | 1.43 | 2000 | 0.3283 | 0.3041 |
| 0.4028 | 1.72 | 2400 | 0.3539 | 0.3210 |
| 0.386 | 2.0 | 2800 | 0.2925 | 0.2690 |
| 0.3224 | 2.29 | 3200 | 0.2842 | 0.2665 |
| 0.3122 | 2.57 | 3600 | 0.2781 | 0.2472 |
| 0.3087 | 2.86 | 4000 | 0.2794 | 0.2692 |
| 0.2878 | 3.15 | 4400 | 0.2795 | 0.2537 |
| 0.2915 | 3.43 | 4800 | 0.2764 | 0.2478 |
| 0.2816 | 3.72 | 5200 | 0.2761 | 0.2366 |
| 0.283 | 4.0 | 5600 | 0.2641 | 0.2587 |
| 0.2448 | 4.29 | 6000 | 0.2489 | 0.2417 |
| 0.247 | 4.57 | 6400 | 0.2538 | 0.2422 |
| 0.25 | 4.86 | 6800 | 0.2660 | 0.2306 |
| 0.2256 | 5.15 | 7200 | 0.2477 | 0.2267 |
| 0.2225 | 5.43 | 7600 | 0.2364 | 0.2195 |
| 0.2217 | 5.72 | 8000 | 0.2319 | 0.2139 |
| 0.2272 | 6.0 | 8400 | 0.2489 | 0.2427 |
| 0.2016 | 6.29 | 8800 | 0.2404 | 0.2181 |
| 0.1973 | 6.58 | 9200 | 0.2532 | 0.2273 |
| 0.2101 | 6.86 | 9600 | 0.2590 | 0.2100 |
| 0.1946 | 7.15 | 10000 | 0.2414 | 0.2108 |
| 0.1845 | 7.43 | 10400 | 0.2485 | 0.2124 |
| 0.1861 | 7.72 | 10800 | 0.2405 | 0.2124 |
| 0.1851 | 8.01 | 11200 | 0.2449 | 0.2062 |
| 0.1587 | 8.29 | 11600 | 0.2510 | 0.2048 |
| 0.1694 | 8.58 | 12000 | 0.2290 | 0.2059 |
| 0.1637 | 8.86 | 12400 | 0.2376 | 0.2063 |
| 0.1594 | 9.15 | 12800 | 0.2307 | 0.1967 |
| 0.1537 | 9.44 | 13200 | 0.2274 | 0.2017 |
| 0.1498 | 9.72 | 13600 | 0.2322 | 0.2025 |
| 0.1516 | 10.01 | 14000 | 0.2323 | 0.1971 |
| 0.1336 | 10.29 | 14400 | 0.2249 | 0.1920 |
| 0.134 | 10.58 | 14800 | 0.2258 | 0.2055 |
| 0.138 | 10.86 | 15200 | 0.2250 | 0.1906 |
| 0.13 | 11.15 | 15600 | 0.2423 | 0.1920 |
| 0.1302 | 11.44 | 16000 | 0.2294 | 0.1849 |
| 0.1253 | 11.72 | 16400 | 0.2193 | 0.1889 |
| 0.1219 | 12.01 | 16800 | 0.2350 | 0.1869 |
| 0.1149 | 12.29 | 17200 | 0.2350 | 0.1903 |
| 0.1161 | 12.58 | 17600 | 0.2277 | 0.1899 |
| 0.1129 | 12.87 | 18000 | 0.2416 | 0.1855 |
| 0.1091 | 13.15 | 18400 | 0.2289 | 0.1815 |
| 0.1073 | 13.44 | 18800 | 0.2383 | 0.1799 |
| 0.1135 | 13.72 | 19200 | 0.2306 | 0.1819 |
| 0.1075 | 14.01 | 19600 | 0.2283 | 0.1742 |
| 0.0971 | 14.3 | 20000 | 0.2271 | 0.1851 |
| 0.0967 | 14.58 | 20400 | 0.2395 | 0.1809 |
| 0.1039 | 14.87 | 20800 | 0.2286 | 0.1808 |
| 0.0984 | 15.15 | 21200 | 0.2303 | 0.1821 |
| 0.0922 | 15.44 | 21600 | 0.2254 | 0.1745 |
| 0.0882 | 15.73 | 22000 | 0.2280 | 0.1836 |
| 0.0859 | 16.01 | 22400 | 0.2355 | 0.1779 |
| 0.0832 | 16.3 | 22800 | 0.2347 | 0.1740 |
| 0.0854 | 16.58 | 23200 | 0.2342 | 0.1739 |
| 0.0874 | 16.87 | 23600 | 0.2316 | 0.1719 |
| 0.0808 | 17.16 | 24000 | 0.2291 | 0.1730 |
| 0.0741 | 17.44 | 24400 | 0.2308 | 0.1674 |
| 0.0815 | 17.73 | 24800 | 0.2329 | 0.1655 |
| 0.0764 | 18.01 | 25200 | 0.2514 | 0.1711 |
| 0.0719 | 18.3 | 25600 | 0.2275 | 0.1578 |
| 0.0665 | 18.58 | 26000 | 0.2367 | 0.1614 |
| 0.0693 | 18.87 | 26400 | 0.2185 | 0.1593 |
| 0.0662 | 19.16 | 26800 | 0.2266 | 0.1678 |
| 0.0612 | 19.44 | 27200 | 0.2332 | 0.1602 |
| 0.0623 | 19.73 | 27600 | 0.2283 | 0.1670 |
| 0.0659 | 20.01 | 28000 | 0.2142 | 0.1626 |
| 0.0581 | 20.3 | 28400 | 0.2198 | 0.1646 |
| 0.063 | 20.59 | 28800 | 0.2251 | 0.1588 |
| 0.0618 | 20.87 | 29200 | 0.2186 | 0.1554 |
| 0.0549 | 21.16 | 29600 | 0.2251 | 0.1490 |
| 0.058 | 21.44 | 30000 | 0.2366 | 0.1559 |
| 0.0543 | 21.73 | 30400 | 0.2262 | 0.1535 |
| 0.0529 | 22.02 | 30800 | 0.2358 | 0.1519 |
| 0.053 | 22.3 | 31200 | 0.2198 | 0.1513 |
| 0.0552 | 22.59 | 31600 | 0.2234 | 0.1503 |
| 0.0492 | 22.87 | 32000 | 0.2191 | 0.1516 |
| 0.0488 | 23.16 | 32400 | 0.2321 | 0.1500 |
| 0.0479 | 23.45 | 32800 | 0.2152 | 0.1420 |
| 0.0453 | 23.73 | 33200 | 0.2202 | 0.1453 |
| 0.0485 | 24.02 | 33600 | 0.2235 | 0.1468 |
| 0.0451 | 24.3 | 34000 | 0.2192 | 0.1455 |
| 0.041 | 24.59 | 34400 | 0.2138 | 0.1438 |
| 0.0435 | 24.87 | 34800 | 0.2335 | 0.1423 |
| 0.0404 | 25.16 | 35200 | 0.2220 | 0.1409 |
| 0.0374 | 25.45 | 35600 | 0.2366 | 0.1437 |
| 0.0405 | 25.73 | 36000 | 0.2233 | 0.1428 |
| 0.0385 | 26.02 | 36400 | 0.2208 | 0.1414 |
| 0.0373 | 26.3 | 36800 | 0.2265 | 0.1420 |
| 0.0365 | 26.59 | 37200 | 0.2174 | 0.1402 |
| 0.037 | 26.88 | 37600 | 0.2249 | 0.1397 |
| 0.0379 | 27.16 | 38000 | 0.2173 | 0.1374 |
| 0.0354 | 27.45 | 38400 | 0.2212 | 0.1381 |
| 0.034 | 27.73 | 38800 | 0.2313 | 0.1364 |
| 0.0347 | 28.02 | 39200 | 0.2230 | 0.1356 |
| 0.0318 | 28.31 | 39600 | 0.2231 | 0.1357 |
| 0.0305 | 28.59 | 40000 | 0.2281 | 0.1366 |
| 0.0307 | 28.88 | 40400 | 0.2259 | 0.1342 |
| 0.0315 | 29.16 | 40800 | 0.2252 | 0.1332 |
| 0.0314 | 29.45 | 41200 | 0.2218 | 0.1328 |
| 0.0307 | 29.74 | 41600 | 0.2206 | 0.1322 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.8.1+cu111
- Datasets 2.2.1
- Tokenizers 0.12.1
|
ferjeffQ/roberta-base-bne-finetuned-amazon_reviews_multi | 084870205dd5942a5e0854026d9bf4ed09fad5b8 | 2022-06-07T21:47:07.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"dataset:amazon_reviews_multi",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | ferjeffQ | null | ferjeffQ/roberta-base-bne-finetuned-amazon_reviews_multi | 3 | null | transformers | 22,547 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: roberta-base-bne-finetuned-amazon_reviews_multi
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.9325
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-amazon_reviews_multi
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2207
- Accuracy: 0.9325
## Model description
More information needed
## Intended uses & limitations
More information needed
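As a starting point, here is a minimal usage sketch (not from the original author; the label names, e.g. star ratings, depend on how the amazon_reviews_multi labels were mapped during fine-tuning):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ferjeffQ/roberta-base-bne-finetuned-amazon_reviews_multi",
)
print(classifier("El producto llegó tarde y la calidad es pésima."))
```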
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1937 | 1.0 | 1250 | 0.1811 | 0.9327 |
| 0.1005 | 2.0 | 2500 | 0.2207 | 0.9325 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
cammy/wa2vec2-5epochs | 89e321db209cf56be92ae33956a7d5126714af41 | 2022-06-08T03:41:41.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | null | false | cammy | null | cammy/wa2vec2-5epochs | 3 | null | transformers | 22,548 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wa2vec2-5epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wa2vec2-5epochs
This model is a fine-tuned version of [lighteternal/wav2vec2-large-xlsr-53-greek](https://huggingface.co/lighteternal/wav2vec2-large-xlsr-53-greek) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3049
- Accuracy: 0.9282
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 454 | 0.7179 | 0.7599 |
| 0.6962 | 2.0 | 908 | 0.3806 | 0.8911 |
| 0.3776 | 3.0 | 1362 | 0.3299 | 0.9109 |
| 0.2071 | 4.0 | 1816 | 0.3021 | 0.9257 |
| 0.1262 | 5.0 | 2270 | 0.3049 | 0.9282 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.3.dev0
- Tokenizers 0.12.1
|
Vlasta/randomWeightsBert | 9153a688af0ad1dff6457a80e9c5e4a61c50897e | 2022-06-08T09:41:10.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Vlasta | null | Vlasta/randomWeightsBert | 3 | null | transformers | 22,549 | Entry not found |
joshanashakya/codebert_sourcecode_nmt_ja2pn_50E_2e-05LR | 20cdebd033c1147c8e845458acca2883570e2581 | 2022-06-08T12:23:01.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | joshanashakya | null | joshanashakya/codebert_sourcecode_nmt_ja2pn_50E_2e-05LR | 3 | null | transformers | 22,550 | Entry not found |
mmillet/rubert-base-cased_best_finetuned_emotion_experiment_augmented_anger_fear | 0c29a32332be02e6a77ac8e272b01ce9db1cf390 | 2022-06-08T15:34:25.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | mmillet | null | mmillet/rubert-base-cased_best_finetuned_emotion_experiment_augmented_anger_fear | 3 | null | transformers | 22,551 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: rubert-base-cased_best_finetuned_emotion_experiment_augmented_anger_fear
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rubert-base-cased_best_finetuned_emotion_experiment_augmented_anger_fear
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4568
- Accuracy: 0.8779
- F1: 0.8777
- Precision: 0.8780
- Recall: 0.8779
## Model description
More information needed
## Intended uses & limitations
More information needed
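As a starting point, here is a minimal usage sketch (not from the original author; the Russian emotion label names depend on the unspecified fine-tuning data):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="mmillet/rubert-base-cased_best_finetuned_emotion_experiment_augmented_anger_fear",
)
print(classifier("Мне очень страшно и тревожно."))
```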
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=0.0001
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.2647 | 1.0 | 69 | 1.0075 | 0.6013 | 0.5671 | 0.6594 | 0.6013 |
| 0.9091 | 2.0 | 138 | 0.7853 | 0.7171 | 0.7138 | 0.7169 | 0.7171 |
| 0.7305 | 3.0 | 207 | 0.6264 | 0.7829 | 0.7811 | 0.7835 | 0.7829 |
| 0.5446 | 4.0 | 276 | 0.4571 | 0.8466 | 0.8465 | 0.8470 | 0.8466 |
| 0.4039 | 5.0 | 345 | 0.4035 | 0.8612 | 0.8606 | 0.8612 | 0.8612 |
| 0.3144 | 6.0 | 414 | 0.3800 | 0.8653 | 0.8653 | 0.8665 | 0.8653 |
| 0.2711 | 7.0 | 483 | 0.3731 | 0.8674 | 0.8673 | 0.8677 | 0.8674 |
| 0.2289 | 8.0 | 552 | 0.4041 | 0.8737 | 0.8728 | 0.8746 | 0.8737 |
| 0.1944 | 9.0 | 621 | 0.4002 | 0.8789 | 0.8785 | 0.8793 | 0.8789 |
| 0.171 | 10.0 | 690 | 0.3939 | 0.8831 | 0.8827 | 0.8839 | 0.8831 |
| 0.138 | 11.0 | 759 | 0.4106 | 0.8758 | 0.8754 | 0.8761 | 0.8758 |
| 0.1141 | 12.0 | 828 | 0.4200 | 0.8810 | 0.8803 | 0.8804 | 0.8810 |
| 0.1141 | 13.0 | 897 | 0.4426 | 0.8758 | 0.8756 | 0.8763 | 0.8758 |
| 0.0961 | 14.0 | 966 | 0.4494 | 0.8758 | 0.8754 | 0.8761 | 0.8758 |
| 0.0812 | 15.0 | 1035 | 0.4568 | 0.8779 | 0.8777 | 0.8780 | 0.8779 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Splend1dchan/wav2vec2-large-lv60_byt5-small_nofreeze_bs64_2 | cf0566fd1f9e7fc37d835673e75fdb74b8ebcd2d | 2022-06-10T13:23:26.000Z | [
"pytorch",
"speechmix",
"transformers"
] | null | false | Splend1dchan | null | Splend1dchan/wav2vec2-large-lv60_byt5-small_nofreeze_bs64_2 | 3 | null | transformers | 22,552 | Entry not found |
ahmeddbahaa/mT5_multilingual_XLSum-finetuned-fa-finetuned-ar | ffddd44c2a41d1841ae7170f54c9e09931e6ba49 | 2022-06-08T22:22:19.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"dataset:xlsum",
"transformers",
"summarization",
"Abstractive Summarization",
"ar",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | summarization | false | ahmeddbahaa | null | ahmeddbahaa/mT5_multilingual_XLSum-finetuned-fa-finetuned-ar | 3 | null | transformers | 22,553 | ---
tags:
- mt5
- summarization
- Abstractive Summarization
- ar
- generated_from_trainer
datasets:
- xlsum
model-index:
- name: mT5_multilingual_XLSum-finetuned-fa-finetuned-ar
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mT5_multilingual_XLSum-finetuned-fa-finetuned-ar
This model is a fine-tuned version of [ahmeddbahaa/mT5_multilingual_XLSum-finetuned-fa](https://huggingface.co/ahmeddbahaa/mT5_multilingual_XLSum-finetuned-fa) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6352
- Rouge-1: 28.69
- Rouge-2: 11.6
- Rouge-l: 24.29
- Gen Len: 41.37
- Bertscore: 73.37
## Model description
More information needed
## Intended uses & limitations
More information needed
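As a starting point, here is a minimal summarization sketch (not from the original author; the article text is a placeholder for an Arabic news story):
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="ahmeddbahaa/mT5_multilingual_XLSum-finetuned-fa-finetuned-ar",
)
article = "..."  # placeholder: an Arabic news article
print(summarizer(article, max_length=64, truncation=True))
```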
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 5
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
ghadeermobasher/WLT-SciBERT-BC5CDR-Disease | 74823dda9362271461cfd6afabe9cbe0096fee09 | 2022-06-09T11:23:22.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ghadeermobasher | null | ghadeermobasher/WLT-SciBERT-BC5CDR-Disease | 3 | null | transformers | 22,554 | Entry not found |
ajtamayoh/NLP-CIC-WFU_Clinical_Cases_NER_Sents_Tokenized_bertin_roberta_base_spanish_fine_tuned | 857e6cb663ab08198dd3b2dcd84846f50bc099c3 | 2022-06-09T17:15:48.000Z | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | ajtamayoh | null | ajtamayoh/NLP-CIC-WFU_Clinical_Cases_NER_Sents_Tokenized_bertin_roberta_base_spanish_fine_tuned | 3 | null | transformers | 22,555 | ---
license: cc-by-4.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: NLP-CIC-WFU_Clinical_Cases_NER_Sents_Tokenized_bertin_roberta_base_spanish_fine_tuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NLP-CIC-WFU_Clinical_Cases_NER_Sents_Tokenized_bertin_roberta_base_spanish_fine_tuned
This model is a fine-tuned version of [bertin-project/bertin-roberta-base-spanish](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0973
- Precision: 0.9012
- Recall: 0.6942
- F1: 0.7842
- Accuracy: 0.9857
## Model description
More information needed
## Intended uses & limitations
More information needed
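As a starting point, here is a minimal usage sketch (not from the original author; the clinical entity label names come from the unspecified fine-tuning corpus, and the sentence is illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ajtamayoh/NLP-CIC-WFU_Clinical_Cases_NER_Sents_Tokenized_bertin_roberta_base_spanish_fine_tuned",
    aggregation_strategy="simple",
)
print(ner("Paciente de 54 años con antecedentes de hipertensión arterial."))
```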
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0605 | 1.0 | 2568 | 0.0625 | 0.9400 | 0.6322 | 0.7560 | 0.9836 |
| 0.0475 | 2.0 | 5136 | 0.0622 | 0.9533 | 0.6572 | 0.7781 | 0.9849 |
| 0.0374 | 3.0 | 7704 | 0.0552 | 0.9261 | 0.6784 | 0.7831 | 0.9855 |
| 0.0246 | 4.0 | 10272 | 0.0693 | 0.9381 | 0.6658 | 0.7788 | 0.9849 |
| 0.0126 | 5.0 | 12840 | 0.0974 | 0.8918 | 0.6830 | 0.7735 | 0.9849 |
| 0.0061 | 6.0 | 15408 | 0.0886 | 0.8771 | 0.7099 | 0.7847 | 0.9850 |
| 0.0031 | 7.0 | 17976 | 0.0973 | 0.9012 | 0.6942 | 0.7842 | 0.9857 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
helliun/primary-secondary | 7d7a4e641d05f609c6083c5d49b9e8f6528c6a3f | 2022-06-09T18:49:37.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | helliun | null | helliun/primary-secondary | 3 | null | transformers | 22,556 | Entry not found |
simecek/ZebrafishDNADeberta | ac137e5049b4552c2139247d0598299eb1973137 | 2022-06-10T05:01:11.000Z | [
"pytorch",
"tensorboard",
"deberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | simecek | null | simecek/ZebrafishDNADeberta | 3 | null | transformers | 22,557 | Entry not found |
BigSalmon/InformalToFormalLincoln51 | 2b8f4e64f22de673e9832af1a2df81ea85fb6363 | 2022-06-10T02:22:40.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/InformalToFormalLincoln51 | 3 | null | transformers | 22,558 | ```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln51")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln51")
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- nebraska
- unicamerical legislature
- different from federal house and senate
text: featuring a unicameral legislature, nebraska's political system stands in stark contrast to the federal model, comprised of a house and senate.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
Keywords to sentences or sentence.
```
ngos are characterized by:
□ voluntary citizens' group that is organized on a local, national or international level
□ encourage political participation
□ often serve humanitarian functions
□ work for social, economic, or environmental change
***
what are the drawbacks of living near an airbnb?
□ noise
□ parking
□ traffic
□ security
□ strangers
***
```
```
original: musicals generally use spoken dialogue as well as songs to convey the story. operas are usually fully sung.
adapted: musicals generally use spoken dialogue as well as songs to convey the story. ( in a stark departure / on the other hand / in contrast / by comparison / at odds with this practice / far from being alike / in defiance of this standard / running counter to this convention ), operas are usually fully sung.
***
original: akoya and tahitian are types of pearls. akoya pearls are mostly white, and tahitian pearls are naturally dark.
adapted: akoya and tahitian are types of pearls. ( a far cry from being indistinguishable / easily distinguished / on closer inspection / setting them apart / not to be mistaken for one another / hardly an instance of mere synonymy / differentiating the two ), akoya pearls are mostly white, and tahitian pearls are naturally dark.
***
original:
```
```
original: had trouble deciding.
translated into journalism speak: wrestled with the question, agonized over the matter, furrowed their brows in contemplation.
***
original:
```
```
input: not loyal
1800s english: ( two-faced / inimical / perfidious / duplicitous / mendacious / double-dealing / shifty ).
***
input:
``` |
huggingtweets/mcdonalds | 61a17b5207f82f4ac9b4892aa3e278223251cff9 | 2022-06-10T03:58:18.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/mcdonalds | 3 | null | transformers | 22,559 | ---
language: en
thumbnail: http://www.huggingtweets.com/mcdonalds/1654833493693/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1513918204325728257/5-R-x-P__400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">McDonald's</div>
<div style="text-align: center; font-size: 14px;">@mcdonalds</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from McDonald's.
| Data | McDonald's |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 0 |
| Short tweets | 15 |
| Tweets kept | 3235 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2pc5eknt/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mcdonalds's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2utcnhg8) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2utcnhg8/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mcdonalds')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
flood/distilbert-base-uncased-finetuned-clinc | 0943ed19cec73dd587924eb9a2216675aef056ca | 2022-06-10T07:21:47.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | flood | null | flood/distilbert-base-uncased-finetuned-clinc | 3 | null | transformers | 22,560 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9161290322580645
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7793
- Accuracy: 0.9161
## Model description
More information needed
## Intended uses & limitations
More information needed
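As a starting point, here is a minimal usage sketch (not from the original author; the returned label corresponds to a clinc_oos / CLINC150 intent class):
```python
from transformers import pipeline

intent_classifier = pipeline(
    "text-classification",
    model="flood/distilbert-base-uncased-finetuned-clinc",
)
print(intent_classifier("Please transfer 100 dollars to my savings account."))
```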
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2926 | 1.0 | 318 | 3.2834 | 0.7374 |
| 2.6259 | 2.0 | 636 | 1.8736 | 0.8303 |
| 1.5511 | 3.0 | 954 | 1.1612 | 0.8913 |
| 1.0185 | 4.0 | 1272 | 0.8625 | 0.91 |
| 0.8046 | 5.0 | 1590 | 0.7793 | 0.9161 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
KoichiYasuoka/deberta-large-japanese-unidic | 9675b5636968956a2deaee015a815dcee45dc4d5 | 2022-06-19T00:15:35.000Z | [
"pytorch",
"deberta-v2",
"fill-mask",
"ja",
"transformers",
"japanese",
"masked-lm",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | KoichiYasuoka | null | KoichiYasuoka/deberta-large-japanese-unidic | 3 | null | transformers | 22,561 | ---
language:
- "ja"
tags:
- "japanese"
- "masked-lm"
license: "cc-by-sa-4.0"
pipeline_tag: "fill-mask"
mask_token: "[MASK]"
widget:
- text: "日本に着いたら[MASK]を訪ねなさい。"
---
# deberta-large-japanese-unidic
## Model Description
This is a DeBERTa(V2) model pre-trained on 青空文庫 (Aozora Bunko) texts with BertJapaneseTokenizer. You can fine-tune `deberta-large-japanese-unidic` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/deberta-large-japanese-unidic-luw-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/deberta-large-japanese-unidic-ud-head), and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-large-japanese-unidic")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/deberta-large-japanese-unidic")
```
[fugashi](https://pypi.org/project/fugashi) and [unidic-lite](https://pypi.org/project/unidic-lite) are required.
|
ajsmith201/t5-base-finetuned-bias-99c3c657 | 8b306affda3a81d7e79a427b015e93ad75fe9898 | 2022-06-10T13:27:13.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ajsmith201 | null | ajsmith201/t5-base-finetuned-bias-99c3c657 | 3 | null | transformers | 22,562 | Entry not found |
Splend1dchan/wav2vec2-large-lv60_byt5-small_textdecoderonly_bs64 | 26bd9af05fabf879a0eddbe7151c0886fb5e1ff7 | 2022-06-13T02:46:21.000Z | [
"pytorch",
"speechmix",
"transformers"
] | null | false | Splend1dchan | null | Splend1dchan/wav2vec2-large-lv60_byt5-small_textdecoderonly_bs64 | 3 | null | transformers | 22,563 | Entry not found |
LDD/bert_wwm_new | 6fc8434c34857affa6fccf925f1d7c3902e05518 | 2022-06-14T05:44:42.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | LDD | null | LDD/bert_wwm_new | 3 | null | transformers | 22,564 | 在chinese-bert-wwm的基础上进行新闻语料库的增量预训练,token采用的是hfl/chinese-bert-wwm-ext |
edumunozsala/vit_base-224-in21k-ft-cifar100 | 130d4381d7783d872a65ff6bcb77101cb92ea5f9 | 2022-07-29T09:20:17.000Z | [
"pytorch",
"vit",
"image-classification",
"es",
"dataset:cifar100",
"arxiv:2006.03677",
"transformers",
"sagemaker",
"ImageClassification",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | edumunozsala | null | edumunozsala/vit_base-224-in21k-ft-cifar100 | 3 | null | transformers | 22,565 | ---
language: es
tags:
- sagemaker
- vit
- ImageClassification
- generated_from_trainer
license: apache-2.0
datasets:
- cifar100
metrics:
- accuracy
model-index:
- name: vit_base-224-in21k-ft-cifar100
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: "Cifar100"
type: cifar100
metrics:
- name: Accuracy
type: accuracy
value: 0.9148
---
# Model vit_base-224-in21k-ft-cifar100
## **A fine-tuned model for image classification on CIFAR-100**
This model was trained using Amazon SageMaker and the Hugging Face Deep Learning container.
The base model is **Vision Transformer (base-sized model)**, a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. [Link to base model](https://huggingface.co/google/vit-base-patch16-224-in21k)
## Base model citation
### BibTeX entry and citation info
```bibtex
@misc{wu2020visual,
title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision},
author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda},
year={2020},
eprint={2006.03677},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
## Dataset
[Link to dataset description](http://www.cs.toronto.edu/~kriz/cifar.html)
The CIFAR-10 and CIFAR-100 are labeled subsets of the 80 million tiny images dataset. They were collected by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton.
The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images.
This dataset, CIFAR-100, is just like CIFAR-10, except it has 100 classes containing 600 images each. There are 500 training images and 100 testing images per class. The 100 classes in the CIFAR-100 are grouped into 20 superclasses. Each image comes with a "fine" label (the class to which it belongs) and a "coarse" label (the superclass to which it belongs).
Sizes of datasets:
- Train dataset: 50,000
- Test dataset: 10,000
## Intended uses & limitations
This model is intented for Image Classification.
## Hyperparameters
```json
{
    "epochs": "5",
    "train_batch_size": "32",
    "eval_batch_size": "8",
    "fp16": "true",
    "learning_rate": "1e-05"
}
```
## Test results
- Accuracy = 0.9148
## Model in action
### Usage for Image Classification
```python
from transformers import ViTFeatureExtractor, ViTForImageClassification
from PIL import Image
import requests

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

# preprocessing follows the base ViT checkpoint
feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224-in21k')
# load the fine-tuned checkpoint with its CIFAR-100 classification head
model = ViTForImageClassification.from_pretrained('edumunozsala/vit_base-224-in21k-ft-cifar100')

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
predicted_class = outputs.logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```
Created by [Eduardo Muñoz/@edumunozsala](https://github.com/edumunozsala)
|
Sebabrata/lmv2ubiai-pan8doc-06-11 | 0daa25e506ec555fc846d108080d29e930218b99 | 2022-06-11T12:25:03.000Z | [
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"transformers",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | Sebabrata | null | Sebabrata/lmv2ubiai-pan8doc-06-11 | 3 | null | transformers | 22,566 | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: lmv2ubiai-pan8doc-06-11
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lmv2ubiai-pan8doc-06-11
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9633
- Dob Precision: 1.0
- Dob Recall: 1.0
- Dob F1: 1.0
- Dob Number: 2
- Fname Precision: 0.6667
- Fname Recall: 1.0
- Fname F1: 0.8
- Fname Number: 2
- Name Precision: 1.0
- Name Recall: 1.0
- Name F1: 1.0
- Name Number: 2
- Pan Precision: 1.0
- Pan Recall: 1.0
- Pan F1: 1.0
- Pan Number: 2
- Overall Precision: 0.8889
- Overall Recall: 1.0
- Overall F1: 0.9412
- Overall Accuracy: 0.9821
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Dob Precision | Dob Recall | Dob F1 | Dob Number | Fname Precision | Fname Recall | Fname F1 | Fname Number | Name Precision | Name Recall | Name F1 | Name Number | Pan Precision | Pan Recall | Pan F1 | Pan Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------:|:----------:|:------:|:----------:|:---------------:|:------------:|:--------:|:------------:|:--------------:|:-----------:|:-------:|:-----------:|:-------------:|:----------:|:------:|:----------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 2.1195 | 1.0 | 6 | 1.7519 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 0.7857 |
| 1.6994 | 2.0 | 12 | 1.5117 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 0.7857 |
| 1.5521 | 3.0 | 18 | 1.4130 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 0.7857 |
| 1.4726 | 4.0 | 24 | 1.3410 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 0.7857 |
| 1.395 | 5.0 | 30 | 1.2693 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 0.7857 |
| 1.3131 | 6.0 | 36 | 1.2079 | 1.0 | 1.0 | 1.0 | 2 | 0.1667 | 0.5 | 0.25 | 2 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 2 | 0.3 | 0.375 | 0.3333 | 0.8929 |
| 1.2474 | 7.0 | 42 | 1.1495 | 1.0 | 1.0 | 1.0 | 2 | 0.2 | 0.5 | 0.2857 | 2 | 0.0 | 0.0 | 0.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.4167 | 0.625 | 0.5 | 0.9286 |
| 1.1869 | 8.0 | 48 | 1.0942 | 1.0 | 1.0 | 1.0 | 2 | 0.2 | 0.5 | 0.2857 | 2 | 0.0 | 0.0 | 0.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.4167 | 0.625 | 0.5 | 0.9286 |
| 1.1369 | 9.0 | 54 | 1.0453 | 1.0 | 1.0 | 1.0 | 2 | 0.4 | 1.0 | 0.5714 | 2 | 0.0 | 0.0 | 0.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.5455 | 0.75 | 0.6316 | 0.9464 |
| 1.0882 | 10.0 | 60 | 1.0054 | 1.0 | 1.0 | 1.0 | 2 | 0.5 | 1.0 | 0.6667 | 2 | 0.5 | 0.5 | 0.5 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.7 | 0.875 | 0.7778 | 0.9643 |
| 1.0482 | 11.0 | 66 | 0.9633 | 1.0 | 1.0 | 1.0 | 2 | 0.6667 | 1.0 | 0.8 | 2 | 1.0 | 1.0 | 1.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.8889 | 1.0 | 0.9412 | 0.9821 |
| 1.017 | 12.0 | 72 | 0.9368 | 1.0 | 1.0 | 1.0 | 2 | 0.6667 | 1.0 | 0.8 | 2 | 1.0 | 1.0 | 1.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.8889 | 1.0 | 0.9412 | 0.9643 |
| 0.9825 | 13.0 | 78 | 0.9139 | 1.0 | 1.0 | 1.0 | 2 | 0.6667 | 1.0 | 0.8 | 2 | 1.0 | 1.0 | 1.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.8889 | 1.0 | 0.9412 | 0.9821 |
| 0.9459 | 14.0 | 84 | 0.8837 | 1.0 | 1.0 | 1.0 | 2 | 0.6667 | 1.0 | 0.8 | 2 | 1.0 | 1.0 | 1.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.8889 | 1.0 | 0.9412 | 0.9643 |
| 0.9155 | 15.0 | 90 | 0.8472 | 1.0 | 1.0 | 1.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.5 | 0.5 | 0.5 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.875 | 0.875 | 0.875 | 0.9643 |
| 0.8819 | 16.0 | 96 | 0.8231 | 1.0 | 1.0 | 1.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.5 | 0.5 | 0.5 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.875 | 0.875 | 0.875 | 0.9643 |
| 0.8523 | 17.0 | 102 | 0.7957 | 1.0 | 1.0 | 1.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.6667 | 1.0 | 0.8 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.8889 | 1.0 | 0.9412 | 0.9821 |
| 0.8251 | 18.0 | 108 | 0.7681 | 1.0 | 1.0 | 1.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.5 | 0.5 | 0.5 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.875 | 0.875 | 0.875 | 0.9643 |
| 0.7982 | 19.0 | 114 | 0.7533 | 1.0 | 1.0 | 1.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.5 | 0.5 | 0.5 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.875 | 0.875 | 0.875 | 0.9643 |
| 0.7762 | 20.0 | 120 | 0.7283 | 1.0 | 1.0 | 1.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.5 | 0.5 | 0.5 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.875 | 0.875 | 0.875 | 0.9643 |
| 0.7558 | 21.0 | 126 | 0.7114 | 1.0 | 1.0 | 1.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.5 | 0.5 | 0.5 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.875 | 0.875 | 0.875 | 0.9643 |
| 0.7346 | 22.0 | 132 | 0.6889 | 1.0 | 1.0 | 1.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.5 | 0.5 | 0.5 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.875 | 0.875 | 0.875 | 0.9643 |
| 0.7116 | 23.0 | 138 | 0.6697 | 1.0 | 1.0 | 1.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.5 | 0.5 | 0.5 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.875 | 0.875 | 0.875 | 0.9643 |
| 0.6898 | 24.0 | 144 | 0.6593 | 1.0 | 1.0 | 1.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.5 | 0.5 | 0.5 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.875 | 0.875 | 0.875 | 0.9643 |
| 0.6748 | 25.0 | 150 | 0.6356 | 1.0 | 1.0 | 1.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.5 | 0.5 | 0.5 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.875 | 0.875 | 0.875 | 0.9643 |
| 0.6487 | 26.0 | 156 | 0.6142 | 1.0 | 1.0 | 1.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.5 | 0.5 | 0.5 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.875 | 0.875 | 0.875 | 0.9643 |
| 0.6312 | 27.0 | 162 | 0.6008 | 1.0 | 1.0 | 1.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.5 | 0.5 | 0.5 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.875 | 0.875 | 0.875 | 0.9643 |
| 0.6156 | 28.0 | 168 | 0.5855 | 1.0 | 1.0 | 1.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.5 | 0.5 | 0.5 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.875 | 0.875 | 0.875 | 0.9643 |
| 0.5961 | 29.0 | 174 | 0.5625 | 1.0 | 1.0 | 1.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.5 | 0.5 | 0.5 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.875 | 0.875 | 0.875 | 0.9643 |
| 0.5781 | 30.0 | 180 | 0.5553 | 1.0 | 1.0 | 1.0 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.5 | 0.5 | 0.5 | 2 | 1.0 | 1.0 | 1.0 | 2 | 0.875 | 0.875 | 0.875 | 0.9643 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
LDD/bert_mlm_new2 | 21c6bb13976c3ca352397e0b62dad7ab6cf3c1f9 | 2022-06-14T05:43:18.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | LDD | null | LDD/bert_mlm_new2 | 3 | null | transformers | 22,567 | A model incrementally pre-trained on a news corpus on top of bert-base-chinese; the tokenizer used is bert-base-chinese
Model
When the model is exported, it produces the config.json and pytorch_model.bin parameter files.
Tokenizer
This is the process of converting plain text into encodings. Note that the Tokenizer does not turn words into word vectors; it only tokenizes the plain text, adds the [MASK], [SEP], and [CLS] markers, and converts the tokens into dictionary indices. When the Tokenizer class is exported, it is split into three files:
- vocab.txt: the vocabulary file, with one word or word piece per line
- special_tokens_map.json: how the special tokens are defined
- tokenizer_config.json: the configuration file, mainly storing special settings
All of the model's tokenizers are implemented in PreTrainedTokenizer, and the tokenization output mainly contains:
- "input ids": as the name suggests, the indices of the tokens in the vocabulary
- "token type ids": the encoding that distinguishes two sentences
- "attention mask": specifies which tokens self-attention is applied to
- "overflowing tokens": the tokens that overflow when a maximum length is specified
- "num truncated tokens": the number of overflowing tokens
- "return special tokens mask": if special tokens are added, this is a [0,1] list where 0 marks the specially added tokens and 1 marks the sequence tokens
evangeloc/t5-small-finetuned-xsum | 4a260c27711ff90e39fe343cc17058177de7ddec | 2022-06-12T07:32:54.000Z | [
"pytorch",
"tf",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_keras_callback",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | evangeloc | null | evangeloc/t5-small-finetuned-xsum | 3 | null | transformers | 22,568 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: evangeloc/t5-small-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# evangeloc/t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.7203
- Validation Loss: 2.4006
- Train Rouge1: 28.1689
- Train Rouge2: 7.9798
- Train Rougel: 22.6998
- Train Rougelsum: 22.7228
- Train Gen Len: 18.865
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch |
|:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:|
| 2.7203 | 2.4006 | 28.1689 | 7.9798 | 22.6998 | 22.7228 | 18.865 | 0 |
### Framework versions
- Transformers 4.19.4
- TensorFlow 2.8.2
- Datasets 2.2.2
- Tokenizers 0.12.1
|
panapelli/RobertaModel | 6b8a3b160641e25cdc974cce4c532450d60f5ea8 | 2022-06-11T21:31:03.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | panapelli | null | panapelli/RobertaModel | 3 | null | transformers | 22,569 | Entry not found |
Averium/DialoGPT-medium-TailsBot | 62ad9b8db7bced13974a8625e16d8a35fa59fd41 | 2022-06-16T23:58:53.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Averium | null | Averium/DialoGPT-medium-TailsBot | 3 | null | transformers | 22,570 | ---
tags:
- conversational
---
# Miles Prower DialoGPT Model |
AnonymousSub/fpdm_roberta_soup_model | 7bfa68e60415ddf4bda976f917d7f8b7842c5ec4 | 2022-06-12T13:33:34.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/fpdm_roberta_soup_model | 3 | null | transformers | 22,571 | Entry not found |
AnonymousSub/fpdm_roberta_soup_model_squad2.0 | 0cfadf712be202c37267b4cc68c301bb25b3185d | 2022-06-12T15:16:32.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | AnonymousSub | null | AnonymousSub/fpdm_roberta_soup_model_squad2.0 | 3 | null | transformers | 22,572 | Entry not found |
anu24/distilbert-base-uncased-finetuned-squad | be097acb07caa19b5e5f98b21f74165f72f25dfc | 2022-07-10T14:24:22.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | anu24 | null | anu24/distilbert-base-uncased-finetuned-squad | 3 | null | transformers | 22,573 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1503
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2109 | 1.0 | 5533 | 1.1506 |
| 0.9581 | 2.0 | 11066 | 1.1300 |
| 0.7508 | 3.0 | 16599 | 1.1503 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
IDEA-CCNL/Taiyi-Roberta-124M-D-v2 | ed7cbcdde9fe0a920f51d6599183950a330b410c | 2022-06-14T01:49:51.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"en",
"transformers",
"mutlimodal",
"exbert",
"license:apache-2.0"
] | feature-extraction | false | IDEA-CCNL | null | IDEA-CCNL/Taiyi-Roberta-124M-D-v2 | 3 | null | transformers | 22,574 | ---
language:
- en
license: apache-2.0
tags:
- roberta
- mutlimodal
- exbert
inference: false
---
# Taiyi-Roberta-124M-D-v2 model (English)
Based on pre-trained Roberta-base, we introduce multimodal information.
For multimodal pre-training tasks, we design several special training objectives in our paper.
Our code and details of pre-training tasks will be made publicly available upon paper acceptance.
This is the second version of Taiyi-Roberta-124M-D.
The pre-training datasets are MSCOCO, VG and SBU. "D" implies a special training method.
# Taiyi (太乙)
Taiyi models are a branch of the Fengshenbang (封神榜) series of models.
The models in Taiyi are pre-trained with multimodal pre-training strategies.
# Usage
```python
from transformers import RobertaTokenizer, RobertaModel
tokenizer = RobertaTokenizer.from_pretrained("IDEA-CCNL/Taiyi-Roberta-124M-D-v2")
model = RobertaModel.from_pretrained("IDEA-CCNL/Taiyi-Roberta-124M-D-v2")
```
# GLUE
| Task | MNLI | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE |
|---------------------------------|------|------|------|-------|------|-------|------|------|
| Roberta-base (official) | 87.6 | 91.9 | 92.8 | 94.8 | 63.6 | 91.2 | 90.2 | 78.7 |
| Roberta-base (local) | 87.0 | 91.3 | 92.5 | 94.2 | 62.8 | 90.6 | 92.9 | 78.0 |
| Taiyi-Roberta-124M-D (local) | 87.1 | 91.8 | 92.3 | 94.5 | 62.6 | 90.4 | 92.4 | 78.7 |
| Taiyi-Roberta-124M-D-v2 (local) | 87.1 | 91.9 | 92.4 | 94.5 | 65.5 | 91.0 | 93.0 | 79.8 |
The local test settings are:
Sequence length: 128, Batch size: 32, Learning rate: 3e-5
# Citation
If you find the resource is useful, please cite the following website in your paper.
```
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2022},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` |
Splend1dchan/wav2vec2-large-lv60_t5lephone-small_textdecoderonly_bs64 | a51dc78a996ce23ed34c014c6938be9d0919e688 | 2022-06-15T01:37:27.000Z | [
"pytorch",
"speechmix",
"transformers"
] | null | false | Splend1dchan | null | Splend1dchan/wav2vec2-large-lv60_t5lephone-small_textdecoderonly_bs64 | 3 | null | transformers | 22,575 | Entry not found |
PSW/samsum_reverse_train_distilbart_xsum_9-6_min10max2000_topp0.6_topk30_epoch3 | 609abb64e337f0c4a565c3c2ddf3a07c5c90a16c | 2022-06-13T08:10:42.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/samsum_reverse_train_distilbart_xsum_9-6_min10max2000_topp0.6_topk30_epoch3 | 3 | null | transformers | 22,576 | Entry not found |
PSW/samsum_reverse_train_distilbart_xsum_9-6_min10max2000_topp0.6_topk20_epoch3 | a33c763246d722d5888fde44a0b175730b1533a5 | 2022-06-13T09:43:59.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/samsum_reverse_train_distilbart_xsum_9-6_min10max2000_topp0.6_topk20_epoch3 | 3 | null | transformers | 22,577 | Entry not found |
PSW/samsum_reverse_train_distilbart_xsum_9-6_min10max2000_topp0.5_topk50_epoch3 | 172318d9d2bea3bcfe07bc3208170e4061a449ad | 2022-06-13T11:18:05.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/samsum_reverse_train_distilbart_xsum_9-6_min10max2000_topp0.5_topk50_epoch3 | 3 | null | transformers | 22,578 | Entry not found |
PSW/samsum_reverse_train_distilbart_xsum_9-6_min10max2000_topp0.5_topk40_epoch3 | f23e69b0efe9742b87c131f0e921279e1d445074 | 2022-06-13T12:51:35.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/samsum_reverse_train_distilbart_xsum_9-6_min10max2000_topp0.5_topk40_epoch3 | 3 | null | transformers | 22,579 | Entry not found |
JeremiahZ/roberta-base-mrpc | 9cae29b5783a394582ff96ac29a3c6a8e0a4b4fc | 2022-06-13T13:49:41.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | JeremiahZ | null | JeremiahZ/roberta-base-mrpc | 3 | null | transformers | 22,580 | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: roberta-base-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.9019607843137255
- name: F1
type: f1
value: 0.9295774647887324
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-mrpc
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4898
- Accuracy: 0.9020
- F1: 0.9296
- Combined Score: 0.9158
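A minimal usage sketch for paraphrase detection with this checkpoint; the sentence pair is made up and the label names should be read from the model config.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("JeremiahZ/roberta-base-mrpc")
model = AutoModelForSequenceClassification.from_pretrained("JeremiahZ/roberta-base-mrpc")

inputs = tokenizer("The company acquired the startup.",
                   "The startup was bought by the company.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```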
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
PSW/samsum_reverse_train_distilbart_xsum_9-6_min10max2000_topp0.5_topk30_epoch3 | 369bf822b841b77c8877f6b67acb23af7b3680e8 | 2022-06-13T14:24:56.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/samsum_reverse_train_distilbart_xsum_9-6_min10max2000_topp0.5_topk30_epoch3 | 3 | null | transformers | 22,581 | Entry not found |
Fdu4e/oryzhach | 0380c05955df57e24c1c84cc0b6f258ef703bd21 | 2022-06-14T18:07:17.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | Fdu4e | null | Fdu4e/oryzhach | 3 | null | transformers | 22,582 | Entry not found |
eslamxm/mt5-base-finetuned-en-cnn | 52a35a57022c5fa98253b57a1596714d97ce925c | 2022-06-14T06:15:18.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"summarization",
"en",
"Abstractive Summarization",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | eslamxm | null | eslamxm/mt5-base-finetuned-en-cnn | 3 | null | transformers | 22,583 | ---
license: apache-2.0
tags:
- summarization
- en
- mt5
- Abstractive Summarization
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: mt5-base-finetuned-en-cnn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-en-cnn
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1286
- Rouge-1: 22.84
- Rouge-2: 10.11
- Rouge-l: 21.8
- Gen Len: 19.0
- Bertscore: 87.12
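A minimal generation sketch for this summarizer; the placeholder article and the decoding settings (beam size, max length) are illustrative, not the settings used for the scores above.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("eslamxm/mt5-base-finetuned-en-cnn")
model = AutoModelForSeq2SeqLM.from_pretrained("eslamxm/mt5-base-finetuned-en-cnn")

article = "..."  # an English news article to summarize
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```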
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 5
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
JeremiahZ/roberta-base-mnli | a852739baf468858f8a0227cca4b164bbca9b932 | 2022-06-14T03:59:44.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | JeremiahZ | null | JeremiahZ/roberta-base-mnli | 3 | null | transformers | 22,584 | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
model-index:
- name: roberta-base-mnli
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-mnli
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base/) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.7539
- eval_accuracy: 0.8697
- eval_runtime: 25.5655
- eval_samples_per_second: 384.581
- eval_steps_per_second: 48.073
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 10.0
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
micamorales/bertin-NLI-abs | d89c9048515c81f80345afcd0b220b580b987a72 | 2022-06-14T18:53:59.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | micamorales | null | micamorales/bertin-NLI-abs | 3 | null | transformers | 22,585 | Entry not found |
jkhan447/sarcasm-detection-RoBerta-base-CR-POS | 1757cd5120442c728e3bc6d51a860bac59f47a52 | 2022-06-14T16:55:38.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | jkhan447 | null | jkhan447/sarcasm-detection-RoBerta-base-CR-POS | 3 | null | transformers | 22,586 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sarcasm-detection-RoBerta-base-CR-POS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sarcasm-detection-RoBerta-base-CR-POS
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6933
- Accuracy: 0.4977
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
JeremiahZ/roberta-base-cola | f283bc9ccbfbdd4b0433d3a3a5805d3ab8d7954f | 2022-06-14T08:52:16.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | JeremiahZ | null | JeremiahZ/roberta-base-cola | 3 | null | transformers | 22,587 | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: roberta-base-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.6232164195970928
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-cola
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0571
- Matthews Correlation: 0.6232
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5497 | 1.0 | 535 | 0.5504 | 0.4613 |
| 0.3786 | 2.0 | 1070 | 0.4850 | 0.5470 |
| 0.2733 | 3.0 | 1605 | 0.5036 | 0.5792 |
| 0.2204 | 4.0 | 2140 | 0.5532 | 0.6139 |
| 0.164 | 5.0 | 2675 | 0.9516 | 0.5934 |
| 0.1351 | 6.0 | 3210 | 0.9051 | 0.5754 |
| 0.1065 | 7.0 | 3745 | 0.9006 | 0.6161 |
| 0.0874 | 8.0 | 4280 | 0.9457 | 0.6157 |
| 0.0579 | 9.0 | 4815 | 1.0372 | 0.6007 |
| 0.0451 | 10.0 | 5350 | 1.0571 | 0.6232 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
JeremiahZ/roberta-base-rte | 9a2ec0a50256a4bfe23bef2c30f41e0b65c88432 | 2022-06-20T14:02:32.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | JeremiahZ | null | JeremiahZ/roberta-base-rte | 3 | null | transformers | 22,588 | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: roberta-base-rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE RTE
type: glue
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.7978339350180506
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-rte
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5446
- Accuracy: 0.7978
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 156 | 0.7023 | 0.4729 |
| No log | 2.0 | 312 | 0.6356 | 0.6895 |
| No log | 3.0 | 468 | 0.5177 | 0.7617 |
| 0.6131 | 4.0 | 624 | 0.6238 | 0.7473 |
| 0.6131 | 5.0 | 780 | 0.5446 | 0.7978 |
| 0.6131 | 6.0 | 936 | 0.9697 | 0.7545 |
| 0.2528 | 7.0 | 1092 | 1.1004 | 0.7690 |
| 0.2528 | 8.0 | 1248 | 1.1937 | 0.7726 |
| 0.2528 | 9.0 | 1404 | 1.3313 | 0.7726 |
| 0.1073 | 10.0 | 1560 | 1.3534 | 0.7726 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
JeremiahZ/roberta-base-stsb | a74276c24d98c8fa35e52248feadcea51e4e519f | 2022-06-14T10:05:52.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | JeremiahZ | null | JeremiahZ/roberta-base-stsb | 3 | null | transformers | 22,589 | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: roberta-base-stsb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE STSB
type: glue
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: 0.907904999413384
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-stsb
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4155
- Pearson: 0.9101
- Spearmanr: 0.9079
- Combined Score: 0.9090
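A minimal usage sketch; STS-B is a regression task, so the single logit is read as a similarity score (roughly on a 0-5 scale). The sentence pair is made up.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("JeremiahZ/roberta-base-stsb")
model = AutoModelForSequenceClassification.from_pretrained("JeremiahZ/roberta-base-stsb")

inputs = tokenizer("A man is playing a guitar.", "Someone plays the guitar.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)  # higher means more similar
```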
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| No log | 1.0 | 360 | 0.6202 | 0.8787 | 0.8813 | 0.8800 |
| 1.6425 | 2.0 | 720 | 0.4864 | 0.9008 | 0.8992 | 0.9000 |
| 0.3629 | 3.0 | 1080 | 0.4201 | 0.9043 | 0.9016 | 0.9030 |
| 0.3629 | 4.0 | 1440 | 0.4686 | 0.9052 | 0.9003 | 0.9027 |
| 0.2212 | 5.0 | 1800 | 0.4622 | 0.9061 | 0.9031 | 0.9046 |
| 0.1556 | 6.0 | 2160 | 0.3952 | 0.9086 | 0.9065 | 0.9075 |
| 0.1162 | 7.0 | 2520 | 0.4271 | 0.9081 | 0.9070 | 0.9075 |
| 0.1162 | 8.0 | 2880 | 0.4169 | 0.9094 | 0.9075 | 0.9085 |
| 0.0887 | 9.0 | 3240 | 0.4383 | 0.9091 | 0.9074 | 0.9083 |
| 0.0717 | 10.0 | 3600 | 0.4155 | 0.9101 | 0.9079 | 0.9090 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Yama/yamaen | 788161b3a9874cefbaa98af368d59097f41c9cc8 | 2022-06-14T11:35:49.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Yama | null | Yama/yamaen | 3 | null | transformers | 22,590 | Entry not found |
tuni/distilbert-base-uncased-finetuned-mnli | 0826761951a4a76e8fff0242402eed6f13ae9624 | 2022-06-15T12:57:52.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | tuni | null | tuni/distilbert-base-uncased-finetuned-mnli | 3 | null | transformers | 22,591 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.8204788588894549
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-mnli
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6574
- Accuracy: 0.8205
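A minimal usage sketch for natural language inference with this checkpoint; the premise/hypothesis pair is made up, and the exact label order should be taken from the model config.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("tuni/distilbert-base-uncased-finetuned-mnli")
model = AutoModelForSequenceClassification.from_pretrained("tuni/distilbert-base-uncased-finetuned-mnli")

inputs = tokenizer("A man is playing a guitar on stage.",
                   "The man is performing music.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```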
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.5188 | 1.0 | 24544 | 0.4979 | 0.8047 |
| 0.4153 | 2.0 | 49088 | 0.4845 | 0.8147 |
| 0.3008 | 3.0 | 73632 | 0.5631 | 0.8204 |
| 0.2226 | 4.0 | 98176 | 0.6574 | 0.8205 |
| 0.189 | 5.0 | 122720 | 0.8209 | 0.8194 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.0
- Tokenizers 0.12.1
|
Splend1dchan/wav2vec2-large-lv60_t5lephonev2-small_textdecoderonly_bs64 | 91d01c8931b0ae4cfcf728a5f952a474905e5efc | 2022-06-16T00:09:29.000Z | [
"pytorch",
"speechmix",
"transformers"
] | null | false | Splend1dchan | null | Splend1dchan/wav2vec2-large-lv60_t5lephonev2-small_textdecoderonly_bs64 | 3 | null | transformers | 22,592 | Entry not found |
HrayrMSint/bert-base-uncased-issues-128 | 3ef2b7cdfe6084e75c7b1a1a6c08db87cf79d240 | 2022-06-15T10:29:34.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | HrayrMSint | null | HrayrMSint/bert-base-uncased-issues-128 | 3 | null | transformers | 22,593 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-issues-128
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-issues-128
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2432
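Since this checkpoint keeps the masked-language-modeling head, it can be queried directly; a minimal sketch with a made-up issue-style sentence.
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="HrayrMSint/bert-base-uncased-issues-128")
for prediction in fill_mask("The bug happens when I run the [MASK] script."):
    print(prediction["token_str"], round(prediction["score"], 3))
```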
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0987 | 1.0 | 291 | 1.6066 |
| 1.631 | 2.0 | 582 | 1.4775 |
| 1.4933 | 3.0 | 873 | 1.4646 |
| 1.3984 | 4.0 | 1164 | 1.3314 |
| 1.3377 | 5.0 | 1455 | 1.3122 |
| 1.274 | 6.0 | 1746 | 1.2062 |
| 1.2538 | 7.0 | 2037 | 1.2626 |
| 1.192 | 8.0 | 2328 | 1.1832 |
| 1.1612 | 9.0 | 2619 | 1.2055 |
| 1.1489 | 10.0 | 2910 | 1.1605 |
| 1.1262 | 11.0 | 3201 | 1.1925 |
| 1.1022 | 12.0 | 3492 | 1.1309 |
| 1.0892 | 13.0 | 3783 | 1.1692 |
| 1.0812 | 14.0 | 4074 | 1.2384 |
| 1.0666 | 15.0 | 4365 | 1.0822 |
| 1.0533 | 16.0 | 4656 | 1.2432 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0
- Datasets 2.2.2
- Tokenizers 0.10.3
|
jkhan447/sarcasm-detection-Bert-base-uncased-CR-POS | 127b9c92421e8fce22480490d4090cdb438dedfd | 2022-06-15T12:59:12.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | jkhan447 | null | jkhan447/sarcasm-detection-Bert-base-uncased-CR-POS | 3 | null | transformers | 22,594 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sarcasm-detection-Bert-base-uncased-CR-POS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sarcasm-detection-Bert-base-uncased-CR-POS
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1816
- Accuracy: 0.5783
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.0
- Tokenizers 0.12.1
|
jhmin/bert-base-uncased-emotion | beaaf0b58c0a949498d71f41713451e522d7dc8c | 2022-06-15T09:44:02.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | jhmin | null | jhmin/bert-base-uncased-emotion | 3 | null | transformers | 22,595 | Entry not found |
erickfm/fiery-sweep-4 | e8c04373496c24ea11defbe6708d743ffb86cdc3 | 2022-06-15T12:21:29.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | erickfm | null | erickfm/fiery-sweep-4 | 3 | null | transformers | 22,596 | Entry not found |
erickfm/vibrant-sweep-5 | 4847c687c8ac7d3c353df86b5f6c7651d0ee32f0 | 2022-06-15T15:03:46.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | erickfm | null | erickfm/vibrant-sweep-5 | 3 | null | transformers | 22,597 | Entry not found |
erickfm/chocolate-sweep-7 | 67f17c1b3a3e24375e4fa926e616ec01136a2941 | 2022-06-15T18:10:36.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | erickfm | null | erickfm/chocolate-sweep-7 | 3 | null | transformers | 22,598 | Entry not found |
Splend1dchan/wav2vec2-large-lv60_mt5-small_textdecoderonly_bs64 | e120e9d174d8cd63c7cafe5756fbd7969db71451 | 2022-06-17T18:16:55.000Z | [
"pytorch",
"speechmix",
"transformers"
] | null | false | Splend1dchan | null | Splend1dchan/wav2vec2-large-lv60_mt5-small_textdecoderonly_bs64 | 3 | null | transformers | 22,599 | Entry not found |