modelId (string, 4–112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (list) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2–38 chars, nullable) | config (null) | id (string, 4–112 chars) | downloads (float64, 0–36.8M, nullable) | likes (float64, 0–712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0–38.5k) | readme (string, 0–186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
gary109/ai-light-dance_pretrain2_wav2vec2-large-xlsr-53 | ae2de075775ecb6c8ca56777b13cbbb7ee16de53 | 2022-07-22T00:15:32.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"transformers"
]
| null | false | gary109 | null | gary109/ai-light-dance_pretrain2_wav2vec2-large-xlsr-53 | 7 | null | transformers | 14,600 | Entry not found |
Aktsvigun/bart-base_aeslc_8653685 | 77387b5efdd19a28d9584430a84db1d98d28dfa9 | 2022-07-07T15:31:54.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Aktsvigun | null | Aktsvigun/bart-base_aeslc_8653685 | 7 | null | transformers | 14,601 | Entry not found |
Aktsvigun/bart-base_aeslc_4065329 | e6a7919a2b3f695b3b1373f7f144ebc5150e5b9e | 2022-07-07T15:15:40.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Aktsvigun | null | Aktsvigun/bart-base_aeslc_4065329 | 7 | null | transformers | 14,602 | Entry not found |
Aktsvigun/bart-base_aeslc_9478495 | e9fbaffd69bbb1d82df209e2679779a7d0684794 | 2022-07-07T15:40:25.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Aktsvigun | null | Aktsvigun/bart-base_aeslc_9478495 | 7 | null | transformers | 14,603 | Entry not found |
Aktsvigun/bart-base_aeslc_4006598 | cd106029c655cf98567719ea640c45c5fb55903c | 2022-07-07T15:23:32.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Aktsvigun | null | Aktsvigun/bart-base_aeslc_4006598 | 7 | null | transformers | 14,604 | Entry not found |
dminiotas05/distilbert-base-uncased-finetuned-ft500_6class | 560dc5495b9e11111f2c823f408c09704adb0a2c | 2022-07-07T11:11:18.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | dminiotas05 | null | dminiotas05/distilbert-base-uncased-finetuned-ft500_6class | 7 | null | transformers | 14,605 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-ft500_6class
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ft500_6class
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5162
- Accuracy: 0.356
- F1: 0.3347
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
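For reference, a minimal sketch of how these settings would be expressed as 🤗 `TrainingArguments` (an approximation only — the original training script is not part of this card, and the output directory name is illustrative):
```python
from transformers import TrainingArguments

# Approximate reconstruction of the hyperparameters listed above
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-ft500_6class",  # illustrative name
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the default optimizer setting
)
```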
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.579 | 1.0 | 188 | 1.5575 | 0.2933 | 0.2521 |
| 1.4527 | 2.0 | 376 | 1.5043 | 0.3227 | 0.2821 |
| 1.3767 | 3.0 | 564 | 1.4982 | 0.34 | 0.2938 |
| 1.3122 | 4.0 | 752 | 1.4784 | 0.368 | 0.3454 |
| 1.2678 | 5.0 | 940 | 1.5162 | 0.356 | 0.3347 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
cherrypaca/puppies_classify | b54210f63ddf939ba3dc4f39883bef7973d6729c | 2022-07-07T13:25:43.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
]
| image-classification | false | cherrypaca | null | cherrypaca/puppies_classify | 7 | null | transformers | 14,606 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: puppies_classify
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9701492786407471
---
# puppies_classify
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### corgi

#### husky

#### pomeranian
 |
MilaNLProc/hate-ita-xlm-r-base | 2b7a3690840432b375b62edeedf6f861e1133a95 | 2022-07-07T15:32:15.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"it",
"transformers",
"text classification",
"abusive language",
"hate speech",
"offensive language",
"license:mit"
]
| text-classification | false | MilaNLProc | null | MilaNLProc/hate-ita-xlm-r-base | 7 | null | transformers | 14,607 | ---
language: it
license: mit
tags:
- text classification
- abusive language
- hate speech
- offensive language
widget:
- text: "Ci sono dei bellissimi capibara!"
example_title: "Hate Speech Classification 1"
- text: "Sei una testa di cazzo!!"
example_title: "Hate Speech Classification 2"
- text: "Ti odio!"
example_title: "Hate Speech Classification 3"
---
#
[Debora Nozza](http://dnozza.github.io/) •
[Federico Bianchi](https://federicobianchi.io/) •
[Giuseppe Attanasio](https://gattanasio.cc/)
# HATE-ITA Base
HATE-ITA is a binary hate speech classification model for Italian social media text.
<img src="https://raw.githubusercontent.com/MilaNLProc/hate-ita/main/hateita.png?token=GHSAT0AAAAAABTEBAJ4PNDWAMU3KKIGUOCSYWG4IBA" width="200">
## Abstract
Online hate speech is a dangerous phenomenon that can (and should) be promptly and properly counteracted. While Natural Language Processing has been successfully used for this purpose, many of the research efforts are directed toward the English language. This choice severely limits the classification power in non-English languages. In this paper, we test several learning frameworks for identifying hate speech in Italian text. We release **HATE-ITA, a set of multi-language models trained on a large set of English data and available Italian datasets**. HATE-ITA performs better than monolingual models and also seems to adapt well to language-specific slurs. We believe our findings will encourage research in other mid- to low-resource communities and provide a valuable benchmarking tool for the Italian community.
## Model
This model is the fine-tuned version of the [XLM-RoBERTa-base](https://huggingface.co/xlm-roberta-base) model.
| Model | Download |
| ------ | -------------------------|
| `hate-ita` | [Link](https://huggingface.co/MilaNLProc/hate-ita) |
| `hate-ita-xlm-r-base` | [Link](https://huggingface.co/MilaNLProc/hate-ita-xlm-r-base) |
| `hate-ita-xlm-r-large` | [Link](https://huggingface.co/MilaNLProc/hate-ita-xlm-r-large) |
## Usage
```python
from transformers import pipeline
classifier = pipeline("text-classification",model='MilaNLProc/hate-ita-xlm-r-base',top_k=2)
prediction = classifier("ti odio")
print(prediction)
```
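Because the pipeline is created with `top_k=2`, it returns a score for each of the two labels as a list of dictionaries with `label` and `score` keys; the exact label names come from the model's configuration.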
## Citation
Please use the following BibTeX entry if you use this model in your project:
```
@inproceedings{nozza-etal-2022-hate-ita,
title = {{HATE-ITA}: Hate Speech Detection in Italian Social Media Text},
author = "Nozza, Debora and Bianchi, Federico and Attanasio, Giuseppe",
booktitle = "Proceedings of the 6th Workshop on Online Abuse and Harms",
year = "2022",
publisher = "Association for Computational Linguistics"
}
```
## Ethical Statement
While promising, the results in this work should not be interpreted as a definitive assessment of the performance of hate speech detection in Italian. We are unsure whether our model can maintain a stable and fair precision across the different targets and categories. HATE-ITA might overlook some sensitive details, which practitioners should treat with care. |
MilaNLProc/hate-ita-xlm-r-large | 1e87ce28b1459edb2ab81174c536171a62ff11b9 | 2022-07-07T15:32:42.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"it",
"transformers",
"text classification",
"abusive language",
"hate speech",
"offensive language",
"license:mit"
]
| text-classification | false | MilaNLProc | null | MilaNLProc/hate-ita-xlm-r-large | 7 | null | transformers | 14,608 | ---
language: it
license: mit
tags:
- text classification
- abusive language
- hate speech
- offensive language
widget:
- text: "Ci sono dei bellissimi capibara!"
example_title: "Hate Speech Classification 1"
- text: "Sei una testa di cazzo!!"
example_title: "Hate Speech Classification 2"
- text: "Ti odio!"
example_title: "Hate Speech Classification 3"
---
#
[Debora Nozza](http://dnozza.github.io/) •
[Federico Bianchi](https://federicobianchi.io/) •
[Giuseppe Attanasio](https://gattanasio.cc/)
# HATE-ITA Large
HATE-ITA is a binary hate speech classification model for Italian social media text.
<img src="https://raw.githubusercontent.com/MilaNLProc/hate-ita/main/hateita.png?token=GHSAT0AAAAAABTEBAJ4PNDWAMU3KKIGUOCSYWG4IBA" width="200">
## Abstract
Online hate speech is a dangerous phenomenon that can (and should) be promptly and properly counteracted. While Natural Language Processing has been successfully used for this purpose, many of the research efforts are directed toward the English language. This choice severely limits the classification power in non-English languages. In this paper, we test several learning frameworks for identifying hate speech in Italian text. We release **HATE-ITA, a set of multi-language models trained on a large set of English data and available Italian datasets**. HATE-ITA performs better than monolingual models and also seems to adapt well to language-specific slurs. We believe our findings will encourage research in other mid- to low-resource communities and provide a valuable benchmarking tool for the Italian community.
## Model
This model is the fine-tuned version of the [XLM-RoBERTa-large](https://huggingface.co/xlm-roberta-large) model.
| Model | Download |
| ------ | -------------------------|
| `hate-ita` | [Link](https://huggingface.co/MilaNLProc/hate-ita) |
| `hate-ita-xlm-r-base` | [Link](https://huggingface.co/MilaNLProc/hate-ita-xlm-r-base) |
| `hate-ita-xlm-r-large` | [Link](https://huggingface.co/MilaNLProc/hate-ita-xlm-r-large) |
## Usage
```python
from transformers import pipeline
classifier = pipeline("text-classification",model='MilaNLProc/hate-ita-xlm-r-large',top_k=2)
prediction = classifier("ti odio")
print(prediction)
```
## Citation
Please use the following BibTeX entry if you use this model in your project:
```
@inproceedings{nozza-etal-2022-hate-ita,
title = {{HATE-ITA}: Hate Speech Detection in Italian Social Media Text},
author = "Nozza, Debora and Bianchi, Federico and Attanasio, Giuseppe",
booktitle = "Proceedings of the 6th Workshop on Online Abuse and Harms",
year = "2022",
publisher = "Association for Computational Linguistics"
}
```
## Ethical Statement
While promising, the results in this work should not be interpreted as a definitive assessment of the performance of hate speech detection in Italian. We are unsure whether our model can maintain a stable and fair precision across the different targets and categories. HATE-ITA might overlook some sensitive details, which practitioners should treat with care. |
gemasphi/laprador_trained | d192f50685e5e12d72e87b2a0a96a1a3460b12a3 | 2022-07-07T14:25:10.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
]
| sentence-similarity | false | gemasphi | null | gemasphi/laprador_trained | 7 | null | sentence-transformers | 14,609 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# gemasphi/laprador_trained
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('gemasphi/laprador_trained')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('gemasphi/laprador_trained')
model = AutoModel.from_pretrained('gemasphi/laprador_trained')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=gemasphi/laprador_trained)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Aayesha/t5-end2end-questions-generation | 54bf3f7394e87fdff070988e954c1c8e14dad195 | 2022-07-09T19:40:26.000Z | [
"pytorch",
"t5",
"text2text-generation",
"dataset:squad_modified_for_t5_qg",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | Aayesha | null | Aayesha/t5-end2end-questions-generation | 7 | null | transformers | 14,610 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_modified_for_t5_qg
model-index:
- name: t5-end2end-questions-generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-end2end-questions-generation
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the squad_modified_for_t5_qg dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8015
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.609 | 0.34 | 100 | 1.9542 |
| 2.0336 | 0.68 | 200 | 1.8015 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
juanna/kogpt2_krpoem | b7349ade6e4073ce772b02afa3f2e57435b5a5f1 | 2022-07-07T16:41:49.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | juanna | null | juanna/kogpt2_krpoem | 7 | null | transformers | 14,611 | Entry not found |
gemasphi/laprador_untrained | b24c648c171ad4dbce99acbb4edfe380e835057a | 2022-07-07T15:20:10.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
]
| sentence-similarity | false | gemasphi | null | gemasphi/laprador_untrained | 7 | null | sentence-transformers | 14,612 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# gemasphi/laprador_untrained
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('gemasphi/laprador_untrained')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('gemasphi/laprador_untrained')
model = AutoModel.from_pretrained('gemasphi/laprador_untrained')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=gemasphi/laprador_untrained)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Mascariddu8/distilbert-base-uncased-finetuned-imdb | 331f600440d88b0d12429bbfa391d79ee285af23 | 2022-07-07T17:47:28.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| fill-mask | false | Mascariddu8 | null | Mascariddu8/distilbert-base-uncased-finetuned-imdb | 7 | null | transformers | 14,613 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4897 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
huggingtweets/fairytale_bot23 | 9e2b7a50858808da5e977ad6828b188864fbf50c | 2022-07-07T21:44:10.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/fairytale_bot23 | 7 | null | transformers | 14,614 | ---
language: en
thumbnail: http://www.huggingtweets.com/fairytale_bot23/1657230245911/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1486954631464771591/cwgDTNXD_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Fairytale Generator</div>
<div style="text-align: center; font-size: 14px;">@fairytale_bot23</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Fairytale Generator.
| Data | Fairytale Generator |
| --- | --- |
| Tweets downloaded | 315 |
| Retweets | 0 |
| Short tweets | 0 |
| Tweets kept | 315 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/lznwr8t9/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @fairytale_bot23's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2hjhfq1n) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2hjhfq1n/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/fairytale_bot23')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
liamliang/demographics_race_v2 | 347f61674ade89e29ea829150b3bfa254089dc06 | 2022-07-07T21:54:43.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | liamliang | null | liamliang/demographics_race_v2 | 7 | null | transformers | 14,615 | Entry not found |
rahuldebdas79/finetuning-sentiment-model-3000-samples | 6a80d05356ac8d6698b3ee5605a6c11a06c3af1b | 2022-07-18T18:40:24.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | rahuldebdas79 | null | rahuldebdas79/finetuning-sentiment-model-3000-samples | 7 | null | transformers | 14,616 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8666666666666667
- name: F1
type: f1
value: 0.8684210526315789
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3157
- Accuracy: 0.8667
- F1: 0.8684
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Rohit129/NER_multiconer22 | b48510a215ca9f5ca1a7948cbb528bc422c810b1 | 2022-07-08T16:05:14.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | Rohit129 | null | Rohit129/NER_multiconer22 | 7 | null | transformers | 14,617 | Entry not found |
jonatasgrosman/exp_w2v2t_fr_wavlm_s766 | d354cd94e485f19db13f4d2b4b50eb2dfa2f0d6d | 2022-07-09T00:37:51.000Z | [
"pytorch",
"wavlm",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
]
| automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2t_fr_wavlm_s766 | 7 | null | transformers | 14,618 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fr_wavlm_s766
Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
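A minimal transcription sketch using the HuggingSound tool mentioned above (the audio file paths are placeholders; make sure the files meet the 16kHz requirement noted above, or that your loading pipeline resamples them):
```python
from huggingsound import SpeechRecognitionModel

# Load the fine-tuned checkpoint described in this card
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_fr_wavlm_s766")

audio_paths = ["/path/to/sample1.wav", "/path/to/sample2.mp3"]  # placeholder paths

transcriptions = model.transcribe(audio_paths)
for result in transcriptions:
    print(result["transcription"])
```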
|
dgrinwald/swin-tiny-patch4-window7-224-finetuned-eurosat | bbd3546d725c1546f19f54dbf06eb6da2e61adb2 | 2022-07-09T20:17:28.000Z | [
"pytorch",
"tensorboard",
"swin",
"image-classification",
"dataset:imagefolder",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| image-classification | false | dgrinwald | null | dgrinwald/swin-tiny-patch4-window7-224-finetuned-eurosat | 7 | null | transformers | 14,619 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8464730290456431
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3266
- Accuracy: 0.8465
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2941 | 1.0 | 17 | 1.1717 | 0.4689 |
| 1.0655 | 2.0 | 34 | 0.9397 | 0.5560 |
| 0.8008 | 3.0 | 51 | 0.6153 | 0.7303 |
| 0.7204 | 4.0 | 68 | 0.5665 | 0.7427 |
| 0.6931 | 5.0 | 85 | 0.4670 | 0.7801 |
| 0.6277 | 6.0 | 102 | 0.4328 | 0.8465 |
| 0.5689 | 7.0 | 119 | 0.4078 | 0.8174 |
| 0.6103 | 8.0 | 136 | 0.4060 | 0.8091 |
| 0.5501 | 9.0 | 153 | 0.4842 | 0.7884 |
| 0.6018 | 10.0 | 170 | 0.3780 | 0.8423 |
| 0.5668 | 11.0 | 187 | 0.3551 | 0.8631 |
| 0.5192 | 12.0 | 204 | 0.4514 | 0.8216 |
| 0.5133 | 13.0 | 221 | 0.3598 | 0.8174 |
| 0.5753 | 14.0 | 238 | 0.4172 | 0.8091 |
| 0.4833 | 15.0 | 255 | 0.4685 | 0.8050 |
| 0.5546 | 16.0 | 272 | 0.4474 | 0.7842 |
| 0.5179 | 17.0 | 289 | 0.4570 | 0.7884 |
| 0.5017 | 18.0 | 306 | 0.4218 | 0.8050 |
| 0.4808 | 19.0 | 323 | 0.4094 | 0.8050 |
| 0.4708 | 20.0 | 340 | 0.4693 | 0.7759 |
| 0.5033 | 21.0 | 357 | 0.3141 | 0.8672 |
| 0.4859 | 22.0 | 374 | 0.3687 | 0.8257 |
| 0.516 | 23.0 | 391 | 0.3819 | 0.8216 |
| 0.4822 | 24.0 | 408 | 0.3391 | 0.8506 |
| 0.4748 | 25.0 | 425 | 0.3281 | 0.8506 |
| 0.4914 | 26.0 | 442 | 0.3308 | 0.8631 |
| 0.4354 | 27.0 | 459 | 0.3859 | 0.8133 |
| 0.4297 | 28.0 | 476 | 0.3761 | 0.8133 |
| 0.4747 | 29.0 | 493 | 0.2914 | 0.8672 |
| 0.4395 | 30.0 | 510 | 0.3025 | 0.8548 |
| 0.4279 | 31.0 | 527 | 0.3314 | 0.8506 |
| 0.4327 | 32.0 | 544 | 0.4626 | 0.7842 |
| 0.446 | 33.0 | 561 | 0.3499 | 0.8382 |
| 0.4011 | 34.0 | 578 | 0.3408 | 0.8465 |
| 0.4418 | 35.0 | 595 | 0.3159 | 0.8589 |
| 0.484 | 36.0 | 612 | 0.3130 | 0.8548 |
| 0.4119 | 37.0 | 629 | 0.2899 | 0.8589 |
| 0.4453 | 38.0 | 646 | 0.3200 | 0.8465 |
| 0.4074 | 39.0 | 663 | 0.3493 | 0.8465 |
| 0.3937 | 40.0 | 680 | 0.3003 | 0.8672 |
| 0.4222 | 41.0 | 697 | 0.3547 | 0.8299 |
| 0.3922 | 42.0 | 714 | 0.3206 | 0.8589 |
| 0.3973 | 43.0 | 731 | 0.4074 | 0.8133 |
| 0.4118 | 44.0 | 748 | 0.3147 | 0.8589 |
| 0.4088 | 45.0 | 765 | 0.3393 | 0.8506 |
| 0.3635 | 46.0 | 782 | 0.3584 | 0.8257 |
| 0.403 | 47.0 | 799 | 0.3240 | 0.8506 |
| 0.3943 | 48.0 | 816 | 0.3536 | 0.8216 |
| 0.4085 | 49.0 | 833 | 0.3270 | 0.8465 |
| 0.3865 | 50.0 | 850 | 0.3266 | 0.8465 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
marifulhaque/wav2vec2-large-xls-r-300m-turkish-colab | ddba20e6fa28f8d77cabbc643c6d139fb1efac1c | 2022-07-28T03:03:45.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | marifulhaque | null | marifulhaque/wav2vec2-large-xls-r-300m-turkish-colab | 7 | null | transformers | 14,620 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4411
- Wer: 0.3271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.8286 | 3.67 | 400 | 0.6899 | 0.7462 |
| 0.4378 | 7.34 | 800 | 0.4803 | 0.5127 |
| 0.2073 | 11.01 | 1200 | 0.4640 | 0.4584 |
| 0.1386 | 14.68 | 1600 | 0.4355 | 0.4252 |
| 0.1058 | 18.35 | 2000 | 0.4476 | 0.3789 |
| 0.0819 | 22.02 | 2400 | 0.4248 | 0.3543 |
| 0.0666 | 25.69 | 2800 | 0.4276 | 0.3399 |
| 0.0525 | 29.36 | 3200 | 0.4411 | 0.3271 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
ryo0634/luke-base-comp-concat-20181220 | 332a027d9fd160450a1bd94be29d67f313f0d9c7 | 2022-07-09T15:47:45.000Z | [
"pytorch",
"luke",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | ryo0634 | null | ryo0634/luke-base-comp-concat-20181220 | 7 | null | transformers | 14,621 | Entry not found |
NAACL2022/spider | 581ed03a9a3b1901466b9f8430c952799153a418 | 2022-07-09T19:11:45.000Z | [
"pytorch",
"dpr",
"arxiv:2112.07708",
"transformers"
]
| null | false | NAACL2022 | null | NAACL2022/spider | 7 | 4 | transformers | 14,622 | # Spider
This is the unsupervised pretrained model discussed in our paper [Learning to Retrieve Passages without Supervision](https://arxiv.org/abs/2112.07708).
## Usage
We used weight sharing for the query encoder and the passage encoder, so the same model should be used for both.
**Note!** We format passages similarly to DPR, i.e. the title and the text are separated by a `[SEP]` token, but the token type ids are all zeros.
An example usage:
```python
from transformers import AutoTokenizer, DPRContextEncoder
tokenizer = AutoTokenizer.from_pretrained("tau/spider")
model = DPRContextEncoder.from_pretrained("tau/spider")
input_dict = tokenizer("title", "text", return_tensors="pt")
del input_dict["token_type_ids"]
outputs = model(**input_dict)
```
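The pooled passage representation can then be read from `outputs.pooler_output`. Since the encoders share weights (see the note above), queries are encoded with the same model, and query–passage similarity would typically be scored with a dot product, as in standard DPR-style dense retrieval (an assumption; the paper describes the exact retrieval setup).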
|
jonatasgrosman/exp_w2v2t_fa_wav2vec2_s321 | ebd01c42bd9945ba8971a7809e715080d8cebd0e | 2022-07-09T19:41:14.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fa",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
]
| automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2t_fa_wav2vec2_s321 | 7 | null | transformers | 14,623 | ---
language:
- fa
license: apache-2.0
tags:
- automatic-speech-recognition
- fa
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fa_wav2vec2_s321
Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition using the train split of [Common Voice 7.0 (fa)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_fa_wavlm_s779 | e523fac9049560b545e29ea4b61201d870d50f14 | 2022-07-09T22:40:13.000Z | [
"pytorch",
"wavlm",
"automatic-speech-recognition",
"fa",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
]
| automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2t_fa_wavlm_s779 | 7 | null | transformers | 14,624 | ---
language:
- fa
license: apache-2.0
tags:
- automatic-speech-recognition
- fa
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fa_wavlm_s779
Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition using the train split of [Common Voice 7.0 (fa)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
ArnavL/roberta-realnews-agnews-0 | de75299ce01086d731e1976799bc936f8bd25da2 | 2022-07-10T09:10:39.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | ArnavL | null | ArnavL/roberta-realnews-agnews-0 | 7 | null | transformers | 14,625 | Entry not found |
jonatasgrosman/exp_w2v2t_zh-cn_wavlm_s677 | c718301a11d6611002f943bd4ab1421a1b553dfa | 2022-07-10T01:36:46.000Z | [
"pytorch",
"wavlm",
"automatic-speech-recognition",
"zh-CN",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
]
| automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2t_zh-cn_wavlm_s677 | 7 | null | transformers | 14,626 | ---
language:
- zh-CN
license: apache-2.0
tags:
- automatic-speech-recognition
- zh-CN
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_zh-cn_wavlm_s677
Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition using the train split of [Common Voice 7.0 (zh-CN)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
faebots/image-gpt2 | 3f4511ade28f0f025a6458c32314ab3fb9edeb5b | 2022-07-16T01:24:29.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | faebots | null | faebots/image-gpt2 | 7 | null | transformers | 14,627 | Entry not found |
ShooterRon/mt5-small_summarization | 3886538695dab70d98547a1d3a0872d2eff6010c | 2022-07-10T15:19:23.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | ShooterRon | null | ShooterRon/mt5-small_summarization | 7 | null | transformers | 14,628 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small_summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small_summarization
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1774
- Rouge1: 18.2118
- Rouge2: 6.6244
- Rougel: 15.4682
- Rougelsum: 15.3942
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 17.7253 | 1.0 | 50 | 7.6921 | 6.677 | 1.1111 | 6.5586 | 6.6861 |
| 9.8457 | 2.0 | 100 | 4.5604 | 12.8991 | 1.9103 | 11.2559 | 10.9036 |
| 6.2403 | 3.0 | 150 | 3.9071 | 16.463 | 4.0695 | 14.3098 | 14.4065 |
| 5.2032 | 4.0 | 200 | 3.4869 | 17.6601 | 4.0878 | 14.2931 | 14.2743 |
| 4.8331 | 5.0 | 250 | 3.3472 | 18.5241 | 5.3312 | 15.8993 | 16.0559 |
| 4.526 | 6.0 | 300 | 3.2346 | 19.0264 | 5.7839 | 15.8013 | 16.1208 |
| 4.5378 | 7.0 | 350 | 3.1927 | 18.9843 | 6.992 | 16.3787 | 16.3574 |
| 4.3278 | 8.0 | 400 | 3.1774 | 18.2118 | 6.6244 | 15.4682 | 15.3942 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|
harr/my-awesome-model | 9a4ea12b0721439f3746ba0797e9b3d2603b203e | 2022-07-10T13:31:20.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"transformers"
]
| feature-extraction | false | harr | null | harr/my-awesome-model | 7 | null | transformers | 14,629 | Entry not found |
ryo0634/luke-base-comp-20201201 | 558a7354398d891886c4a2aeafa7890da7ceda99 | 2022-07-11T04:27:06.000Z | [
"pytorch",
"luke",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | ryo0634 | null | ryo0634/luke-base-comp-20201201 | 7 | null | transformers | 14,630 | Entry not found |
jonatasgrosman/exp_w2v2t_nl_vp-it_s449 | e1f77af9f940a26a146e66ab97bd8ef8a011adf8 | 2022-07-11T07:20:08.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"nl",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
]
| automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2t_nl_vp-it_s449 | 7 | null | transformers | 14,631 | ---
language:
- nl
license: apache-2.0
tags:
- automatic-speech-recognition
- nl
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_nl_vp-it_s449
Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (nl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_ru_xls-r_s635 | af37c16f405eed4000898da2882cca9734c09a13 | 2022-07-11T09:42:39.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ru",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
]
| automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2t_ru_xls-r_s635 | 7 | null | transformers | 14,632 | ---
language:
- ru
license: apache-2.0
tags:
- automatic-speech-recognition
- ru
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_ru_xls-r_s635
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (ru)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
rajkumarrrk/gpt-2-fine-tuned-on-cnn-dm | 6924a13bf906ffa1450796940ac40ee92cc87bdd | 2022-07-11T11:36:42.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"license:apache-2.0"
]
| text-generation | false | rajkumarrrk | null | rajkumarrrk/gpt-2-fine-tuned-on-cnn-dm | 7 | null | transformers | 14,633 | ---
license: apache-2.0
---
GPT-2 fine-tuned on the CNN/DM summarization dataset.
Training args:
- learning_rate: 0.0001
- logging_steps: 5000
- lr_scheduler_type: cosine
- num_train_epochs: 2
- per_device_train_batch_size: 12 (total batch size: 36)
- weight_decay: 0.1

Generation kwargs: do_sample: true, max_new_tokens: 100, min_length: 50

Pre-processing: each article is truncated to its first 500 tokens.
Post-processing: only the first three sentences of the generated text are kept as the summary.
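A minimal usage sketch putting the pieces above together (assumptions: the article text is fed directly as the prompt, since the card does not state how articles and summaries were joined during fine-tuning, and the naive sentence split below only illustrates the post-processing step):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "rajkumarrrk/gpt-2-fine-tuned-on-cnn-dm"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

article = "..."  # a CNN/DM-style news article

# Pre-processing: keep only the first 500 tokens of the article
inputs = tokenizer(article, truncation=True, max_length=500, return_tensors="pt")

# Generation kwargs from the card
output_ids = model.generate(
    **inputs,
    do_sample=True,
    max_new_tokens=100,
    min_length=50,
)

# Decode only the newly generated continuation
generated = tokenizer.decode(
    output_ids[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)

# Post-processing: keep the first three sentences as the summary (naive split)
summary = ". ".join(generated.split(". ")[:3])
print(summary)
```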
Test split metrics:
Meteor: 0.2562237219960531\
Rouge1: 0.3754558158439447\
Rouge2: 0.15532626375157227\
RougeL: 0.25813023509572597\
RougeLsum: 0.3489472885043494\
BLEU: 0.09285941365815623\
Bert_score: 0.87570951795246\
|
KeLiu/QETRA_Python | bfbb3b9551746f660a1d0493b2908dca2253a968 | 2022-07-11T14:39:54.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | KeLiu | null | KeLiu/QETRA_Python | 7 | null | transformers | 14,634 | Entry not found |
Sahara/finetuning-sentiment-model-3000-samples | a5adeb94a59f90eae02f94dfead931ff9b139d9a | 2022-07-11T19:23:33.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Sahara | null | Sahara/finetuning-sentiment-model-3000-samples | 7 | null | transformers | 14,635 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8533333333333334
- name: F1
type: f1
value: 0.8562091503267975
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3322
- Accuracy: 0.8533
- F1: 0.8562
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
paola-md/recipe-distilbert-s | 8ba0aad03662448d8d2d344522153626c5629816 | 2022-07-12T04:54:03.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| fill-mask | false | paola-md | null | paola-md/recipe-distilbert-s | 7 | null | transformers | 14,636 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-distilbert-s
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-distilbert-s
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0321
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.8594 | 1.0 | 844 | 1.4751 |
| 1.4763 | 2.0 | 1688 | 1.3282 |
| 1.3664 | 3.0 | 2532 | 1.2553 |
| 1.2975 | 4.0 | 3376 | 1.2093 |
| 1.2543 | 5.0 | 4220 | 1.1667 |
| 1.2189 | 6.0 | 5064 | 1.1472 |
| 1.1944 | 7.0 | 5908 | 1.1251 |
| 1.1737 | 8.0 | 6752 | 1.1018 |
| 1.1549 | 9.0 | 7596 | 1.0950 |
| 1.1387 | 10.0 | 8440 | 1.0796 |
| 1.1295 | 11.0 | 9284 | 1.0713 |
| 1.1166 | 12.0 | 10128 | 1.0639 |
| 1.1078 | 13.0 | 10972 | 1.0485 |
| 1.099 | 14.0 | 11816 | 1.0431 |
| 1.0951 | 15.0 | 12660 | 1.0425 |
| 1.0874 | 16.0 | 13504 | 1.0323 |
| 1.0828 | 17.0 | 14348 | 1.0368 |
| 1.0802 | 18.0 | 15192 | 1.0339 |
| 1.0798 | 19.0 | 16036 | 1.0247 |
| 1.0758 | 20.0 | 16880 | 1.0321 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
paola-md/recipe-distilbert-upper-tIs | 8541d61105c8c4b83eceed734745b300ffc1ac5c | 2022-07-12T10:28:07.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| fill-mask | false | paola-md | null | paola-md/recipe-distilbert-upper-tIs | 7 | null | transformers | 14,637 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-distilbert-upper-tIs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-distilbert-upper-tIs
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8746
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.67 | 1.0 | 1353 | 1.2945 |
| 1.2965 | 2.0 | 2706 | 1.1547 |
| 1.1904 | 3.0 | 4059 | 1.0846 |
| 1.1272 | 4.0 | 5412 | 1.0407 |
| 1.0857 | 5.0 | 6765 | 1.0039 |
| 1.0549 | 6.0 | 8118 | 0.9802 |
| 1.03 | 7.0 | 9471 | 0.9660 |
| 1.01 | 8.0 | 10824 | 0.9474 |
| 0.9931 | 9.0 | 12177 | 0.9365 |
| 0.9807 | 10.0 | 13530 | 0.9252 |
| 0.9691 | 11.0 | 14883 | 0.9105 |
| 0.9601 | 12.0 | 16236 | 0.9079 |
| 0.9503 | 13.0 | 17589 | 0.8979 |
| 0.9436 | 14.0 | 18942 | 0.8930 |
| 0.9371 | 15.0 | 20295 | 0.8875 |
| 0.9322 | 16.0 | 21648 | 0.8851 |
| 0.9279 | 17.0 | 23001 | 0.8801 |
| 0.9254 | 18.0 | 24354 | 0.8812 |
| 0.9227 | 19.0 | 25707 | 0.8768 |
| 0.9232 | 20.0 | 27060 | 0.8746 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
ppsingh/bert-base-uncased-finetuned-osdg | 610e7ad7a8b52b2e22bce70f4bcbbda732b3b6a0 | 2022-07-12T13:26:00.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | ppsingh | null | ppsingh/bert-base-uncased-finetuned-osdg | 7 | null | transformers | 14,638 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-osdg
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-osdg
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5716
- F1 Score: 0.8359
- Accuracy: 0.8726
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| 0.6505 | 1.0 | 830 | 0.5662 | 0.8140 | 0.8577 |
| 0.4115 | 2.0 | 1660 | 0.5699 | 0.8249 | 0.8625 |
| 0.2334 | 3.0 | 2490 | 0.5716 | 0.8359 | 0.8726 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
huggingtweets/piotrikonowicz1 | 28810c715d8d89dfff8c6a01d4fca17555874fa4 | 2022-07-12T14:00:31.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/piotrikonowicz1 | 7 | null | transformers | 14,639 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/770622589664460802/bgUHfTNZ_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Piotr Ikonowicz</div>
<div style="text-align: center; font-size: 14px;">@piotrikonowicz1</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Piotr Ikonowicz.
| Data | Piotr Ikonowicz |
| --- | --- |
| Tweets downloaded | 133 |
| Retweets | 3 |
| Short tweets | 13 |
| Tweets kept | 117 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/156jwrd1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @piotrikonowicz1's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/w029u281) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/w029u281/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/piotrikonowicz1')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/reillocity | d6b9c3e8386f5b00906fd63886a9b9c2b0d018e2 | 2022-07-25T06:40:47.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/reillocity | 7 | null | transformers | 14,640 | ---
language: en
thumbnail: http://www.huggingtweets.com/reillocity/1658731242865/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1268284452586700800/BtFzXFsw_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Matt Collier</div>
<div style="text-align: center; font-size: 14px;">@reillocity</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Matt Collier.
| Data | Matt Collier |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 35 |
| Short tweets | 38 |
| Tweets kept | 3177 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/20sr7og7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @reillocity's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3i5czu5f) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3i5czu5f/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/reillocity')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ariesutiono/scibert-lm-const-finetuned-20 | 17069c17096c3da87ad9ae066e29bd565a1a7ad0 | 2022-07-13T00:15:55.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| fill-mask | false | ariesutiono | null | ariesutiono/scibert-lm-const-finetuned-20 | 7 | null | transformers | 14,641 | ---
tags:
- generated_from_trainer
datasets:
- conll2003
model-index:
- name: scibert-lm-const-finetuned-20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# scibert-lm-const-finetuned-20
This model is a fine-tuned version of [allenai/scibert_scivocab_cased](https://huggingface.co/allenai/scibert_scivocab_cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0099
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.6081 | 1.0 | 118 | 2.9156 |
| 2.7954 | 2.0 | 236 | 2.5940 |
| 2.5762 | 3.0 | 354 | 2.5017 |
| 2.4384 | 4.0 | 472 | 2.3923 |
| 2.3391 | 5.0 | 590 | 2.2996 |
| 2.2417 | 6.0 | 708 | 2.3180 |
| 2.2161 | 7.0 | 826 | 2.2336 |
| 2.1918 | 8.0 | 944 | 2.2465 |
| 2.1494 | 9.0 | 1062 | 2.1871 |
| 2.1215 | 10.0 | 1180 | 2.1566 |
| 2.1015 | 11.0 | 1298 | 2.1849 |
| 2.05 | 12.0 | 1416 | 2.1092 |
| 2.0653 | 13.0 | 1534 | 2.2221 |
| 2.0261 | 14.0 | 1652 | 2.1572 |
| 2.0117 | 15.0 | 1770 | 2.1452 |
| 1.9845 | 16.0 | 1888 | 2.1433 |
| 1.9791 | 17.0 | 2006 | 2.1225 |
| 1.9979 | 18.0 | 2124 | 2.0777 |
| 1.9688 | 19.0 | 2242 | 2.1765 |
| 1.9873 | 20.0 | 2360 | 2.0099 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
annahaz/xlm-roberta-base-misogyny-sexism-out-of-sample-test-opt-EN | 0ad3ff538a075813eb8a09e5773d1d579f3514fe | 2022-07-13T01:19:02.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | annahaz | null | annahaz/xlm-roberta-base-misogyny-sexism-out-of-sample-test-opt-EN | 7 | null | transformers | 14,642 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: xlm-roberta-base-misogyny-sexism-out-of-sample-test-opt-EN
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-misogyny-sexism-out-of-sample-test-opt-EN
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7564
- Accuracy: 0.8640
- F1: 0.6845
- Precision: 0.5877
- Recall: 0.8197
- Mae: 0.1360
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Mae |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:------:|
| 0.3793 | 1.0 | 2395 | 0.3475 | 0.8460 | 0.6309 | 0.5550 | 0.7309 | 0.1540 |
| 0.3471 | 2.0 | 4790 | 0.3255 | 0.8580 | 0.6526 | 0.5830 | 0.7411 | 0.1420 |
| 0.3075 | 3.0 | 7185 | 0.3426 | 0.8379 | 0.6451 | 0.5324 | 0.8183 | 0.1621 |
| 0.2634 | 4.0 | 9580 | 0.3034 | 0.8856 | 0.7112 | 0.6521 | 0.7821 | 0.1144 |
| 0.2439 | 5.0 | 11975 | 0.4210 | 0.8656 | 0.6844 | 0.5928 | 0.8094 | 0.1344 |
| 0.2212 | 6.0 | 14370 | 0.5260 | 0.8698 | 0.6904 | 0.6035 | 0.8067 | 0.1302 |
| 0.1855 | 7.0 | 16765 | 0.5626 | 0.8739 | 0.6967 | 0.6146 | 0.8040 | 0.1261 |
| 0.1666 | 8.0 | 19160 | 0.6727 | 0.8647 | 0.6834 | 0.5905 | 0.8108 | 0.1353 |
| 0.147 | 9.0 | 21555 | 0.6287 | 0.8743 | 0.6962 | 0.6163 | 0.7999 | 0.1257 |
| 0.1367 | 10.0 | 23950 | 0.7564 | 0.8640 | 0.6845 | 0.5877 | 0.8197 | 0.1360 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.9.0+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
|
srini98/distilbert-base-uncased-finetuned-clinic | 2bf5cab19281cc5ab1c501b7cd4c160814b5b05e | 2022-07-13T04:12:40.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | srini98 | null | srini98/distilbert-base-uncased-finetuned-clinic | 7 | null | transformers | 14,643 | Entry not found |
abx/bert-finetuned-ner | 65ae06c307fa4884db98d655e59365427c88136f | 2022-07-13T06:15:23.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | abx | null | abx/bert-finetuned-ner | 7 | null | transformers | 14,644 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9341713529606351
- name: Recall
type: recall
value: 0.9505217098619994
- name: F1
type: f1
value: 0.9422756089422756
- name: Accuracy
type: accuracy
value: 0.9861070230176017
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0623
- Precision: 0.9342
- Recall: 0.9505
- F1: 0.9423
- Accuracy: 0.9861
## Model description
More information needed
## Intended uses & limitations
More information needed
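A minimal inference sketch is shown below; the example sentence and the `aggregation_strategy` setting are illustrative assumptions rather than part of the original training setup.
```python
from transformers import pipeline

# Load the fine-tuned checkpoint with the token-classification pipeline.
ner = pipeline(
    "token-classification",
    model="abx/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

print(ner("Hugging Face is based in New York City."))
```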
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0865 | 1.0 | 1756 | 0.0667 | 0.9166 | 0.9379 | 0.9271 | 0.9829 |
| 0.0397 | 2.0 | 3512 | 0.0560 | 0.9337 | 0.9522 | 0.9428 | 0.9867 |
| 0.0194 | 3.0 | 5268 | 0.0623 | 0.9342 | 0.9505 | 0.9423 | 0.9861 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu116
- Datasets 2.3.2
- Tokenizers 0.12.1
|
morenolq/thext-ai-scibert | b4de7e5ac8d4e99875537582336843177a1863f2 | 2022-07-13T17:01:04.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"transformers",
"regression"
]
| text-classification | false | morenolq | null | morenolq/thext-ai-scibert | 7 | null | transformers | 14,645 | ---
language: "en"
tags:
- bert
- regression
- pytorch
pipeline:
- text-classification
widget:
- text: "We propose a new approach, based on Transformer-based encoding, to highlight extraction. To the best of our knowledge, this is the first attempt to use transformer architectures to address automatic highlight generation. [SEP] Highlights are short sentences used to annotate scientific papers. They complement the abstract content by conveying the main result findings. To automate the process of paper annotation, highlights extraction aims at extracting from 3 to 5 paper sentences via supervised learning. Existing approaches rely on ad hoc linguistic features, which depend on the analyzed context, and apply recurrent neural networks, which are not effective in learning long-range text dependencies. This paper leverages the attention mechanism adopted in transformer models to improve the accuracy of sentence relevance estimation. Unlike existing approaches, it relies on the end-to-end training of a deep regression model. To attend patterns relevant to highlights content it also enriches sentence encodings with a section-level contextualization. The experimental results, achieved on three different benchmark datasets, show that the designed architecture is able to achieve significant performance improvements compared to the state-of-the-art."
- text: "We design a context-aware sentence-level regressor, in which the semantic similarity between candidate sentences and highlights is estimated by also attending the contextual knowledge provided by the other paper sections. [SEP] Highlights are short sentences used to annotate scientific papers. They complement the abstract content by conveying the main result findings. To automate the process of paper annotation, highlights extraction aims at extracting from 3 to 5 paper sentences via supervised learning. Existing approaches rely on ad hoc linguistic features, which depend on the analyzed context, and apply recurrent neural networks, which are not effective in learning long-range text dependencies. This paper leverages the attention mechanism adopted in transformer models to improve the accuracy of sentence relevance estimation. Unlike existing approaches, it relies on the end-to-end training of a deep regression model. To attend patterns relevant to highlights content it also enriches sentence encodings with a section-level contextualization. The experimental results, achieved on three different benchmark datasets, show that the designed architecture is able to achieve significant performance improvements compared to the state-of-the-art."
- text: "Fig. 2, Fig. 3, Fig. 4 show the effect of varying the number K of selected highlights on the extraction performance. As expected, recall values increase while increasing the number of selected highlights, whereas precision values show an opposite trend. [SEP] Highlights are short sentences used to annotate scientific papers. They complement the abstract content by conveying the main result findings. To automate the process of paper annotation, highlights extraction aims at extracting from 3 to 5 paper sentences via supervised learning. Existing approaches rely on ad hoc linguistic features, which depend on the analyzed context, and apply recurrent neural networks, which are not effective in learning long-range text dependencies. This paper leverages the attention mechanism adopted in transformer models to improve the accuracy of sentence relevance estimation. Unlike existing approaches, it relies on the end-to-end training of a deep regression model. To attend patterns relevant to highlights content it also enriches sentence encodings with a section-level contextualization. The experimental results, achieved on three different benchmark datasets, show that the designed architecture is able to achieve significant performance improvements compared to the state-of-the-art."
---
# General Information
This model is trained on journal publications belonging to the domain: **Artificial Intelligence**.
This is an `allenai/scibert_scivocab_cased` model trained in the scientific domain. The model is trained with a regression objective to estimate the relevance of a sentence according to the provided context (e.g., the abstract of the scientific paper).
The model is used in the paper 'Transformer-based highlights extraction from scientific papers' published in Knowledge-Based Systems scientific journal.
The model is able to achieve state-of-the-art performance in the task of highlights extraction from scientific papers.
Access to the full paper: [here](https://doi.org/10.1016/j.knosys.2022.109382).
# Usage:
For detailed usage please use the official repository https://github.com/MorenoLaQuatra/THExt .
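For a quick check outside that repository, a minimal sketch is given below. It assumes the checkpoint can be loaded through `AutoModelForSequenceClassification` with a single regression output and that the candidate sentence is paired with its context (e.g., the abstract), as in the widget examples above; for the exact pipeline, refer to the repository.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "morenolq/thext-ai-scibert"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

sentence = "We propose a new approach, based on Transformer-based encoding, to highlight extraction."
context = "Highlights are short sentences used to annotate scientific papers."

# Encode the candidate sentence together with its context and read the single
# regression logit as a relevance score (higher = more highlight-worthy).
inputs = tokenizer(sentence, context, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    relevance = model(**inputs).logits.squeeze().item()
print(relevance)
```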
# References:
If you find it useful, please cite the following paper:
```bibtex
@article{thext,
title={Transformer-based highlights extraction from scientific papers},
author={La Quatra, Moreno and Cagliero, Luca},
journal={Knowledge-Based Systems},
pages={109382},
year={2022},
publisher={Elsevier}
}
``` |
Hamzaaa/wav2vec2-base-finetuned-crema | 7a523dc420d780d1253cb977e612763ec65bab19 | 2022-07-13T14:13:01.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"transformers"
]
| audio-classification | false | Hamzaaa | null | Hamzaaa/wav2vec2-base-finetuned-crema | 7 | null | transformers | 14,646 | Entry not found |
jpalojarvi/finetuning-sentiment-model-3000-samples | 1c77659b26f187a95f2311746349e4cb6d669b12 | 2022-07-13T14:48:18.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | jpalojarvi | null | jpalojarvi/finetuning-sentiment-model-3000-samples | 7 | null | transformers | 14,647 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.86
- name: F1
type: f1
value: 0.8590604026845637
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3239
- Accuracy: 0.86
- F1: 0.8591
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
NinaXiao/distilroberta-base-finetuned-wikitext2 | dd8dcfb640c906f7bdaad574eb6c335d8c7fd72a | 2022-07-14T07:02:45.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| fill-mask | false | NinaXiao | null | NinaXiao/distilroberta-base-finetuned-wikitext2 | 7 | null | transformers | 14,648 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9947
## Model description
More information needed
## Intended uses & limitations
More information needed
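A minimal fill-mask sketch is shown below; the example sentence is a placeholder, and the `<mask>` token follows the RoBERTa convention.
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="NinaXiao/distilroberta-base-finetuned-wikitext2")

# RoBERTa-style models use "<mask>" as the mask token.
print(fill_mask("The capital of France is <mask>."))
```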
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 285 | 2.0524 |
| 2.2183 | 2.0 | 570 | 1.9742 |
| 2.2183 | 3.0 | 855 | 1.9947 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
ghadeermobasher/Modifiedbluebert_pubmed_uncased_L-12_H-768_A-12-BioRED-Dis-128-32-30 | 491bf01437f110163d8c05b72866952422549f08 | 2022-07-13T18:16:19.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/Modifiedbluebert_pubmed_uncased_L-12_H-768_A-12-BioRED-Dis-128-32-30 | 7 | null | transformers | 14,649 | Entry not found |
ghadeermobasher/OriginalBiomedNLP-bluebert_pubmed_uncased_L-12_H-768_A-12-BioRED_Dis-128-32-30 | 270aafc0b45efe6f7d6899ce88d9a4e1f7891929 | 2022-07-13T19:59:34.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/OriginalBiomedNLP-bluebert_pubmed_uncased_L-12_H-768_A-12-BioRED_Dis-128-32-30 | 7 | null | transformers | 14,650 | Entry not found |
ghadeermobasher/Originalbiobert-v1.1-BioRED-CD-256-16-5 | 9b3e0ff0023589f850ba00e479101937cdf08831 | 2022-07-13T19:49:36.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/Originalbiobert-v1.1-BioRED-CD-256-16-5 | 7 | null | transformers | 14,651 | Entry not found |
jslowik/distilbert-base-uncased-finetuned-emotion | 40e303b070eee4daefeee9141761f28fd37b2471 | 2022-07-14T15:05:25.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | jslowik | null | jslowik/distilbert-base-uncased-finetuned-emotion | 7 | null | transformers | 14,652 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9265
- name: F1
type: f1
value: 0.9262423473736914
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2156
- Accuracy: 0.9265
- F1: 0.9262
## Model description
More information needed
## Intended uses & limitations
More information needed
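A minimal inference sketch is shown below; the example sentence is a placeholder, and the returned label names depend on the label mapping saved with the checkpoint (the emotion dataset has six classes).
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="jslowik/distilbert-base-uncased-finetuned-emotion")

# Returns the predicted emotion label and its score for the input text.
print(classifier("I can't believe how lucky I am today!"))
```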
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.814 | 1.0 | 250 | 0.3075 | 0.907 | 0.9048 |
| 0.2481 | 2.0 | 500 | 0.2156 | 0.9265 | 0.9262 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
eltonpan/codeparrot-ds-2 | be6f8fb26d27f6d70916362902c924143ecf9bd8 | 2022-07-15T07:31:37.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | eltonpan | null | eltonpan/codeparrot-ds-2 | 7 | null | transformers | 14,653 | Entry not found |
KyleYu1054/sound_classification_hubert | a578716ebfa8b52ee9edc3c2c8cdd13f953c18aa | 2022-07-14T23:46:17.000Z | [
"pytorch",
"hubert",
"transformers"
]
| null | false | KyleYu1054 | null | KyleYu1054/sound_classification_hubert | 7 | null | transformers | 14,654 | Entry not found |
Sayan01/tiny-bert-qnli128-distilled | 5200b747c21521f583ef032b2a9308029adadfbc | 2022-07-15T07:26:04.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Sayan01 | null | Sayan01/tiny-bert-qnli128-distilled | 7 | null | transformers | 14,655 | Entry not found |
Ankhitan/segformer-b0-finetuned-segments-sidewalk-11 | e8722a68344636719b08f21629247b38b7d2faea | 2022-07-15T21:08:13.000Z | [
"pytorch",
"segformer",
"transformers"
]
| null | false | Ankhitan | null | Ankhitan/segformer-b0-finetuned-segments-sidewalk-11 | 7 | null | transformers | 14,656 | Entry not found |
Hadjer/distilbert-base-uncased-finetuned-squad | 1a080ce87039254c72734c400c84fecca0ec2a61 | 2022-07-16T09:47:27.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| question-answering | false | Hadjer | null | Hadjer/distilbert-base-uncased-finetuned-squad | 7 | null | transformers | 14,657 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.1564
- eval_runtime: 147.0781
- eval_samples_per_second: 73.322
- eval_steps_per_second: 4.583
- epoch: 1.0
- step: 5533
## Model description
More information needed
## Intended uses & limitations
More information needed
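A minimal question-answering sketch is shown below; the passage and question are placeholders for illustration.
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Hadjer/distilbert-base-uncased-finetuned-squad")

context = "SQuAD is a reading comprehension dataset built from questions posed on Wikipedia articles."
result = qa(question="What is SQuAD built from?", context=context)
print(result["answer"], result["score"])
```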
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
ClassCat/roberta-small-basque | b9da71ab553c597e8028fc455f3f0fca6f7f72dc | 2022-07-19T13:04:27.000Z | [
"pytorch",
"roberta",
"fill-mask",
"eu",
"dataset:cc100",
"dataset:oscar",
"transformers",
"license:cc-by-sa-4.0",
"autotrain_compatible"
]
| fill-mask | false | ClassCat | null | ClassCat/roberta-small-basque | 7 | 1 | transformers | 14,658 | ---
language: eu
license: cc-by-sa-4.0
datasets:
- cc100
- oscar
widget:
- text: "Euria egingo <mask> gaur ?"
- text: "<mask> umeari liburua eman dio."
- text: "Zein da zure <mask> ?"
---
## RoBERTa Basque small model (Uncased)
### Prerequisites
transformers==4.19.2
### Model architecture
This model uses approximately half the number of parameters of the RoBERTa base model.
### Tokenizer
Using BPE tokenizer with vocabulary size 50,000.
### Training Data
* Subset of [CC-100/eu](https://data.statmt.org/cc-100/) : Monolingual Datasets from Web Crawl Data
* Subset of [oscar](https://huggingface.co/datasets/oscar)
### Usage
```python
from transformers import pipeline
unmasker = pipeline('fill-mask', model='ClassCat/roberta-small-basque')
unmasker("Zein da zure <mask> ?")
``` |
ClassCat/gpt2-small-basque-v2 | a1382909ca2e4b10ba4325d93bdfce54a44d7104 | 2022-07-20T12:38:57.000Z | [
"pytorch",
"gpt2",
"text-generation",
"eu",
"dataset:cc100",
"dataset:oscar",
"transformers",
"license:cc-by-sa-4.0"
]
| text-generation | false | ClassCat | null | ClassCat/gpt2-small-basque-v2 | 7 | 1 | transformers | 14,659 | ---
language: eu
license: cc-by-sa-4.0
datasets:
- cc100
- oscar
widget:
- text: "Zein da zure"
- text: "Euria egingo"
- text: "Nola dakizu ?"
---
## GPT2 Basque small model Version 2 (Uncased)
### Prerequisites
transformers==4.19.2
### Model architecture
This model uses approximately half the number of parameters of the GPT2 base model.
### Tokenizer
Using BPE tokenizer with vocabulary size 50,000.
### Training Data
* Subset of [CC-100/eu](https://data.statmt.org/cc-100/) : Monolingual Datasets from Web Crawl Data
* Subset of [oscar](https://huggingface.co/datasets/oscar)
### Usage
```python
from transformers import pipeline
generator = pipeline('text-generation', model='ClassCat/gpt2-small-basque-v2')
generator("Zein da zure ", max_length=50, num_return_sequences=5)
``` |
tanfiona/unicausal-pair-baseline | 0f5275781d846ea154b938d86fcb4c9d060a397d | 2022-07-17T07:17:09.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"transformers",
"license:unknown"
]
| text-classification | false | tanfiona | null | tanfiona/unicausal-pair-baseline | 7 | null | transformers | 14,660 | ---
language: en
license: unknown
widget:
- text: "<ARG1>She fell</ARG1> because <ARG0>he pushed her</ARG0> ."
example_title: "Causal Example 1"
- text: "<ARG0>He pushed her</ARG0> , <ARG1>causing her to fall</ARG1>."
example_title: "Causal Example 2"
- text: "<ARG0>She fell</ARG0> because <ARG1>he pushed her</ARG1> ."
example_title: "Non-causal Example 1"
- text: "<ARG1>He is Billy</ARG1> and <ARG0>he pushed her</ARG0>."
example_title: "Non-causal Example 2"
---
Binary causal sentence classification with argument prompts:
* LABEL_0 = Non-causal
* LABEL_1 = Causal (ARG0 causes ARG1)
Trained on multiple datasets.
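A minimal sketch of querying the model through the text-classification pipeline, reusing the widget examples above:
```python
from transformers import pipeline

clf = pipeline("text-classification", model="tanfiona/unicausal-pair-baseline")

# Expected LABEL_1 (causal): ARG0 (the push) causes ARG1 (the fall).
print(clf("<ARG1>She fell</ARG1> because <ARG0>he pushed her</ARG0> ."))
# Expected LABEL_0 (non-causal): the argument roles are swapped.
print(clf("<ARG0>She fell</ARG0> because <ARG1>he pushed her</ARG1> ."))
```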
For Causal sequences, try swapping the arguments to observe the prediction results. |
ranrinat/distilbert-base-uncased-finetuned-emotion | 192df21ed8bd263daa77d0f5f11ae3c80c3e8131 | 2022-07-17T14:28:45.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | ranrinat | null | ranrinat/distilbert-base-uncased-finetuned-emotion | 7 | null | transformers | 14,661 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.9246080819022496
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2158
- Accuracy: 0.9245
- F1: 0.9246
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8152 | 1.0 | 250 | 0.2994 | 0.9095 | 0.9072 |
| 0.2424 | 2.0 | 500 | 0.2158 | 0.9245 | 0.9246 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
jinwooChoi/KDW_SA_mix_16_1e5 | 7cca1e7c276b12c76eb24f41da531185494d4374 | 2022-07-19T07:11:19.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | false | jinwooChoi | null | jinwooChoi/KDW_SA_mix_16_1e5 | 7 | null | transformers | 14,662 | Entry not found |
shivarama23/swin_4epoch | 5d5ec4b2dc9fb37382e848c8f7f05172c520ee25 | 2022-07-18T09:39:49.000Z | [
"pytorch",
"swin",
"image-classification",
"transformers"
]
| image-classification | false | shivarama23 | null | shivarama23/swin_4epoch | 7 | null | transformers | 14,663 | Entry not found |
claudiovaliense/teste_claudio4 | b0cd4b5624302a9bb870abd53dc00757add166ae | 2022-07-18T15:34:12.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | claudiovaliense | null | claudiovaliense/teste_claudio4 | 7 | null | transformers | 14,664 | Entry not found |
doya/klue-sentiment-everybodyscorpus | f5acecf1d9c3234d79bde528efbebbcf3c53025f | 2022-07-18T16:09:13.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | doya | null | doya/klue-sentiment-everybodyscorpus | 7 | null | transformers | 14,665 | Entry not found |
pnr-svc/DistilBert-Sentiment-Analysis-Turkish | b47d1fdafba58fb9f87aea6f3c16bd00d21bd11c | 2022-07-18T18:38:46.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | pnr-svc | null | pnr-svc/DistilBert-Sentiment-Analysis-Turkish | 7 | null | transformers | 14,666 | Entry not found |
vencortexTeam/autotrain-CompanyDescription-1149642380 | 03c663ae361b9223c8b80ee9b77ff91fd6085fdf | 2022-07-19T15:24:12.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:vencortexTeam/autotrain-data-CompanyDescription",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
]
| text2text-generation | false | vencortexTeam | null | vencortexTeam/autotrain-CompanyDescription-1149642380 | 7 | null | transformers | 14,667 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- vencortexTeam/autotrain-data-CompanyDescription
co2_eq_emissions: 4.803822525731932
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 1149642380
- CO2 Emissions (in grams): 4.803822525731932
## Validation Metrics
- Loss: 1.1474181413650513
- Rouge1: 57.8827
- Rouge2: 46.6881
- RougeL: 56.4209
- RougeLsum: 56.4665
- Gen Len: 18.0731
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/vencortexTeam/autotrain-CompanyDescription-1149642380
``` |
jinwooChoi/KDW_SA_base_mix_48_1e5 | 6eb30586a4e5a83467b108baa3c298ada5bea40c | 2022-07-19T07:05:25.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | false | jinwooChoi | null | jinwooChoi/KDW_SA_base_mix_48_1e5 | 7 | null | transformers | 14,668 | Entry not found |
roscazo/Covid-conv-v1 | 14114c19a213403584c5b2cd1c875353bc172f38 | 2022-07-19T21:03:09.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | roscazo | null | roscazo/Covid-conv-v1 | 7 | null | transformers | 14,669 | Entry not found |
abdulmatinomotoso/multi_news_headline | 2d70633f769a6cdc8f90da32ed39d318ef531e8d | 2022-07-19T23:50:20.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | abdulmatinomotoso | null | abdulmatinomotoso/multi_news_headline | 7 | null | transformers | 14,670 | ---
tags:
- generated_from_trainer
model-index:
- name: multi_news_headline
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multi_news_headline
This model is a fine-tuned version of [google/pegasus-multi_news](https://huggingface.co/google/pegasus-multi_news) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 7.0830
## Model description
More information needed
## Intended uses & limitations
More information needed
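A minimal headline-generation sketch is shown below; the article text and the generation lengths are placeholders.
```python
from transformers import pipeline

headline = pipeline("summarization", model="abdulmatinomotoso/multi_news_headline")

article = (
    "The central bank raised interest rates for the third time this year, "
    "citing persistent inflation and a tight labor market."
)
# Generate a short, headline-length summary of the article.
print(headline(article, max_length=24, min_length=5)[0]["summary_text"])
```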
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.3316 | 0.53 | 100 | 7.0830 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Kwaku/gpt2-finetuned-banking77 | a372a4dc9604ac5b9f4c3f402b297eeadd8adbf5 | 2022-07-21T20:21:55.000Z | [
"pytorch",
"gpt2",
"text-generation",
"eng",
"dataset:banking77",
"transformers"
]
| text-generation | false | Kwaku | null | Kwaku/gpt2-finetuned-banking77 | 7 | null | transformers | 14,671 | ---
language: eng
datasets:
- banking77
---
# GPT2 Fine-Tuned Banking 77
This is a fine-tuned version of the GPT2 model. It's best suited for text-generation.
## Model Description
gpt2-finetuned-banking77 was fine-tuned on the [banking77](https://huggingface.co/datasets/banking77) dataset, which is "composed of online banking queries annotated with their corresponding intents."
## Intended Uses and Limitations
Given the size of the [Microsoft DialoGPT-large](https://huggingface.co/microsoft/DialoGPT-large) model, the author opted to fine-tune the smaller GPT-2 model to build a chatbot. The intent was for the chatbot to emulate a banking customer-service agent, hence the use of the banking77 dataset. However, when the fine-tuned model was deployed in the chatbot, the results were undesirable: its responses were inappropriate and unnecessarily long, and the last word of a response is often repeated many times over, a major glitch. The model performs better at plain text generation, but it is prone to generating banking-related text because of the corpus it was trained on.
### How to use
You can use this model directly with a pipeline for text generation:
```python
>>> from transformers import pipeline
>>> model_name = "Kwaku/gpt2-finetuned-banking77"
>>> generator = pipeline("text-generation", model=model_name)
>>> result = generator("My money is", max_length=15, num_return_sequences=2)
>>> print(result)
[{'generated_text': 'My money is stuck in ATM pending. Please cancel this transaction and refund it'}, {'generated_text': 'My money is missing. How do I get a second card, and how'}]
```
### Limitations and bias
For users who want a diverse text-generator, this model's tendency to generate mostly bank-related text will be a drawback. It also inherits [the biases of its parent model, the GPT2](https://huggingface.co/gpt2#limitations-and-bias).
|
ChuVN/bart-base-finetuned-squad2-finetuned-squad2 | 30819a65fff25c680f27441b5479625fe7720264 | 2022-07-21T14:43:51.000Z | [
"pytorch",
"bart",
"question-answering",
"dataset:squad_v2",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| question-answering | false | ChuVN | null | ChuVN/bart-base-finetuned-squad2-finetuned-squad2 | 7 | null | transformers | 14,672 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: bart-base-finetuned-squad2-finetuned-squad2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-squad2-finetuned-squad2
This model is a fine-tuned version of [ChuVN/bart-base-finetuned-squad2](https://huggingface.co/ChuVN/bart-base-finetuned-squad2) on the squad_v2 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
lqdisme/distilbert-base-uncased-finetuned-squad | ca7bf280a6417dac710d47198c35128e7a395a1b | 2022-07-20T08:03:52.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| question-answering | false | lqdisme | null | lqdisme/distilbert-base-uncased-finetuned-squad | 7 | null | transformers | 14,673 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Isma/test | 2912f913a8c0ddf6a6ac930e501438cd677affba | 2022-07-20T04:41:05.000Z | [
"pytorch",
"wav2vec2",
"feature-extraction",
"transformers"
]
| feature-extraction | false | Isma | null | Isma/test | 7 | null | transformers | 14,674 | Entry not found |
muibk/mirrorbert_mbert_sent_unsup_en_de_ru_10k_mean | de9aad80d99295b0c5e8a180eab4e1e229461ab2 | 2022-07-20T13:40:13.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | false | muibk | null | muibk/mirrorbert_mbert_sent_unsup_en_de_ru_10k_mean | 7 | null | transformers | 14,675 | Entry not found |
finiteautomata/legal-definition-ner | 531084a64a6751712c1cb1fa1cdd64bec6e77d33 | 2022-07-20T14:47:12.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | finiteautomata | null | finiteautomata/legal-definition-ner | 7 | null | transformers | 14,676 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: legal-definition-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# legal-definition-ner
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.3580
- eval_precision: 0.3777
- eval_recall: 0.4355
- eval_macro_f1: 0.1845
- eval_micro_f1: 0.4046
- eval_accuracy: 0.8817
- eval_Alias_Term_f1: 0.125
- eval_Alias_Term_precision: 0.1786
- eval_Alias_Term_recall: 0.0962
- eval_Definition_f1: 0.1631
- eval_Definition_precision: 0.1424
- eval_Definition_recall: 0.1908
- eval_Qualifier_f1: 0.0
- eval_Qualifier_precision: 0.0
- eval_Qualifier_recall: 0.0
- eval_Referential_Definition_f1: 0.0
- eval_Referential_Definition_precision: 0.0
- eval_Referential_Definition_recall: 0.0
- eval_Referential_Term_f1: 0.0
- eval_Referential_Term_precision: 0.0
- eval_Referential_Term_recall: 0.0
- eval_Secondary_Definition_f1: 0.0275
- eval_Secondary_Definition_precision: 0.0343
- eval_Secondary_Definition_recall: 0.0229
- eval_Term_f1: 0.9757
- eval_Term_precision: 0.9567
- eval_Term_recall: 0.9955
- eval_runtime: 33.3159
- eval_samples_per_second: 166.647
- eval_steps_per_second: 10.415
- epoch: 3.45
- step: 1616
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 6
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
haisona3/longformer-base-4096-finetuned-squad2-length-1024-128window | de903aba6a1a33dc172cbda2502a0f5d75406d0a | 2022-07-20T16:34:46.000Z | [
"pytorch",
"tensorboard",
"longformer",
"question-answering",
"dataset:squad_v2",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| question-answering | false | haisona3 | null | haisona3/longformer-base-4096-finetuned-squad2-length-1024-128window | 7 | null | transformers | 14,677 | ---
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: longformer-base-4096-finetuned-squad2-length-1024-128window
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# longformer-base-4096-finetuned-squad2-length-1024-128window
This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the squad_v2 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
ChuVN/longformer-base-4096-finetuned-squad2-length-1024-128window | 2f16686585f2b296c35d7183749229573922ba99 | 2022-07-20T23:21:07.000Z | [
"pytorch",
"tensorboard",
"longformer",
"question-answering",
"dataset:squad_v2",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| question-answering | false | ChuVN | null | ChuVN/longformer-base-4096-finetuned-squad2-length-1024-128window | 7 | null | transformers | 14,678 | ---
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: longformer-base-4096-finetuned-squad2-length-1024-128window
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# longformer-base-4096-finetuned-squad2-length-1024-128window
This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9057
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.8641 | 1.0 | 32580 | 0.9057 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Ahmed007/T5-ibn-Shaddad-v2 | 21ebed26868085a7d181cb56a35b57cde38a2fdb | 2022-07-21T02:22:41.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | Ahmed007 | null | Ahmed007/T5-ibn-Shaddad-v2 | 7 | null | transformers | 14,679 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: T5-ibn-Shaddad-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# T5-ibn-Shaddad-v2
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1159
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1234 | 1.0 | 2493 | 0.1159 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Ahmed007/mt5-small-ibn-Shaddad-v3 | 5248ac233c96e9ffa11989e6acd3722d0c73f5f1 | 2022-07-21T03:47:51.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"Poet",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | Ahmed007 | null | Ahmed007/mt5-small-ibn-Shaddad-v3 | 7 | null | transformers | 14,680 | ---
license: apache-2.0
tags:
- Poet
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-ibn-Shaddad-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-ibn-Shaddad-v3
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2668
- Rouge1: 0.0
- Rouge2: 0.0
- Rougel: 0.0
- Rougelsum: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 5.4157 | 1.0 | 935 | 3.2668 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
farleyknight/distilbert-base-uncased-finetuned-cola | 664896037c348df35853243829cc1922088c14b2 | 2022-07-21T12:38:30.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | farleyknight | null | farleyknight/distilbert-base-uncased-finetuned-cola | 7 | null | transformers | 14,681 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5491920151313351
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8288
- Matthews Correlation: 0.5492
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5294 | 1.0 | 535 | 0.5478 | 0.3803 |
| 0.3519 | 2.0 | 1070 | 0.5429 | 0.4830 |
| 0.2375 | 3.0 | 1605 | 0.5676 | 0.5298 |
| 0.1783 | 4.0 | 2140 | 0.7776 | 0.5338 |
| 0.1294 | 5.0 | 2675 | 0.8288 | 0.5492 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
arize-ai/resnet-50-cifar10-quality-drift | a966167ee856646ed878293729386c429920d96b | 2022-07-21T23:55:46.000Z | [
"pytorch",
"tensorboard",
"resnet",
"image-classification",
"dataset:cifar10_quality_drift",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| image-classification | false | arize-ai | null | arize-ai/resnet-50-cifar10-quality-drift | 7 | null | transformers | 14,682 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cifar10_quality_drift
metrics:
- accuracy
- f1
model-index:
- name: resnet-50-cifar10-quality-drift
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: cifar10_quality_drift
type: cifar10_quality_drift
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.724
- name: F1
type: f1
value: 0.7221970011456912
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-50-cifar10-quality-drift
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the cifar10_quality_drift dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8235
- Accuracy: 0.724
- F1: 0.7222
## Model description
More information needed
## Intended uses & limitations
More information needed
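Pending proper documentation, a minimal inference sketch using the image-classification pipeline; the image path is a placeholder and the label set is assumed to be the ten CIFAR-10 classes.
```python
from transformers import pipeline
from PIL import Image

# Minimal sketch: classify an image with the fine-tuned ResNet-50 checkpoint.
classifier = pipeline(
    "image-classification",
    model="arize-ai/resnet-50-cifar10-quality-drift",
)

image = Image.open("example.png")  # placeholder path; any small RGB image works
for prediction in classifier(image, top_k=3):
    print(prediction["label"], round(prediction["score"], 3))
```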
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.7311 | 1.0 | 750 | 1.1310 | 0.6333 | 0.6300 |
| 1.1728 | 2.0 | 1500 | 0.8495 | 0.7153 | 0.7155 |
| 1.0322 | 3.0 | 2250 | 0.8235 | 0.724 | 0.7222 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
jinwooChoi/SKKU_SA_HJW_0722_2 | db751ee8756cc5fbf98b5efc6ef8baab4d956c3d | 2022-07-22T06:24:47.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | false | jinwooChoi | null | jinwooChoi/SKKU_SA_HJW_0722_2 | 7 | null | transformers | 14,683 | Entry not found |
jinwooChoi/SKKU_SA_HJW_0722 | 1112a9f62e831db6963c3ac4f9773a8b64836a03 | 2022-07-22T07:15:52.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | false | jinwooChoi | null | jinwooChoi/SKKU_SA_HJW_0722 | 7 | null | transformers | 14,684 | Entry not found |
huggingtweets/thenextweb | 53656bf4c311c80b08a85c2a2ed13b2a89b04fe9 | 2022-07-22T10:35:30.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/thenextweb | 7 | null | transformers | 14,685 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1306571874000830464/AZtkNMd-_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">TNW</div>
<div style="text-align: center; font-size: 14px;">@thenextweb</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from TNW.
| Data | TNW |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 39 |
| Short tweets | 44 |
| Tweets kept | 3167 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3egcwo6t/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @thenextweb's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1s2bu9ha) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1s2bu9ha/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/thenextweb')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ameerazam08/autotrain-imdb-1166543171 | f30e49c3c0ee48c115e15a2798e9f3a6daad6559 | 2022-07-22T11:56:54.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:ameerazam08/autotrain-data-imdb",
"transformers",
"autotrain",
"co2_eq_emissions"
]
| text-classification | false | ameerazam08 | null | ameerazam08/autotrain-imdb-1166543171 | 7 | null | transformers | 14,686 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- ameerazam08/autotrain-data-imdb
co2_eq_emissions: 0.07308302140406821
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1166543171
- CO2 Emissions (in grams): 0.07308302140406821
## Validation Metrics
- Loss: 0.2211569994688034
- Accuracy: 0.9138
- Precision: 0.9020598523124758
- Recall: 0.9284
- AUC: 0.9711116000000001
- F1: 0.9150404100137985
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/ameerazam08/autotrain-imdb-1166543171
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ameerazam08/autotrain-imdb-1166543171", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("ameerazam08/autotrain-imdb-1166543171", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
huggingtweets/leadermcconnell | c0af7679cfa7fbb83e71b8edf7f10c4d21dd7fe5 | 2022-07-22T22:07:50.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/leadermcconnell | 7 | null | transformers | 14,687 | ---
language: en
thumbnail: http://www.huggingtweets.com/leadermcconnell/1658527665443/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/732596482336002049/JYMrr9_4_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Leader McConnell</div>
<div style="text-align: center; font-size: 14px;">@leadermcconnell</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Leader McConnell.
| Data | Leader McConnell |
| --- | --- |
| Tweets downloaded | 3245 |
| Retweets | 151 |
| Short tweets | 20 |
| Tweets kept | 3074 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2sz9pqeo/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @leadermcconnell's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/sxm633o0) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/sxm633o0/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/leadermcconnell')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
sanskar/DepressionAnalysis | 94d1632c446bbce88ee4edb001f139a94bc87eb2 | 2022-07-23T19:50:11.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | sanskar | null | sanskar/DepressionAnalysis | 7 | null | transformers | 14,688 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: DepressionAnalysis
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DepressionAnalysis
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4023
- Accuracy: 0.8367
## Model description
More information needed
## Intended uses & limitations
More information needed
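As a rough, unofficial sketch (the label names and the intended input domain are assumptions; check `model.config.id2label` before relying on the output):
```python
from transformers import pipeline

# Minimal sketch: score a piece of text with the fine-tuned DistilBERT classifier.
classifier = pipeline(
    "text-classification",
    model="sanskar/DepressionAnalysis",
)

# The card does not document the label set (e.g. LABEL_0 / LABEL_1), so inspect
# classifier.model.config.id2label to interpret the prediction.
print(classifier("I haven't been able to enjoy anything for weeks."))
```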
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6091 | 1.0 | 151 | 0.5593 | 0.7082 |
| 0.4041 | 2.0 | 302 | 0.4295 | 0.8055 |
| 0.3057 | 3.0 | 453 | 0.4023 | 0.8367 |
| 0.1921 | 4.0 | 604 | 0.4049 | 0.8454 |
| 0.1057 | 5.0 | 755 | 0.4753 | 0.8479 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
huggingtweets/luciengreaves-seanhannity | 50740296f0493681f1876771135d400346daba14 | 2022-07-22T22:49:40.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/luciengreaves-seanhannity | 7 | null | transformers | 14,689 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/666311094256971779/rhb7qkCD_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1402771730582622212/gwApDT26_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Lucien Greaves & Sean Hannity</div>
<div style="text-align: center; font-size: 14px;">@luciengreaves-seanhannity</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Lucien Greaves & Sean Hannity.
| Data | Lucien Greaves | Sean Hannity |
| --- | --- | --- |
| Tweets downloaded | 3197 | 3250 |
| Retweets | 536 | 13 |
| Short tweets | 379 | 60 |
| Tweets kept | 2282 | 3177 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2iwc0kes/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @luciengreaves-seanhannity's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2db4oami) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2db4oami/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/luciengreaves-seanhannity')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Siyong/M | 5e765d08165691176284f1b86bd5958e69a96f16 | 2022-07-23T10:51:07.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | Siyong | null | Siyong/M | 7 | null | transformers | 14,690 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Millad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Millad
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2265
- Wer: 0.5465
- Cer: 0.3162
## Model description
More information needed
## Intended uses & limitations
More information needed
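A minimal, unofficial transcription sketch; the audio path is a placeholder, 16 kHz speech input is assumed (the wav2vec2-base default), and decoding local files requires `ffmpeg` on the system.
```python
from transformers import pipeline

# Minimal sketch: transcribe a short speech recording with the fine-tuned model.
asr = pipeline(
    "automatic-speech-recognition",
    model="Siyong/M",
)

# The pipeline decodes and resamples the file with ffmpeg before inference.
print(asr("sample.wav")["text"])  # placeholder path
```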
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 4000
- num_epochs: 750
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:------:|:-----:|:---------------:|:------:|:------:|
| 3.2911 | 33.9 | 2000 | 2.2097 | 0.9963 | 0.6047 |
| 1.3419 | 67.8 | 4000 | 1.9042 | 0.7007 | 0.3565 |
| 0.6542 | 101.69 | 6000 | 1.7195 | 0.5985 | 0.3194 |
| 0.373 | 135.59 | 8000 | 2.2219 | 0.6078 | 0.3241 |
| 0.2805 | 169.49 | 10000 | 2.3114 | 0.6320 | 0.3304 |
| 0.2014 | 203.39 | 12000 | 2.6898 | 0.6338 | 0.3597 |
| 0.1611 | 237.29 | 14000 | 2.7808 | 0.6041 | 0.3379 |
| 0.1265 | 271.19 | 16000 | 2.8304 | 0.5632 | 0.3289 |
| 0.1082 | 305.08 | 18000 | 2.8373 | 0.5874 | 0.3344 |
| 0.103 | 338.98 | 20000 | 2.8580 | 0.5743 | 0.3292 |
| 0.0854 | 372.88 | 22000 | 2.5413 | 0.5539 | 0.3186 |
| 0.0675 | 406.78 | 24000 | 2.5523 | 0.5502 | 0.3229 |
| 0.0531 | 440.68 | 26000 | 2.9369 | 0.5483 | 0.3142 |
| 0.0504 | 474.58 | 28000 | 3.1416 | 0.5595 | 0.3225 |
| 0.0388 | 508.47 | 30000 | 2.5655 | 0.5390 | 0.3111 |
| 0.0396 | 542.37 | 32000 | 3.1923 | 0.5558 | 0.3178 |
| 0.0274 | 576.27 | 34000 | 2.9235 | 0.5520 | 0.3257 |
| 0.0361 | 610.17 | 36000 | 3.3828 | 0.5762 | 0.3312 |
| 0.02 | 644.07 | 38000 | 3.3822 | 0.5874 | 0.3466 |
| 0.0176 | 677.97 | 40000 | 3.1191 | 0.5539 | 0.3209 |
| 0.0181 | 711.86 | 42000 | 3.2022 | 0.5576 | 0.3237 |
| 0.0124 | 745.76 | 44000 | 3.2265 | 0.5465 | 0.3162 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
PanNorek/roberta-base | a1d5f8de68318311c3ee14fce16c635d6bc00c6f | 2022-07-23T20:22:28.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | PanNorek | null | PanNorek/roberta-base | 7 | null | transformers | 14,691 | Entry not found |
circulus/kobart-trans-chungcheong-v1 | f677a023bf94c1fe905e9c4694940602ec5c21f0 | 2022-07-25T06:47:00.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | circulus | null | circulus/kobart-trans-chungcheong-v1 | 7 | null | transformers | 14,692 | KoBART-based Chungcheong dialect style transfer
- Trained on the AI-HUB Chungcheong dialect dataset.
- Usage instructions will be uploaded soon. |
Splend1dchan/t5-large-squad | 777f52fcbf9c21c6cefef3ed86509f08c4d25e76 | 2022-07-25T03:29:46.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Splend1dchan | null | Splend1dchan/t5-large-squad | 7 | null | transformers | 14,693 | Entry not found |
SebOchs/xtremedistil-l6-h256-uncased-squad | 8867a9073c69f098c064ec7172ec4d563817ea89 | 2022-07-25T06:33:58.000Z | [
"pytorch",
"bert",
"question-answering",
"en",
"dataset:SQuAD",
"transformers",
"license:mit",
"autotrain_compatible"
]
| question-answering | false | SebOchs | null | SebOchs/xtremedistil-l6-h256-uncased-squad | 7 | null | transformers | 14,694 | ---
language:
- en
tags:
- question-answering
license: mit
datasets:
- SQuAD
metrics:
- EM
- F1
---
# Test model for DL4NLP 2022 HW06
xtremedistil-l6-h256-uncased trained on SQuAD
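A minimal usage sketch (not part of the original homework write-up); the question/context pair below is illustrative only.
```python
from transformers import pipeline

# Minimal sketch: extractive question answering with the SQuAD-tuned checkpoint.
qa = pipeline(
    "question-answering",
    model="SebOchs/xtremedistil-l6-h256-uncased-squad",
)

result = qa(
    question="Which dataset was the model trained on?",
    context="xtremedistil-l6-h256-uncased was fine-tuned on SQuAD "
            "as part of the DL4NLP 2022 course.",
)
print(result["answer"], round(result["score"], 3))
```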
## Hyperparameters
- learning rate: 1e-5
- weight decay: 0.01
- warm up steps: 0
- learning rate scheduler: linear
- epochs: 1
## Metric results on the dev set
- F1: 65.48
- EM: 51.67 |
rvignav/clip-vit-base-patch32-demo | 50b82a6e0270c3db1d50d05a9a1575292861bc72 | 2022-07-27T19:50:36.000Z | [
"pytorch",
"clip",
"feature-extraction",
"transformers"
]
| feature-extraction | false | rvignav | null | rvignav/clip-vit-base-patch32-demo | 7 | null | transformers | 14,695 | Entry not found |
ben-yu/autotrain-MS2-1173943517 | 96be29757af94c4f062799445d7c028bc67c5ec4 | 2022-07-25T01:31:42.000Z | [
"pytorch",
"led",
"text2text-generation",
"unk",
"dataset:ben-yu/autotrain-data-MS2",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
]
| text2text-generation | false | ben-yu | null | ben-yu/autotrain-MS2-1173943517 | 7 | null | transformers | 14,696 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- ben-yu/autotrain-data-MS2
co2_eq_emissions: 0.687008092853648
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 1173943517
- CO2 Emissions (in grams): 0.687008092853648
## Validation Metrics
- Loss: 2.806302070617676
- Rouge1: 0.0342
- Rouge2: 0.006
- RougeL: 0.0242
- RougeLsum: 0.0283
- Gen Len: 19.9989
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/ben-yu/autotrain-MS2-1173943517
``` |
clevrly/xlnet-base-mnli-finetuned | 39d7e31dc2db8c31fef18f9a9de959eea7f1e693 | 2022-07-25T16:25:12.000Z | [
"pytorch",
"tensorboard",
"xlnet",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | clevrly | null | clevrly/xlnet-base-mnli-finetuned | 7 | null | transformers | 14,697 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: xlnet-base-mnli-finetuned
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.9118695873662761
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-mnli-finetuned
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3456
- Accuracy: 0.9119
## Model description
More information needed
## Intended uses & limitations
More information needed
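In lieu of documented usage, a sketch of scoring a premise/hypothesis pair directly with the model; the example pair is illustrative, and the class names depend on what was stored in `model.config.id2label` (MNLI uses entailment / neutral / contradiction).
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Minimal sketch: natural language inference with the MNLI-tuned XLNet checkpoint.
model_id = "clevrly/xlnet-base-mnli-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Three MNLI classes; the index-to-name mapping lives in model.config.id2label.
probs = logits.softmax(dim=-1)[0]
for idx, prob in enumerate(probs.tolist()):
    print(model.config.id2label[idx], round(prob, 3))
```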
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.336 | 1.0 | 49087 | 0.3299 | 0.9010 |
| 0.2582 | 2.0 | 98174 | 0.3456 | 0.9119 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Maxaontrix/bert-base-NER-reptile-5-datasets-finetuned-ner | c82e42d3852cddef294a14cb930a9d11a08cd07d | 2022-07-26T07:23:54.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:skript",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| token-classification | false | Maxaontrix | null | Maxaontrix/bert-base-NER-reptile-5-datasets-finetuned-ner | 7 | null | transformers | 14,698 | ---
tags:
- generated_from_trainer
datasets:
- skript
model-index:
- name: bert-base-NER-reptile-5-datasets-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-NER-reptile-5-datasets-finetuned-ner
This model is a fine-tuned version of [sberbank-ai/bert-base-NER-reptile-5-datasets](https://huggingface.co/sberbank-ai/bert-base-NER-reptile-5-datasets) on the skript dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
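A minimal inference sketch; the entity label set comes from the `skript` dataset and is not documented here, so the example sentence and the printed `entity_group` values are assumptions.
```python
from transformers import pipeline

# Minimal sketch: run the fine-tuned checkpoint as a token-classification pipeline.
ner = pipeline(
    "token-classification",
    model="Maxaontrix/bert-base-NER-reptile-5-datasets-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

for entity in ner("Angela Merkel visited the Siemens plant in Munich."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```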
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 298 | 0.4198 | 0.6385 | 0.5297 | 0.5790 | 0.8699 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
peter2000/bmz_topics | 7c41ce743108ec1f81c68ba545e54c82bd8a9761 | 2022-07-25T12:03:16.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
]
| sentence-similarity | false | peter2000 | null | peter2000/bmz_topics | 7 | null | sentence-transformers | 14,699 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# peter2000/bmz_topics
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('peter2000/bmz_topics')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('peter2000/bmz_topics')
model = AutoModel.from_pretrained('peter2000/bmz_topics')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=peter2000/bmz_topics)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 76 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.dataloader._InfiniteConstantSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.BatchHardTripletLoss.BatchHardTripletLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 1520,
"warmup_steps": 152,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |