modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
sgugger/bert-sharded | e44c936c3858495e1fe46ab1aed01d8a4a15114c | 2022-03-22T17:42:29.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | false | sgugger | null | sgugger/bert-sharded | 11 | null | transformers | 11,200 | Entry not found |
Yaxin/xlm-roberta-base-yelp-mlm | f44a8c6c6edf028c2a603bf0f7bbb7653f3ac09d | 2022-03-24T04:44:37.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"dataset:yelp_review_full",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| fill-mask | false | Yaxin | null | Yaxin/xlm-roberta-base-yelp-mlm | 11 | null | transformers | 11,201 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- yelp_review_full
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-yelp-mlm
results:
- task:
name: Masked Language Modeling
type: fill-mask
dataset:
name: yelp_review_full yelp_review_full
type: yelp_review_full
args: yelp_review_full
metrics:
- name: Accuracy
type: accuracy
value: 0.7356223359340127
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-yelp-mlm
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the yelp_review_full dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1743
- Accuracy: 0.7356
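As a quick sanity check, the model can be queried with the standard `fill-mask` pipeline. This is a minimal sketch: the example review sentence is made up, and XLM-RoBERTa uses `<mask>` as its mask token.
```python
from transformers import pipeline

# Masked-language-model pipeline for the fine-tuned checkpoint.
fill = pipeline("fill-mask", model="Yaxin/xlm-roberta-base-yelp-mlm")

# Top predictions for the masked token in a Yelp-style review sentence.
for pred in fill("The food was absolutely <mask>!"):
    print(pred["token_str"], round(pred["score"], 3))
```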
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Thant123/distilbert-base-uncased-finetuned-emotion | 9c469ce5d1a3b99cdc73e52702b52af2d2cb9ee1 | 2022-03-24T12:17:39.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Thant123 | null | Thant123/distilbert-base-uncased-finetuned-emotion | 11 | null | transformers | 11,202 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.9241019999324234
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2270
- Accuracy: 0.924
- F1: 0.9241
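A minimal usage sketch with the `text-classification` pipeline; the example sentence is made up, and depending on how the config was saved the emotion labels may be reported as generic `LABEL_i` ids rather than names.
```python
from transformers import pipeline

# Emotion classifier fine-tuned from distilbert-base-uncased.
classifier = pipeline(
    "text-classification",
    model="Thant123/distilbert-base-uncased-finetuned-emotion",
)

print(classifier("I can't believe how well this turned out!"))
```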
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8204 | 1.0 | 250 | 0.3160 | 0.9035 | 0.9008 |
| 0.253 | 2.0 | 500 | 0.2270 | 0.924 | 0.9241 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
celine98/canine-c-finetuned-sst2 | e2ed997246d3612291f2ed1e6de408829cfe9284 | 2022-04-02T19:11:13.000Z | [
"pytorch",
"tensorboard",
"canine",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | celine98 | null | celine98/canine-c-finetuned-sst2 | 11 | null | transformers | 11,203 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: canine-c-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.8486238532110092
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# canine-c-finetuned-sst2
This model is a fine-tuned version of [google/canine-c](https://huggingface.co/google/canine-c) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6025
- Accuracy: 0.8486
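A minimal sketch of querying the model with the `text-classification` pipeline; the example sentence is made up, and the SST-2 labels may appear as generic `LABEL_0`/`LABEL_1` depending on the saved config.
```python
from transformers import pipeline

# CANINE is a tokenization-free model that operates directly on Unicode characters,
# so the usual pipeline call works without a subword vocabulary.
sst2 = pipeline("text-classification", model="celine98/canine-c-finetuned-sst2")

print(sst2("a gripping, beautifully shot film"))
```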
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.9121586874695155e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3415 | 1.0 | 2105 | 0.4196 | 0.8280 |
| 0.2265 | 2.0 | 4210 | 0.4924 | 0.8211 |
| 0.1439 | 3.0 | 6315 | 0.5726 | 0.8337 |
| 0.0974 | 4.0 | 8420 | 0.6025 | 0.8486 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
cdinh2022/distilbert-base-uncased-finetuned-emotion | 961aae04da6d585b842ddf49e1cba25faab11baa | 2022-03-24T21:44:36.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | cdinh2022 | null | cdinh2022/distilbert-base-uncased-finetuned-emotion | 11 | null | transformers | 11,204 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 0.1 | 25 | 1.4889 | 0.5195 | 0.3976 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
patrickvonplaten/deberta_amazon_reviews_v1 | 06f020e0dbf909570eb886423bd3af256b855546 | 2022-03-25T17:57:32.000Z | [
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | patrickvonplaten | null | patrickvonplaten/deberta_amazon_reviews_v1 | 11 | null | transformers | 11,205 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: deberta_amazon_reviews_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta_amazon_reviews_v1
This model is a fine-tuned version of [patrickvonplaten/deberta_v3_amazon_reviews](https://huggingface.co/patrickvonplaten/deberta_v3_amazon_reviews) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 2
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
hackathon-pln-es/electricidad-small-discriminator-finetuned-clasificacion-comentarios-suicidas | dadb12960f1cc549fe296e86c8362abd3b424451 | 2022-04-01T01:49:15.000Z | [
"pytorch",
"electra",
"text-classification",
"es",
"transformers",
"generated_from_trainer",
"sentiment",
"emotion",
"suicide",
"depresión",
"suicidio",
"español",
"spanish",
"depression",
"license:apache-2.0",
"model-index"
]
| text-classification | false | hackathon-pln-es | null | hackathon-pln-es/electricidad-small-discriminator-finetuned-clasificacion-comentarios-suicidas | 11 | 9 | transformers | 11,206 | ---
license: apache-2.0
language: "es"
tags:
- generated_from_trainer
- sentiment
- emotion
- suicide
- depresión
- suicidio
- español
- es
- spanish
- depression
widget:
- text: "La vida no merece la pena"
example_title: "Ejemplo 1"
- text: "Para vivir así lo mejor es estar muerto"
example_title: "Ejemplo 2"
- text: "me siento triste por no poder viajar"
example_title: "Ejemplo 3"
- text: "Quiero terminar con todo"
example_title: "Ejemplo 4"
- text: "Disfruto de la vista"
example_title: "Ejemplo 5"
metrics:
- accuracy
model-index:
- name: electricidad-small-discriminator-finetuned-clasificacion-comentarios-suicidas
results: []
---
# electricidad-small-discriminator-finetuned-clasificacion-comentarios-suicidas
This model is based on an improved version of [mrm8488/electricidad-small-discriminator](https://huggingface.co/mrm8488/electricidad-small-discriminator) and was trained using the [hackathon-pln-es/comentarios_depresivos](https://huggingface.co/datasets/hackathon-pln-es/comentarios_depresivos) dataset.
It obtains the following results on the evaluation set:
- Loss: 0.0458
- Accuracy: 0.9916
## Authors
- Danny Vásquez
- César Salazar
- Alexis Cañar
- Yannela Castro
- Daniel Patiño
## Model description
electricidad-small-discriminator-finetuned-clasificacion-comentarios-suicidas is a Transformers model trained on a large corpus of Reddit comments translated into Spanish, with the goal of predicting whether a comment shows a suicidal tendency based on its context. It receives as INPUT the text to be checked and returns as its single OUTPUT one of two possible options: "Suicida" (suicidal) or "No Suicida" (not suicidal).
## Motivation
The main motivation is for the model to be used in future projects that aim to detect cases of depression early through natural language processing, in order to help prevent suicide among children, young people and adults.
## How to use
The model can be used directly through the `pipeline` utility of the transformers library:
```python
>>> from transformers import pipeline
>>> model_name = 'hackathon-pln-es/electricidad-small-discriminator-finetuned-clasificacion-comentarios-suicidas'
>>> cls = pipeline("text-classification", model=model_name)
>>> cls("Estoy feliz")[0]['label']
'No Suicida'
>>> cls("Quiero acabar con todo")[0]['label']
'Suicida'
```
## Training procedure
### Training data
As stated above, the model was trained on the [comentarios_depresivos](https://huggingface.co/datasets/hackathon-pln-es/comentarios_depresivos) dataset, which contains 192,347 rows for training, 33,944 for testing and 22,630 for validation.
### Training hyperparameters
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Validation Loss | Accuracy |
|:-------------:|:-----:|:---------------:|:--------:|
| 0.161100 | 1.0 | 0.133057 | 0.952718 |
| 0.134500 | 2.0 | 0.110966 | 0.960804 |
| 0.108500 | 3.0 | 0.086417 | 0.970835 |
| 0.099400 | 4.0 | 0.073618 | 0.974856 |
| 0.090500 | 5.0 | 0.065231 | 0.979629 |
| 0.080700 | 6.0 | 0.060849 | 0.982324 |
| 0.069200 | 7.0 | 0.054718 | 0.986125 |
| 0.060400 | 8.0 | 0.051153 | 0.985948 |
| 0.048200 | 9.0 | 0.045747 | 0.989748 |
| 0.045500 | 10.0 | 0.049992 | 0.988069 |
| 0.043400 | 11.0 | 0.046325 | 0.990234 |
| 0.034300 | 12.0 | 0.050746 | 0.989792 |
| 0.032900 | 13.0 | 0.043434 | 0.991737 |
| 0.028400 | 14.0 | 0.045003 | 0.991869 |
| 0.022300 | 15.0 | 0.045819 | 0.991648 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
## BibTeX citation
```bibtex
@article{ccs_2022,
author = {Danny Vásquez and
César Salazar and
Alexis Cañar and
Yannela Castro and
Daniel Patiño},
title = {Modelo Electricidad-small-discriminator-finetuned-clasificacion-comentarios-suicidas},
journal = {Huggingface},
year = {2022},
}
```
<h3>View it on Gradio:</h3>
<a href="https://huggingface.co/spaces/hackathon-pln-es/clasificador-comentarios-suicidas">
<img width="300px" src="https://hf.space/embed/hackathon-pln-es/clasificador-comentarios-suicidas/static/img/logo.svg">
</a>
---
|
imyday/distilbert-base-uncased-finetuned-emotion | c43f9643670600cb4578e3f1f440895cc69a4f39 | 2022-03-27T06:59:25.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | imyday | null | imyday/distilbert-base-uncased-finetuned-emotion | 11 | null | transformers | 11,207 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.9233039604362318
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2282
- Accuracy: 0.923
- F1: 0.9233
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8344 | 1.0 | 250 | 0.3317 | 0.8995 | 0.8953 |
| 0.2606 | 2.0 | 500 | 0.2282 | 0.923 | 0.9233 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
alefiury/wav2vec2-large-xlsr-53-coraa-brazilian-portuguese-gain-normalization | 0d215267ace7b5d297b512e59918cae803780f3e | 2022-04-05T16:58:36.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:CORAA",
"dataset:common_voice",
"dataset:mls",
"dataset:cetuc",
"dataset:voxforge",
"transformers",
"audio",
"speech",
"portuguese-speech-corpus",
"PyTorch",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | alefiury | null | alefiury/wav2vec2-large-xlsr-53-coraa-brazilian-portuguese-gain-normalization | 11 | null | transformers | 11,208 | ---
language: pt
datasets:
- CORAA
- common_voice
- mls
- cetuc
- voxforge
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- portuguese-speech-corpus
- automatic-speech-recognition
- speech
- PyTorch
license: apache-2.0
model-index:
- name: Alef Iury XLSR Wav2Vec2 Large 53 Portuguese
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
metrics:
- name: Test CORAA WER
type: wer
value: 24.89%
---
# Wav2vec 2.0 trained with CORAA Portuguese Dataset and Open Portuguese Datasets
This is a demonstration of a Wav2vec 2.0 model fine-tuned for Portuguese on the following datasets (a minimal usage sketch follows the list):
- [CORAA dataset](https://github.com/nilc-nlp/CORAA)
- [CETUC](http://www02.smt.ufrj.br/~igor.quintanilha/alcaim.tar.gz).
- [Multilingual Librispeech (MLS)](http://www.openslr.org/94/).
- [VoxForge](http://www.voxforge.org/).
- [Common Voice 6.1](https://commonvoice.mozilla.org/pt).
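The model can be queried with the `automatic-speech-recognition` pipeline; this is a minimal sketch in which the audio file path is a placeholder (the checkpoint expects 16 kHz mono speech).
```python
from transformers import pipeline

# Wav2vec 2.0 CTC model fine-tuned for Brazilian Portuguese.
asr = pipeline(
    "automatic-speech-recognition",
    model="alefiury/wav2vec2-large-xlsr-53-coraa-brazilian-portuguese-gain-normalization",
)

# "sample_pt.wav" is a hypothetical local file containing 16 kHz Portuguese speech.
print(asr("sample_pt.wav")["text"])
```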
## Repository
The repository that implements the model training and testing is available [here](https://github.com/alefiury/SE-R_2022_Challenge_Wav2vec2). |
shrishail/t5_paraphrase_msrp_paws | 7b81cedc3e75d51475603b2bf35c3511ccb97513 | 2022-03-30T05:47:27.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"transformers",
"paraphrase-generation",
"text-generation",
"Conditional Generation",
"autotrain_compatible"
]
| text-generation | false | shrishail | null | shrishail/t5_paraphrase_msrp_paws | 11 | null | transformers | 11,209 | ---
language: "en"
tags:
- paraphrase-generation
- text-generation
- Conditional Generation
inference: false
---
# Simple model for Paraphrase Generation
## Model description
T5-based model for generating paraphrased sentences. It is trained on the labeled [MSRP](https://www.microsoft.com/en-us/download/details.aspx?id=52398) and [Google PAWS](https://github.com/google-research-datasets/paws) datasets.
## How to use
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch

tokenizer = AutoTokenizer.from_pretrained("shrishail/t5_paraphrase_msrp_paws")
model = AutoModelForSeq2SeqLM.from_pretrained("shrishail/t5_paraphrase_msrp_paws")

# Use a GPU when available; the original snippet moved only the inputs to "cuda",
# which fails if the model stays on the CPU (or if no GPU is present).
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

sentence = "This is something which i cannot understand at all"
text = "paraphrase: " + sentence + " </s>"

encoding = tokenizer(text, return_tensors="pt")
input_ids = encoding["input_ids"].to(device)
attention_masks = encoding["attention_mask"].to(device)

# Sample several candidate paraphrases.
outputs = model.generate(
    input_ids=input_ids,
    attention_mask=attention_masks,
    max_length=256,
    do_sample=True,
    top_k=120,
    top_p=0.95,
    num_return_sequences=5,
)

for output in outputs:
    line = tokenizer.decode(output, skip_special_tokens=True, clean_up_tokenization_spaces=True)
    print(line)
```
|
hackathon-pln-es/electricidad-base-generator-fake-news | b4738d0660cd0e74e9e8a151ef236d9be6c16fc6 | 2022-04-04T04:04:01.000Z | [
"pytorch",
"tensorboard",
"electra",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | hackathon-pln-es | null | hackathon-pln-es/electricidad-base-generator-fake-news | 11 | null | transformers | 11,210 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: electricidad-base-generator-fake-news
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electricidad-base-generator-fake-news
This model is a fine-tuned version of [mrm8488/electricidad-base-generator](https://huggingface.co/mrm8488/electricidad-base-generator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0067
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1136 | 1.0 | 180 | 0.0852 | 1.0 |
| 0.0267 | 2.0 | 360 | 0.0219 | 1.0 |
| 0.0132 | 3.0 | 540 | 0.0108 | 1.0 |
| 0.0091 | 4.0 | 720 | 0.0075 | 1.0 |
| 0.0077 | 5.0 | 900 | 0.0067 | 1.0 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
binay1999/bert-finetuned-ner | 6f270a7bcbeb21c78eedabb5083f134c9b37d3fc | 2022-03-31T05:10:40.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | binay1999 | null | binay1999/bert-finetuned-ner | 11 | null | transformers | 11,211 | Entry not found |
thaind/layoutlmv2-jaen-gemai | d3cd287c939bd67be0216d13ca10a4e074f85ca9 | 2022-03-31T08:13:42.000Z | [
"pytorch",
"layoutlmv2",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | thaind | null | thaind/layoutlmv2-jaen-gemai | 11 | null | transformers | 11,212 | This is model fine tune from layoutlmv2 model for japanese and english language
|
abdusahmbzuai/aradia-ctc-data2vec-ft | c26cc0ccb23b0cd313550f64d1072703af0e75ed | 2022-04-01T08:19:29.000Z | [
"pytorch",
"data2vec-audio",
"automatic-speech-recognition",
"transformers",
"abdusahmbzuai/arabic_speech_massive_300hrs",
"generated_from_trainer",
"model-index"
]
| automatic-speech-recognition | false | abdusahmbzuai | null | abdusahmbzuai/aradia-ctc-data2vec-ft | 11 | null | transformers | 11,213 | ---
tags:
- automatic-speech-recognition
- abdusahmbzuai/arabic_speech_massive_300hrs
- generated_from_trainer
model-index:
- name: aradia-ctc-data2vec-ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aradia-ctc-data2vec-ft
This model is a fine-tuned version of [/l/users/abdulwahab.sahyoun/aradia/aradia-ctc-data2vec-ft](https://huggingface.co//l/users/abdulwahab.sahyoun/aradia/aradia-ctc-data2vec-ft) on the ABDUSAHMBZUAI/ARABIC_SPEECH_MASSIVE_300HRS - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0464
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| No log | 0.43 | 100 | 3.3600 | 1.0 |
| No log | 0.87 | 200 | 3.0887 | 1.0 |
| No log | 1.3 | 300 | 3.0779 | 1.0 |
| No log | 1.74 | 400 | 3.0551 | 1.0 |
| 4.8553 | 2.17 | 500 | 3.0526 | 1.0 |
| 4.8553 | 2.61 | 600 | 3.0560 | 1.0 |
| 4.8553 | 3.04 | 700 | 3.1251 | 1.0 |
| 4.8553 | 3.48 | 800 | 3.0870 | 1.0 |
| 4.8553 | 3.91 | 900 | 3.0822 | 1.0 |
| 3.1133 | 4.35 | 1000 | 3.0484 | 1.0 |
| 3.1133 | 4.78 | 1100 | 3.0558 | 1.0 |
| 3.1133 | 5.22 | 1200 | 3.1019 | 1.0 |
| 3.1133 | 5.65 | 1300 | 3.0914 | 1.0 |
| 3.1133 | 6.09 | 1400 | 3.0691 | 1.0 |
| 3.109 | 6.52 | 1500 | 3.0589 | 1.0 |
| 3.109 | 6.95 | 1600 | 3.0508 | 1.0 |
| 3.109 | 7.39 | 1700 | 3.0540 | 1.0 |
| 3.109 | 7.82 | 1800 | 3.0546 | 1.0 |
| 3.109 | 8.26 | 1900 | 3.0524 | 1.0 |
| 3.1106 | 8.69 | 2000 | 3.0569 | 1.0 |
| 3.1106 | 9.13 | 2100 | 3.0622 | 1.0 |
| 3.1106 | 9.56 | 2200 | 3.0518 | 1.0 |
| 3.1106 | 10.0 | 2300 | 3.0749 | 1.0 |
| 3.1106 | 10.43 | 2400 | 3.0698 | 1.0 |
| 3.1058 | 10.87 | 2500 | 3.0665 | 1.0 |
| 3.1058 | 11.3 | 2600 | 3.0555 | 1.0 |
| 3.1058 | 11.74 | 2700 | 3.0589 | 1.0 |
| 3.1058 | 12.17 | 2800 | 3.0611 | 1.0 |
| 3.1058 | 12.61 | 2900 | 3.0561 | 1.0 |
| 3.1071 | 13.04 | 3000 | 3.0480 | 1.0 |
| 3.1071 | 13.48 | 3100 | 3.0492 | 1.0 |
| 3.1071 | 13.91 | 3200 | 3.0574 | 1.0 |
| 3.1071 | 14.35 | 3300 | 3.0538 | 1.0 |
| 3.1071 | 14.78 | 3400 | 3.0505 | 1.0 |
| 3.1061 | 15.22 | 3500 | 3.0600 | 1.0 |
| 3.1061 | 15.65 | 3600 | 3.0596 | 1.0 |
| 3.1061 | 16.09 | 3700 | 3.0623 | 1.0 |
| 3.1061 | 16.52 | 3800 | 3.0800 | 1.0 |
| 3.1061 | 16.95 | 3900 | 3.0583 | 1.0 |
| 3.1036 | 17.39 | 4000 | 3.0534 | 1.0 |
| 3.1036 | 17.82 | 4100 | 3.0563 | 1.0 |
| 3.1036 | 18.26 | 4200 | 3.0481 | 1.0 |
| 3.1036 | 18.69 | 4300 | 3.0477 | 1.0 |
| 3.1036 | 19.13 | 4400 | 3.0505 | 1.0 |
| 3.1086 | 19.56 | 4500 | 3.0485 | 1.0 |
| 3.1086 | 20.0 | 4600 | 3.0481 | 1.0 |
| 3.1086 | 20.43 | 4700 | 3.0615 | 1.0 |
| 3.1086 | 20.87 | 4800 | 3.0658 | 1.0 |
| 3.1086 | 21.3 | 4900 | 3.0505 | 1.0 |
| 3.1028 | 21.74 | 5000 | 3.0492 | 1.0 |
| 3.1028 | 22.17 | 5100 | 3.0485 | 1.0 |
| 3.1028 | 22.61 | 5200 | 3.0483 | 1.0 |
| 3.1028 | 23.04 | 5300 | 3.0479 | 1.0 |
| 3.1028 | 23.48 | 5400 | 3.0509 | 1.0 |
| 3.1087 | 23.91 | 5500 | 3.0530 | 1.0 |
| 3.1087 | 24.35 | 5600 | 3.0486 | 1.0 |
| 3.1087 | 24.78 | 5700 | 3.0514 | 1.0 |
| 3.1087 | 25.22 | 5800 | 3.0505 | 1.0 |
| 3.1087 | 25.65 | 5900 | 3.0508 | 1.0 |
| 3.1043 | 26.09 | 6000 | 3.0501 | 1.0 |
| 3.1043 | 26.52 | 6100 | 3.0467 | 1.0 |
| 3.1043 | 26.95 | 6200 | 3.0466 | 1.0 |
| 3.1043 | 27.39 | 6300 | 3.0465 | 1.0 |
| 3.1043 | 27.82 | 6400 | 3.0465 | 1.0 |
| 3.1175 | 28.26 | 6500 | 3.0466 | 1.0 |
| 3.1175 | 28.69 | 6600 | 3.0466 | 1.0 |
| 3.1175 | 29.13 | 6700 | 3.0465 | 1.0 |
| 3.1175 | 29.56 | 6800 | 3.0465 | 1.0 |
| 3.1175 | 30.0 | 6900 | 3.0464 | 1.0 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
|
blckwdw61/sysformbatches2acs | 3f0dffcc1bc157fd7e5b02c51f34cdb023ddcead | 2022-04-01T02:17:19.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | blckwdw61 | null | blckwdw61/sysformbatches2acs | 11 | null | transformers | 11,214 | # Figured out labels |
antonio-artur/distilbert-base-uncased-finetuned-emotion | f7a336b540cc2b7d182c2c1cbb851716e5507de8 | 2022-04-02T14:26:59.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | antonio-artur | null | antonio-artur/distilbert-base-uncased-finetuned-emotion | 11 | null | transformers | 11,215 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.926
- name: F1
type: f1
value: 0.9260113300845928
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2280
- Accuracy: 0.926
- F1: 0.9260
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8646 | 1.0 | 250 | 0.3326 | 0.9045 | 0.9009 |
| 0.2663 | 2.0 | 500 | 0.2280 | 0.926 | 0.9260 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
lgris/bp_400_xlsr2_1B | db451d136eb3387f2b69d2afb61db73371e8a955 | 2022-04-01T23:52:55.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
]
| automatic-speech-recognition | false | lgris | null | lgris/bp_400_xlsr2_1B | 11 | null | transformers | 11,216 | Entry not found |
Sam4669/distilbert-base-uncased-finetuned-emotion | f9d23f924d402bc92f1082bc8cd93953870b628b | 2022-04-02T13:16:26.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Sam4669 | null | Sam4669/distilbert-base-uncased-finetuned-emotion | 11 | null | transformers | 11,217 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.9232158277556175
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2317
- Accuracy: 0.923
- F1: 0.9232
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8528 | 1.0 | 250 | 0.3332 | 0.897 | 0.8929 |
| 0.26 | 2.0 | 500 | 0.2317 | 0.923 | 0.9232 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Giyaseddin/distilbert-base-cased-finetuned-fake-and-real-news-dataset | bfef5f3b5eff38f01e4bdd3b1b1427401dae190b | 2022-04-03T16:39:39.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:Fake and real news dataset",
"transformers",
"license:gpl-3.0"
]
| text-classification | false | Giyaseddin | null | Giyaseddin/distilbert-base-cased-finetuned-fake-and-real-news-dataset | 11 | null | transformers | 11,218 | ---
license: gpl-3.0
language: en
library: transformers
other: distilbert
datasets:
- Fake and real news dataset
---
# DistilBERT base cased model for Fake News Classification
## Model description
DistilBERT is a transformers model, smaller and faster than BERT, which was pretrained on the same corpus in a
self-supervised fashion, using the BERT base model as a teacher. This means it was pretrained on the raw texts only,
with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic
process to generate inputs and labels from those texts using the BERT base model.
This is a Fake News classification model fine-tuned from the [pretrained DistilBERT model](https://huggingface.co/distilbert-base-cased) on the
[Fake and real news dataset](https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset)
## Intended uses & limitations
This model should only be used for news similar to those in the dataset;
please visit the [dataset's kaggle page](https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset) to see the data.
### How to use
You can use this model directly with a text-classification pipeline:
```python
>>> from transformers import pipeline
>>> classifier = pipeline("text-classification", model="Giyaseddin/distilbert-base-cased-finetuned-fake-and-real-news-dataset", return_all_scores=True)
>>> examples = ["Yesterday, Speaker Paul Ryan tweeted a video of himself on the Mexican border flying in a helicopter and traveling on horseback with US border agents. RT if you agree It is time for The Wall. pic.twitter.com/s5MO8SG7SL Paul Ryan (@SpeakerRyan) August 1, 2017It makes for great theater to see Republican Speaker Ryan pleading the case for a border wall, but how sincere are the GOP about building the border wall? Even after posting a video that appears to show Ryan s support for the wall, he still seems unsure of himself. It s almost as though he s testing the political winds when he asks Twitter users to retweet if they agree that we need to start building the wall. How committed is the (formerly?) anti-Trump Paul Ryan to building the border wall that would fulfill one of President Trump s most popular campaign promises to the American people? Does he have the what it takes to defy the wishes of corporate donors and the US Chamber of Commerce, and do the right thing for the national security and well-being of our nation?The Last Refuge- Republicans are in control of the House of Representatives, Republicans are in control of the Senate, a Republican President is in the White House, and somehow there s negotiations on how to fund the #1 campaign promise of President Donald Trump, the border wall.Here s the rub.Here s what pundits never discuss.The Republican party doesn t need a single Democrat to fund the border wall.A single spending bill could come from the House of Representatives that fully funds 100% of the border wall. The spending bill then goes to the senate, where again, it doesn t need a single Democrat vote because spending legislation is specifically what reconciliation was designed to facilitate. That House bill can pass the Senate with 51 votes and proceed directly to the President s desk for signature.So, ask yourself: why is this even a point of discussion?The honest answer, for those who are no longer suffering from Battered Conservative Syndrome, is that Republicans don t want to fund or build an actual physical barrier known as the Southern Border Wall.It really is that simple.If one didn t know better, they d almost think Speaker Ryan was attempting to emulate the man he clearly despised during the 2016 presidential campaign."]
>>> classifier(examples)
[[{'label': 'LABEL_0', 'score': 1.0},
{'label': 'LABEL_1', 'score': 1.0119109106199176e-08}]]
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. It also inherits some of
[the bias of its teacher model](https://huggingface.co/bert-base-uncased#limitations-and-bias).
This bias will also affect all fine-tuned versions of this model.
## Pre-training data
DistilBERT pretrained on the same data as BERT, which is [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset
consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia)
(excluding lists, tables and headers).
## Fine-tuning data
[Fake and real news dataset](https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset)
## Training procedure
### Preprocessing
In the preprocessing phase, both the title and the text of the news are concatenated using a separator `[SEP]`.
This makes the full text as:
```
[CLS] Title Sentence [SEP] News text body [SEP]
```
The data are split according to the following ratios:
- Training set 60%.
- Validation set 20%.
- Test set 20%.
Labels are mapped as: `{fake: 0, true: 1}`
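A short sketch of this preprocessing step, assuming the tokenizer of the `distilbert-base-cased` base checkpoint and made-up title/body strings:
```python
from transformers import AutoTokenizer

# Encoding a (title, body) pair reproduces the [CLS] title [SEP] body [SEP] layout described above.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased")

title = "Speaker pushes for border wall funding"  # made-up headline
body = "Yesterday, Speaker Paul Ryan tweeted a video of himself on the Mexican border."  # made-up body

encoded = tokenizer(title, body, truncation=True)
print(tokenizer.decode(encoded["input_ids"]))
# -> [CLS] Speaker pushes for border wall funding [SEP] Yesterday, Speaker Paul Ryan ... [SEP]
```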
### Fine-tuning
The model was fine-tuned on a GeForce GTX 960M for 5 hours. The parameters are:
| Parameter | Value |
|:-------------------:|:-----:|
| Learning rate | 5e-5 |
| Weight decay | 0.01 |
| Training batch size | 4 |
| Epochs | 3 |
Here are the scores during training:
| Epoch | Training Loss | Validation Loss | Accuracy | F1 | Precision | Recall |
|:----------:|:-------------:|:-----------------:|:----------:|:---------:|:-----------:|:---------:|
| 1 | 0.008300 | 0.005783 | 0.998330 | 0.998252 | 0.996511 | 1.000000 |
| 2 | 0.000000 | 0.000161 | 0.999889 | 0.999883 | 0.999767 | 1.000000 |
| 3 | 0.000000 | 0.000122 | 0.999889 | 0.999883 | 0.999767 | 1.000000 |
## Evaluation results
When fine-tuned on downstream task of fake news binary classification, this model achieved the following results:
(scores are rounded to 2 decimal places)
| | precision | recall | f1-score | support |
|:------------:|:---------:|:------:|:--------:|:-------:|
| Fake | 1.00 | 1.00 | 1.00 | 4697 |
| True | 1.00 | 1.00 | 1.00 | 4283 |
| accuracy | - | - | 1.00 | 8980 |
| macro avg | 1.00 | 1.00 | 1.00 | 8980 |
| weighted avg | 1.00 | 1.00 | 1.00 | 8980 |
Confusion matrix:
| Actual\Predicted | Fake | True |
|:-----------------:|:----:|:----:|
| Fake | 4696 | 1 |
| True | 1 | 4282 |
The AUC score is 0.9997
|
hackathon-pln-es/readability-es-3class-paragraphs | f6220c636bf2088177773e3a484f5ade1353ccb0 | 2022-04-04T10:42:19.000Z | [
"pytorch",
"roberta",
"text-classification",
"es",
"transformers",
"spanish",
"bertin",
"license:cc-by-4.0"
]
| text-classification | false | hackathon-pln-es | null | hackathon-pln-es/readability-es-3class-paragraphs | 11 | null | transformers | 11,219 | ---
language: es
license: cc-by-4.0
tags:
- spanish
- roberta
- bertin
pipeline_tag: text-classification
widget:
- text: Las Líneas de Nazca son una serie de marcas trazadas en el suelo, cuya anchura oscila entre los 40 y los 110 centímetros.
- text: Hace mucho tiempo, en el gran océano que baña las costas del Perú no había peces.
---
# Readability ES Paragraphs for three classes
Model based on the RoBERTa architecture, fine-tuned from [BERTIN](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) for readability assessment of Spanish texts.
## Description and performance
This version of the model was trained on a mix of datasets, using sentence-level granularity when possible. The model performs classification among three complexity levels:
- Basic.
- Intermediate.
- Advanced.
The relationship of these categories with the Common European Framework of Reference for Languages is described in [our report](https://wandb.ai/readability-es/readability-es/reports/Texts-Readability-Analysis-for-Spanish--VmlldzoxNzU2MDUx).
This model achieves an F1 macro average score of 0.7881, measured on the validation set.
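A minimal usage sketch with the `text-classification` pipeline; the input reuses one of the widget examples above, and the exact label strings returned depend on the saved config.
```python
from transformers import pipeline

# Three-class readability classifier for Spanish paragraphs.
readability = pipeline(
    "text-classification",
    model="hackathon-pln-es/readability-es-3class-paragraphs",
)

print(readability("Hace mucho tiempo, en el gran océano que baña las costas del Perú no había peces."))
```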
## Model variants
- [`readability-es-sentences`](https://huggingface.co/hackathon-pln-es/readability-es-sentences). Two classes, sentence-based dataset.
- [`readability-es-paragraphs`](https://huggingface.co/hackathon-pln-es/readability-es-paragraphs). Two classes, paragraph-based dataset.
- [`readability-es-3class-sentences`](https://huggingface.co/hackathon-pln-es/readability-es-3class-sentences). Three classes, sentence-based dataset.
- `readability-es-3class-paragraphs` (this model). Three classes, paragraph-based dataset.
## Datasets
- [`readability-es-hackathon-pln-public`](https://huggingface.co/datasets/hackathon-pln-es/readability-es-hackathon-pln-public), composed of:
* coh-metrix-esp corpus.
* Various text resources scraped from websites.
- Other non-public datasets: newsela-es, simplext.
## Training details
Please, refer to [this training run](https://wandb.ai/readability-es/readability-es/runs/22apaysv/overview) for full details on hyperparameters and training regime.
## Biases and Limitations
- Due to the scarcity of data and the lack of a reliable gold test set, performance metrics are reported on the validation set.
- One of the datasets involved is the Spanish version of newsela, which is frequently used as a reference. However, it was created by translating previous datasets, and therefore it may contain somewhat unnatural phrases.
- Some of the datasets used cannot be publicly disseminated, making it more difficult to assess the existence of biases or mistakes.
- Language might be biased towards the Spanish dialect spoken in Spain. Other regional variants might be sub-represented.
- No effort has been performed to alleviate the shortcomings and biases described in the [original implementation of BERTIN](https://huggingface.co/bertin-project/bertin-roberta-base-spanish#bias-examples-spanish).
## Authors
- [Laura Vásquez-Rodríguez](https://lmvasque.github.io/)
- [Pedro Cuenca](https://twitter.com/pcuenq)
- [Sergio Morales](https://www.fireblend.com/)
- [Fernando Alva-Manchego](https://feralvam.github.io/)
|
aprilzoo/distilbert-base-uncased-finetuned-emotion | f4db81f22d9428276eee34de57ceace08c85690a | 2022-04-04T05:50:42.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | aprilzoo | null | aprilzoo/distilbert-base-uncased-finetuned-emotion | 11 | null | transformers | 11,220 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.9232474678171817
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2202
- Accuracy: 0.923
- F1: 0.9232
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8244 | 1.0 | 250 | 0.3104 | 0.9025 | 0.8997 |
| 0.2478 | 2.0 | 500 | 0.2202 | 0.923 | 0.9232 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Kalaoke/bert-finetuned-sentiment | 7bb3107c7588fa8d016091b289330fa5779d4094 | 2022-04-16T09:54:26.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | Kalaoke | null | Kalaoke/bert-finetuned-sentiment | 11 | null | transformers | 11,221 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-finetuned-sentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-sentiment
This model is a fine-tuned version of [nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4884
- Accuracy: 0.7698
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6778 | 1.0 | 722 | 0.7149 | 0.7482 |
| 0.3768 | 2.0 | 1444 | 0.9821 | 0.7410 |
| 0.1612 | 3.0 | 2166 | 1.4027 | 0.7662 |
| 0.094 | 4.0 | 2888 | 1.4884 | 0.7698 |
| 0.0448 | 5.0 | 3610 | 1.6463 | 0.7590 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
HenryHXR/scibert_scivocab_uncased-finetuned-ner | e7868331f4685b15ad8da241a830bbf820fbbd28 | 2022-04-05T15:24:38.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| token-classification | false | HenryHXR | null | HenryHXR/scibert_scivocab_uncased-finetuned-ner | 11 | null | transformers | 11,222 | ---
tags:
- generated_from_trainer
model-index:
- name: scibert_scivocab_uncased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# scibert_scivocab_uncased-finetuned-ner
This model is a fine-tuned version of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Shadman-Rohan/distilbert-base-uncased-finetuned-emotion | 5b5678fa6c52b52d3ec164acba12b22d70e9a0cf | 2022-04-05T20:40:41.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Shadman-Rohan | null | Shadman-Rohan/distilbert-base-uncased-finetuned-emotion | 11 | null | transformers | 11,223 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.9247907524762314
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2083
- Accuracy: 0.9245
- F1: 0.9248
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7794 | 1.0 | 250 | 0.2870 | 0.9115 | 0.9099 |
| 0.2311 | 2.0 | 500 | 0.2083 | 0.9245 | 0.9248 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Stremie/roberta-base-clickbait-keywords | 5f5b8247ff6e1e3a973b27a059cbf1413b5a6e25 | 2022-04-18T12:52:44.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | Stremie | null | Stremie/roberta-base-clickbait-keywords | 11 | null | transformers | 11,224 | This model classifies whether a tweet is clickbait or not. It has been trained using [Webis-Clickbait-17](https://webis.de/data/webis-clickbait-17.html) dataset. Input is composed of 'postText' + '[SEP]' + 'targetKeywords'. Achieved ~0.7 F1-score on test data. |
Sleoruiz/distilbert-base-uncased-finetuned-emotion | 7cba274bbba91b6dc3c4c5b78cd216fda02e3db7 | 2022-04-07T06:34:58.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Sleoruiz | null | Sleoruiz/distilbert-base-uncased-finetuned-emotion | 11 | null | transformers | 11,225 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.927
- name: F1
type: f1
value: 0.9273201074587852
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2176
- Accuracy: 0.927
- F1: 0.9273
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8252 | 1.0 | 250 | 0.3121 | 0.916 | 0.9140 |
| 0.2471 | 2.0 | 500 | 0.2176 | 0.927 | 0.9273 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
palakagl/Roberta_Multiclass_TextClassification | 2740496d084e8649d34d097bf70cfb6b1f15541b | 2022-04-07T17:15:10.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:palakagl/autotrain-data-PersonalAssitant",
"transformers",
"autotrain",
"co2_eq_emissions"
]
| text-classification | false | palakagl | null | palakagl/Roberta_Multiclass_TextClassification | 11 | null | transformers | 11,226 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- palakagl/autotrain-data-PersonalAssitant
co2_eq_emissions: 0.014567637985425905
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 717221783
- CO2 Emissions (in grams): 0.014567637985425905
## Validation Metrics
- Loss: 0.38848456740379333
- Accuracy: 0.9180509413067552
- Macro F1: 0.9157418163085091
- Micro F1: 0.9180509413067552
- Weighted F1: 0.9185290137253468
- Macro Precision: 0.9189981206383326
- Micro Precision: 0.9180509413067552
- Weighted Precision: 0.9221607328493303
- Macro Recall: 0.9158232837734661
- Micro Recall: 0.9180509413067552
- Weighted Recall: 0.9180509413067552
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/palakagl/autotrain-PersonalAssitant-717221783
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("palakagl/autotrain-PersonalAssitant-717221783", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("palakagl/autotrain-PersonalAssitant-717221783", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
nielsr/segformer-test-v5 | 4e96814ca2cdaed1d3154badd6fcd38f53b0a9f9 | 2022-04-08T15:05:50.000Z | [
"pytorch",
"segformer",
"dataset:segments/sidewalk-semantic",
"transformers",
"vision",
"image-segmentation",
"license:apache-2.0"
]
| image-segmentation | false | nielsr | null | nielsr/segformer-test-v5 | 11 | null | transformers | 11,227 | ---
license: apache-2.0
tags:
- vision
- image-segmentation
datasets:
- segments/sidewalk-semantic
widget:
- src: https://segmentsai-prod.s3.eu-west-2.amazonaws.com/assets/admin-tobias/439f6843-80c5-47ce-9b17-0b2a1d54dbeb.jpg
example_title: Brugge
--- |
amrita03/wikineural-multilingual-ner | 3d5e0c242bbfed1b5dd21f2a381af3216fde6d8c | 2022-04-11T15:34:34.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | amrita03 | null | amrita03/wikineural-multilingual-ner | 11 | null | transformers | 11,228 | Entry not found |
brad1141/baseline_bertv3 | 547ef8fb339abb40e991cd7277df04d93963e863 | 2022-04-10T13:16:14.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | brad1141 | null | brad1141/baseline_bertv3 | 11 | null | transformers | 11,229 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: baseline_bertv3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# baseline_bertv3
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
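For readers who want to reproduce this setup with the `transformers` `Trainer`, the list above maps roughly onto the following `TrainingArguments` (a sketch under the assumption of a recent `transformers` release; the exact training script is not included in this card and the output directory name is hypothetical):
```python
from transformers import TrainingArguments

# Approximate translation of the hyperparameters listed above (illustrative only).
training_args = TrainingArguments(
    output_dir="baseline_bertv3",       # hypothetical output directory
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=8,      # gives the total effective train batch size of 8
    num_train_epochs=1,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,                            # Adam betas/epsilon stay at the library defaults listed above
)
```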
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Shivanand/wikineural-multilingual-ner | 8eacdca39e7aef59bb3c9fb271e3cec87b8a23b8 | 2022-04-11T21:15:12.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | Shivanand | null | Shivanand/wikineural-multilingual-ner | 11 | null | transformers | 11,230 | Entry not found |
Toshifumi/distilbert-base-uncased-finetuned-emotion | ecb89630a03c0751bb359245c1f904d56a1feb71 | 2022-04-13T09:56:27.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Toshifumi | null | Toshifumi/distilbert-base-uncased-finetuned-emotion | 11 | null | transformers | 11,231 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.927
- name: F1
type: f1
value: 0.9271941874206031
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2106
- Accuracy: 0.927
- F1: 0.9272
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8009 | 1.0 | 250 | 0.2968 | 0.912 | 0.9102 |
| 0.24 | 2.0 | 500 | 0.2106 | 0.927 | 0.9272 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Xenova/sponsorblock-classifier-v2 | 3fd1e1e46d62f4a189d0b5ce3d4c3770bcff7a0a | 2022-04-17T18:00:43.000Z | [
"pytorch",
"bert",
"text-classification",
"generic"
]
| text-classification | false | Xenova | null | Xenova/sponsorblock-classifier-v2 | 11 | null | generic | 11,232 | ---
tags:
- text-classification
- generic
library_name: generic
widget:
- text: 'This video is sponsored by squarespace'
example_title: Sponsor
- text: 'Check out the merch at linustechtips.com'
example_title: Unpaid/self promotion
- text: "Don't forget to like, comment and subscribe"
example_title: Interaction reminder
- text: 'pqh4LfPeCYs,824.695,826.267,826.133,829.876,835.933,927.581'
example_title: Extract text from video
---
|
SiriusRen/my-rubbish-model | cc80313e8fdbe1cdb3186b0973ca992cd9ff15e9 | 2022-04-14T07:11:40.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | SiriusRen | null | SiriusRen/my-rubbish-model | 11 | null | transformers | 11,233 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: my-rubbish-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-rubbish-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0
- Datasets 2.0.1.dev0
- Tokenizers 0.11.6
|
luquesky/distilbert-base-uncased-finetuned-emotion | 4ed1f8a48479262a92e36f9a9fba24233bfdf767 | 2022-04-14T17:48:19.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | luquesky | null | luquesky/distilbert-base-uncased-finetuned-emotion | 11 | null | transformers | 11,234 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.934
- name: F1
type: f1
value: 0.9337817808480242
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2155
- Accuracy: 0.934
- F1: 0.9338
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1768 | 1.0 | 250 | 0.1867 | 0.924 | 0.9235 |
| 0.1227 | 2.0 | 500 | 0.1588 | 0.934 | 0.9346 |
| 0.1031 | 3.0 | 750 | 0.1656 | 0.931 | 0.9306 |
| 0.0843 | 4.0 | 1000 | 0.1662 | 0.9395 | 0.9392 |
| 0.0662 | 5.0 | 1250 | 0.1714 | 0.9325 | 0.9326 |
| 0.0504 | 6.0 | 1500 | 0.1821 | 0.934 | 0.9338 |
| 0.0429 | 7.0 | 1750 | 0.2038 | 0.933 | 0.9324 |
| 0.0342 | 8.0 | 2000 | 0.2054 | 0.938 | 0.9379 |
| 0.0296 | 9.0 | 2250 | 0.2128 | 0.9345 | 0.9345 |
| 0.0211 | 10.0 | 2500 | 0.2155 | 0.934 | 0.9338 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
lewtun/MiniLMv2-L12-H384-distilled-finetuned-clinc | 63195386e6cfcc5f5d3c3bae998acb3c666f267e | 2022-04-25T14:10:02.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | lewtun | null | lewtun/MiniLMv2-L12-H384-distilled-finetuned-clinc | 11 | null | transformers | 11,235 | ---
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: MiniLMv2-L12-H384-distilled-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
revision: b189f1fa78f41282a748b673231c21dfb07182b5
metrics:
- name: Accuracy
type: accuracy
value: 0.9529032258064516
verified: false
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MiniLMv2-L12-H384-distilled-finetuned-clinc
This model is a fine-tuned version of [nreimers/MiniLMv2-L12-H384-distilled-from-RoBERTa-Large](https://huggingface.co/nreimers/MiniLMv2-L12-H384-distilled-from-RoBERTa-Large) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3058
- Accuracy: 0.9529
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9908 | 1.0 | 239 | 1.6816 | 0.3910 |
| 1.5212 | 2.0 | 478 | 1.2365 | 0.7697 |
| 1.129 | 3.0 | 717 | 0.9209 | 0.8706 |
| 0.8462 | 4.0 | 956 | 0.6978 | 0.9152 |
| 0.6497 | 5.0 | 1195 | 0.5499 | 0.9342 |
| 0.5124 | 6.0 | 1434 | 0.4447 | 0.9445 |
| 0.4196 | 7.0 | 1673 | 0.3797 | 0.9455 |
| 0.3587 | 8.0 | 1912 | 0.3358 | 0.95 |
| 0.3228 | 9.0 | 2151 | 0.3133 | 0.9513 |
| 0.3052 | 10.0 | 2390 | 0.3058 | 0.9529 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
|
flood/distilbert-base-uncased-finetuned-emotion | 3e8a74238b4335587ca3740ea56c5407090b7405 | 2022-05-27T07:34:17.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | flood | null | flood/distilbert-base-uncased-finetuned-emotion | 11 | null | transformers | 11,236 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: F1
type: f1
value: 0.9334621346059612
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1698
- Accuracy: 0.933
- F1: 0.9335
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.6265 | 1.0 | 500 | 0.2137 | 0.926 | 0.9256 |
| 0.1795 | 2.0 | 1000 | 0.1698 | 0.933 | 0.9335 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu102
- Datasets 2.2.2
- Tokenizers 0.12.1
|
MartinoMensio/racism-models-raw-label-epoch-4 | ffc8ad492bc87e476619082ab7cd0cac0d49aebb | 2022-05-04T16:06:20.000Z | [
"pytorch",
"bert",
"text-classification",
"es",
"transformers",
"license:mit"
]
| text-classification | false | MartinoMensio | null | MartinoMensio/racism-models-raw-label-epoch-4 | 11 | null | transformers | 11,237 | ---
language: es
license: mit
widget:
- text: "y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!"
---
### Description
This model is a fine-tuned version of [BETO (spanish bert)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) that has been trained on the *Datathon Against Racism* dataset (2022)
We performed several experiments that will be described in the upcoming paper "Estimating Ground Truth in a Low-labelled Data Regime: A Study of Racism Detection in Spanish" (NEATClasS 2022)
We applied 6 different methods of ground-truth estimation, and for each one we performed 4 epochs of fine-tuning. The result is a set of 24 models:
| method | epoch 1 | epoch 2 | epoch 3 | epoch 4 |
|--- |--- |--- |--- |--- |
| raw-label | [raw-label-epoch-1](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-1) | [raw-label-epoch-2](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-2) | [raw-label-epoch-3](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-3) | [raw-label-epoch-4](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-4) |
| m-vote-strict | [m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-1) | [m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-2) | [m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-3) | [m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-4) |
| m-vote-nonstrict | [m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-1) | [m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-2) | [m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-3) | [m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-4) |
| regression-w-m-vote | [regression-w-m-vote-epoch-1](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-1) | [regression-w-m-vote-epoch-2](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-2) | [regression-w-m-vote-epoch-3](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-3) | [regression-w-m-vote-epoch-4](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-4) |
| w-m-vote-strict | [w-m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-1) | [w-m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-2) | [w-m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-3) | [w-m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-4) |
| w-m-vote-nonstrict | [w-m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-1) | [w-m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2) | [w-m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-3) | [w-m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4) |
This model is `raw-label-epoch-4`
### Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
model_name = 'raw-label-epoch-4'
tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased")
full_model_path = f'MartinoMensio/racism-models-{model_name}'
model = AutoModelForSequenceClassification.from_pretrained(full_model_path)
pipe = pipeline("text-classification", model = model, tokenizer = tokenizer)
texts = [
'y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!',
'Es que los judíos controlan el mundo'
]
print(pipe(texts))
# [{'label': 'racist', 'score': 0.921501636505127}, {'label': 'non-racist', 'score': 0.9459075331687927}]
```
For more details, see https://github.com/preyero/neatclass22
|
MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4 | 6022ba170584f5eab3c4eed86252494a7993a516 | 2022-05-04T16:29:35.000Z | [
"pytorch",
"bert",
"text-classification",
"es",
"transformers",
"license:mit"
]
| text-classification | false | MartinoMensio | null | MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4 | 11 | null | transformers | 11,238 | ---
language: es
license: mit
widget:
- text: "y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!"
---
### Description
This model is a fine-tuned version of [BETO (spanish bert)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) that has been trained on the *Datathon Against Racism* dataset (2022)
We performed several experiments that will be described in the upcoming paper "Estimating Ground Truth in a Low-labelled Data Regime: A Study of Racism Detection in Spanish" (NEATClasS 2022)
We applied 6 different methods of ground-truth estimation, and for each one we performed 4 epochs of fine-tuning. The result is a set of 24 models:
| method | epoch 1 | epoch 2 | epoch 3 | epoch 4 |
|--- |--- |--- |--- |--- |
| raw-label | [raw-label-epoch-1](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-1) | [raw-label-epoch-2](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-2) | [raw-label-epoch-3](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-3) | [raw-label-epoch-4](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-4) |
| m-vote-strict | [m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-1) | [m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-2) | [m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-3) | [m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-4) |
| m-vote-nonstrict | [m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-1) | [m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-2) | [m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-3) | [m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-4) |
| regression-w-m-vote | [regression-w-m-vote-epoch-1](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-1) | [regression-w-m-vote-epoch-2](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-2) | [regression-w-m-vote-epoch-3](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-3) | [regression-w-m-vote-epoch-4](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-4) |
| w-m-vote-strict | [w-m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-1) | [w-m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-2) | [w-m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-3) | [w-m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-4) |
| w-m-vote-nonstrict | [w-m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-1) | [w-m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2) | [w-m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-3) | [w-m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4) |
This model is `w-m-vote-nonstrict-epoch-4`
### Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
model_name = 'w-m-vote-nonstrict-epoch-4'
tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased")
full_model_path = f'MartinoMensio/racism-models-{model_name}'
model = AutoModelForSequenceClassification.from_pretrained(full_model_path)
pipe = pipeline("text-classification", model = model, tokenizer = tokenizer)
texts = [
'y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!',
'Es que los judíos controlan el mundo'
]
print(pipe(texts))
# [{'label': 'racist', 'score': 0.996863842010498}, {'label': 'non-racist', 'score': 0.9982976317405701}]
```
For more details, see https://github.com/preyero/neatclass22
|
Artyom/ArmSpellcheck_beta | ecbb5813f74cf73a1604f70d07b43e17b252bc52 | 2022-05-02T09:54:31.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Artyom | null | Artyom/ArmSpellcheck_beta | 11 | null | transformers | 11,239 | Entry not found |
ShreyaR/finetuned-distil-bert-depression | c0eba014619e85b72fc2a8c4efc66b03be4483d2 | 2022-05-03T20:44:08.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | ShreyaR | null | ShreyaR/finetuned-distil-bert-depression | 11 | null | transformers | 11,240 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: finetuned-distil-bert-depression
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-distil-bert-depression
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1695
- Accuracy: 0.9445
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0243 | 1.0 | 625 | 0.2303 | 0.9205 |
| 0.0341 | 2.0 | 1250 | 0.1541 | 0.933 |
| 0.0244 | 3.0 | 1875 | 0.1495 | 0.9445 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
theta/Argument_Type_Bert | 1d837cddb63efae0c0e48b46ee3c5b6dea4454a8 | 2022-07-11T14:29:03.000Z | [
"pytorch",
"bert",
"text-classification",
"zh",
"transformers",
"Argument_Type_Bert",
"zh-tw",
"generated_from_trainer",
"model-index"
]
| text-classification | false | theta | null | theta/Argument_Type_Bert | 11 | null | transformers | 11,241 | ---
language:
- zh
tags:
- Argument_Type_Bert
- zh
- zh-tw
- generated_from_trainer
model-index:
- name: Argument_Type_Bert
results: []
---
This is the development branch; it is not stable. |
gzomer/claim-spotter-multilingual | 79e06688513ce607df5c29b0b229a2706d1969cd | 2022-04-17T18:04:09.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | gzomer | null | gzomer/claim-spotter-multilingual | 11 | null | transformers | 11,242 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: claim-spotter-multilingual
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# claim-spotter-multilingual
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3285
- F1: 0.7996
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5098 | 1.0 | 830 | 0.3507 | 0.7779 |
| 0.3577 | 2.0 | 1660 | 0.3285 | 0.7996 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
ardallie/distilbert-base-uncased-finetuned-emotion | 9b817c394503b6feefdb6cb3d571c1da0e173cbf | 2022-04-18T03:22:40.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | ardallie | null | ardallie/distilbert-base-uncased-finetuned-emotion | 11 | null | transformers | 11,243 | Entry not found |
dfsj/distilbert-base-uncased-finetuned-emotion | fcc1fcaafae3a01a7d38f73e7789ffb7f25e2c65 | 2022-04-18T07:12:17.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | dfsj | null | dfsj/distilbert-base-uncased-finetuned-emotion | 11 | null | transformers | 11,244 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.922
- name: F1
type: f1
value: 0.9222074564200887
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2170
- Accuracy: 0.922
- F1: 0.9222
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8116 | 1.0 | 250 | 0.3076 | 0.9035 | 0.9013 |
| 0.2426 | 2.0 | 500 | 0.2170 | 0.922 | 0.9222 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu102
- Datasets 2.0.0
- Tokenizers 0.12.1
|
rabiaqayyum/autotrain-mental-health-analysis-752423172 | cd50cab9bd84a0601023e9667a55d5b377c6caa3 | 2022-04-19T06:45:00.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:rabiaqayyum/autotrain-data-mental-health-analysis",
"transformers",
"autotrain",
"co2_eq_emissions"
]
| text-classification | false | rabiaqayyum | null | rabiaqayyum/autotrain-mental-health-analysis-752423172 | 11 | null | transformers | 11,245 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- rabiaqayyum/autotrain-data-mental-health-analysis
co2_eq_emissions: 313.3534743349287
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 752423172
- CO2 Emissions (in grams): 313.3534743349287
## Validation Metrics
- Loss: 0.6064515113830566
- Accuracy: 0.805171240644137
- Macro F1: 0.7253473044054398
- Micro F1: 0.805171240644137
- Weighted F1: 0.7970679970423672
- Macro Precision: 0.7477679873153633
- Micro Precision: 0.805171240644137
- Weighted Precision: 0.7966263131173029
- Macro Recall: 0.7143231260991618
- Micro Recall: 0.805171240644137
- Weighted Recall: 0.805171240644137
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/rabiaqayyum/autotrain-mental-health-analysis-752423172
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("rabiaqayyum/autotrain-mental-health-analysis-752423172", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("rabiaqayyum/autotrain-mental-health-analysis-752423172", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
GPL/arguana-tsdae-msmarco-distilbert-gpl | 25f2ca96fa0f3d3f9838168389b433e3a500c2b0 | 2022-04-19T15:20:05.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
]
| sentence-similarity | false | GPL | null | GPL/arguana-tsdae-msmarco-distilbert-gpl | 11 | null | sentence-transformers | 11,246 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 140000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
robkayinto/xlm-roberta-base-finetuned-panx-fr | 0f14785b4b5c9602d0bb5177570e6a64572c7cec | 2022-07-13T18:11:49.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| token-classification | false | robkayinto | null | robkayinto/xlm-roberta-base-finetuned-panx-fr | 11 | null | transformers | 11,247 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8299296953465015
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2848
- F1: 0.8299
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5989 | 1.0 | 191 | 0.3383 | 0.7928 |
| 0.2617 | 2.0 | 382 | 0.2966 | 0.8318 |
| 0.1672 | 3.0 | 573 | 0.2848 | 0.8299 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
demdecuong/vihealthbert-base-syllable | 419317680eaa513e6cc786f55dd9316d5e446e9a | 2022-04-20T07:57:30.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | false | demdecuong | null | demdecuong/vihealthbert-base-syllable | 11 | 1 | transformers | 11,248 | # <a name="introduction"></a> ViHealthBERT: Pre-trained Language Models for Vietnamese in Health Text Mining
ViHealthBERT is a strong baseline language model for Vietnamese in the healthcare domain.
We empirically investigate our model with different training strategies, achieving state-of-the-art (SOTA) performance on 3 downstream tasks: NER (COVID-19 & ViMQ), Acronym Disambiguation, and Summarization.
We introduce two Vietnamese datasets: the acronym dataset (acrDrAid) and the FAQ summarization dataset in the healthcare domain. Our acrDrAid dataset is annotated with 135 sets of keywords.
The general approaches and experimental results of ViHealthBERT can be found in our LREC-2022 Poster [paper]() (updated soon):
@article{vihealthbert,
title = {{ViHealthBERT: Pre-trained Language Models for Vietnamese in Health Text Mining}},
author = {Minh Phuc Nguyen, Vu Hoang Tran, Vu Hoang, Ta Duc Huy, Trung H. Bui, Steven Q. H. Truong },
journal = {13th Edition of its Language Resources and Evaluation Conference},
year = {2022}
}
### Installation <a name="install2"></a>
- Python 3.6+, and PyTorch >= 1.6
- Install `transformers`:
`pip install transformers==4.2.0`
### Pre-trained models <a name="models2"></a>
Model | #params | Arch. | Tokenizer
---|---|---|---
`demdecuong/vihealthbert-base-word` | 135M | base | Word-level
`demdecuong/vihealthbert-base-syllable` | 135M | base | Syllable-level
### Example usage <a name="usage1"></a>
```python
import torch
from transformers import AutoModel, AutoTokenizer
vihealthbert = AutoModel.from_pretrained("demdecuong/vihealthbert-base-word")
tokenizer = AutoTokenizer.from_pretrained("demdecuong/vihealthbert-base-word")
# INPUT TEXT MUST BE ALREADY WORD-SEGMENTED!
line = "Tôi là sinh_viên trường đại_học Công_nghệ ."
input_ids = torch.tensor([tokenizer.encode(line)])
with torch.no_grad():
features = vihealthbert(input_ids) # Models outputs are now tuples
```
### Example usage for raw text <a name="usage2"></a>
ViHealthBERT used the [RDRSegmenter](https://github.com/datquocnguyen/RDRsegmenter) from [VnCoreNLP](https://github.com/vncorenlp/VnCoreNLP) to pre-process the pre-training data.
We therefore highly recommend using the same word segmenter for ViHealthBERT downstream applications.
#### Installation
```
# Install the vncorenlp python wrapper
pip3 install vncorenlp
# Download VnCoreNLP-1.1.1.jar & its word segmentation component (i.e. RDRSegmenter)
mkdir -p vncorenlp/models/wordsegmenter
wget https://raw.githubusercontent.com/vncorenlp/VnCoreNLP/master/VnCoreNLP-1.1.1.jar
wget https://raw.githubusercontent.com/vncorenlp/VnCoreNLP/master/models/wordsegmenter/vi-vocab
wget https://raw.githubusercontent.com/vncorenlp/VnCoreNLP/master/models/wordsegmenter/wordsegmenter.rdr
mv VnCoreNLP-1.1.1.jar vncorenlp/
mv vi-vocab vncorenlp/models/wordsegmenter/
mv wordsegmenter.rdr vncorenlp/models/wordsegmenter/
```
`VnCoreNLP-1.1.1.jar` (27MB) and folder `models/` must be placed in the same working folder.
#### Example usage
```
# See more details at: https://github.com/vncorenlp/VnCoreNLP
# Load rdrsegmenter from VnCoreNLP
from vncorenlp import VnCoreNLP
rdrsegmenter = VnCoreNLP("/Absolute-path-to/vncorenlp/VnCoreNLP-1.1.1.jar", annotators="wseg", max_heap_size='-Xmx500m')
# Input
text = "Ông Nguyễn Khắc Chúc đang làm việc tại Đại học Quốc gia Hà Nội. Bà Lan, vợ ông Chúc, cũng làm việc tại đây."
# To perform word (and sentence) segmentation
sentences = rdrsegmenter.tokenize(text)
for sentence in sentences:
print(" ".join(sentence))
``` |
mateusqc/ner-bert-base-cased-pt-lenerbr | f404d870be2b50291502aadba3e0d810111f33ba | 2022-04-20T19:48:09.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | mateusqc | null | mateusqc/ner-bert-base-cased-pt-lenerbr | 11 | null | transformers | 11,249 | Entry not found |
brad1141/GPT2_v5 | 93b6430a8512ed2ee3d120cf286b44f31f5fc90c | 2022-04-21T05:44:56.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| token-classification | false | brad1141 | null | brad1141/GPT2_v5 | 11 | null | transformers | 11,250 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: GPT2_v5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GPT2_v5
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7670
- Precision: 0.7725
- Recall: 0.8367
- F1: 0.4733
- Accuracy: 0.7646
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 1.2212 | 1.0 | 1012 | 0.7874 | 0.7557 | 0.7560 | 0.4041 | 0.7150 |
| 0.7162 | 2.0 | 2024 | 0.7007 | 0.7495 | 0.8714 | 0.4855 | 0.7647 |
| 0.6241 | 3.0 | 3036 | 0.6799 | 0.7681 | 0.8532 | 0.4804 | 0.7702 |
| 0.5545 | 4.0 | 4048 | 0.6997 | 0.7635 | 0.8658 | 0.4814 | 0.7714 |
| 0.4963 | 5.0 | 5060 | 0.7186 | 0.7696 | 0.8470 | 0.4764 | 0.7669 |
| 0.449 | 6.0 | 6072 | 0.7436 | 0.7711 | 0.8382 | 0.4731 | 0.7644 |
| 0.4182 | 7.0 | 7084 | 0.7670 | 0.7725 | 0.8367 | 0.4733 | 0.7646 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Wootang01/gpt-neo-125M-finetuned-hkdse-english-paper4 | 9ab9405ced2bf5c2edf29e086aca0ba61bc48b2a | 2022-04-22T15:26:00.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
]
| text-generation | false | Wootang01 | null | Wootang01/gpt-neo-125M-finetuned-hkdse-english-paper4 | 11 | 1 | transformers | 11,251 | Entry not found |
PrasunMishra/finetuning-sentiment-model-3000-samples | 1d474973ea5bd6fe8d4c1e2fd3b6315c7db1339f | 2022-04-22T01:20:52.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | PrasunMishra | null | PrasunMishra/finetuning-sentiment-model-3000-samples | 11 | null | transformers | 11,252 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.9.1
- Datasets 2.1.0
- Tokenizers 0.11.6
|
okho0653/distilbert-base-uncased-zero-shot-sentiment-model | b0f4edb1bc7a4dbed1103dc48245698aaf948a5f | 2022-04-22T01:33:28.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | okho0653 | null | okho0653/distilbert-base-uncased-zero-shot-sentiment-model | 11 | null | transformers | 11,253 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-zero-shot-sentiment-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-zero-shot-sentiment-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Xibanya/DS9Bot | 5b3310e5f6ca1acc144144597ef272d1476e82cb | 2022-04-24T22:32:34.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-generation | false | Xibanya | null | Xibanya/DS9Bot | 11 | null | transformers | 11,254 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: ds9_all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ds9_all
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4079
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.372e-07
- train_batch_size: 1
- eval_batch_size: 1
- seed: 3138344630
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.1261 | 13.0 | 8619 | 3.4600 |
| 1.141 | 14.0 | 9282 | 3.4634 |
| 1.1278 | 15.0 | 9945 | 3.4665 |
| 1.1183 | 16.0 | 10608 | 3.4697 |
| 1.1048 | 17.0 | 11271 | 3.4714 |
| 1.1061 | 18.0 | 11934 | 3.4752 |
| 1.1471 | 19.0 | 12597 | 3.4773 |
| 1.1402 | 20.0 | 13260 | 3.4798 |
| 1.0847 | 21.0 | 13923 | 3.4811 |
| 1.1462 | 22.0 | 14586 | 3.4841 |
| 1.1107 | 23.0 | 15249 | 3.4852 |
| 1.1192 | 24.0 | 15912 | 3.4873 |
| 1.0868 | 25.0 | 16575 | 3.4879 |
| 1.1313 | 26.0 | 17238 | 3.4898 |
| 1.1033 | 27.0 | 17901 | 3.4915 |
| 1.1578 | 28.0 | 18564 | 3.4939 |
| 1.0987 | 29.0 | 19227 | 3.4947 |
| 1.0779 | 30.0 | 19890 | 3.4972 |
| 1.3567 | 61.0 | 20191 | 3.4576 |
| 1.3278 | 62.0 | 20522 | 3.4528 |
| 1.3292 | 63.0 | 20853 | 3.4468 |
| 1.3285 | 64.0 | 21184 | 3.4431 |
| 1.3032 | 65.0 | 21515 | 3.4370 |
| 1.318 | 66.0 | 21846 | 3.4345 |
| 1.3003 | 67.0 | 22177 | 3.4289 |
| 1.3202 | 68.0 | 22508 | 3.4274 |
| 1.2643 | 69.0 | 22839 | 3.4232 |
| 1.2862 | 70.0 | 23170 | 3.4223 |
| 1.2597 | 71.0 | 23501 | 3.4186 |
| 1.2426 | 72.0 | 23832 | 3.4176 |
| 1.2539 | 73.0 | 24163 | 3.4152 |
| 1.2604 | 74.0 | 24494 | 3.4147 |
| 1.263 | 75.0 | 24825 | 3.4128 |
| 1.2642 | 76.0 | 25156 | 3.4127 |
| 1.2694 | 77.0 | 25487 | 3.4109 |
| 1.2251 | 78.0 | 25818 | 3.4106 |
| 1.2673 | 79.0 | 26149 | 3.4097 |
| 1.233 | 80.0 | 26480 | 3.4096 |
| 1.2408 | 81.0 | 26811 | 3.4087 |
| 1.2579 | 82.0 | 27142 | 3.4088 |
| 1.2346 | 83.0 | 27473 | 3.4081 |
| 1.2298 | 84.0 | 27804 | 3.4082 |
| 1.219 | 85.0 | 28135 | 3.4079 |
| 1.2515 | 86.0 | 28466 | 3.4080 |
| 1.2316 | 87.0 | 28797 | 3.4084 |
| 1.2085 | 88.0 | 29128 | 3.4085 |
| 1.2334 | 89.0 | 29459 | 3.4085 |
| 1.2263 | 90.0 | 29790 | 3.4084 |
| 1.2312 | 91.0 | 30121 | 3.4084 |
| 1.2584 | 92.0 | 30452 | 3.4086 |
| 1.2106 | 93.0 | 30783 | 3.4089 |
| 1.2078 | 94.0 | 31114 | 3.4091 |
| 1.2329 | 95.0 | 31445 | 3.4090 |
| 1.1836 | 96.0 | 31776 | 3.4097 |
| 1.2135 | 97.0 | 32107 | 3.4097 |
| 1.2372 | 98.0 | 32438 | 3.4099 |
| 1.2163 | 99.0 | 32769 | 3.4107 |
| 1.1937 | 100.0 | 33100 | 3.4110 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu113
- Datasets 1.15.1
- Tokenizers 0.10.3
|
allenai/aspire-contextualsentence-multim-biomed | b7900aff86c9b8b608dfa1989f69fd6489d1903f | 2022-04-24T20:05:33.000Z | [
"pytorch",
"bert",
"feature-extraction",
"arxiv:2111.08366",
"transformers",
"license:apache-2.0"
]
| feature-extraction | false | allenai | null | allenai/aspire-contextualsentence-multim-biomed | 11 | null | transformers | 11,255 | ---
license: apache-2.0
---
## Overview
Model included in a paper for modeling fine grained similarity between documents:
**Title**: "Multi-Vector Models with Textual Guidance for Fine-Grained Scientific Document Similarity"
**Authors**: Sheshera Mysore, Arman Cohan, Tom Hope
**Paper**: https://arxiv.org/abs/2111.08366
**Github**: https://github.com/allenai/aspire
**Note**: In the context of the paper, this model is referred to as `tsAspire` and represents the paper's proposed multi-vector model for fine-grained scientific document similarity.
## Model Card
### Model description
This model is a BERT-based multi-vector model trained for fine-grained similarity of biomedical papers. The model takes the title and abstract of a paper as input and represents the paper with contextual sentence vectors obtained by averaging the token representations of the individual sentences - the whole title and abstract are encoded with cross-attention in the encoder block before the sentence embeddings are obtained. The model is trained by minimizing a Wasserstein/Earth Mover's Distance between the sentence vectors of a pair of documents - in the process also learning a sparse alignment between sentences in both documents. At test time, documents are ranked based on the Wasserstein Distance between all sentences of the documents, or between a set of query sentences and a candidate document's sentences.
### Training data
The model is trained on pairs of co-cited papers, with their sentences aligned by the co-citation context, in a contrastive learning setup using 1.2 million biomedical paper pairs. In training the model, negative examples for the contrastive loss are obtained as random in-batch negatives. Co-citations are obtained from the full text of papers. For example, the papers cited together in the sentence below are all co-cited, and each pair of those papers would be used as a training pair:
> The idea of distant supervision has been proposed and used widely in Relation Extraction (Mintz et al., 2009; Riedel et al., 2010; Hoffmann et al., 2011; Surdeanu et al., 2012) , where the source of labels is an external knowledge base.
### Training procedure
The model was trained with the Adam Optimizer and a learning rate of 2e-5 with 1000 warm-up steps followed by linear decay of the learning rate. The model training convergence is checked with the loss on a held out dev set consisting of co-cited paper pairs.
### Intended uses & limitations
This model is trained for fine-grained document similarity tasks in **biomedical** scientific text using multiple vectors per document. The model allows _multiple_ fine-grained sentence-to-sentence similarities between documents. The model is well suited to an aspect-conditional task formulation where a query might consist of sentence_s_ in a query document and candidates must be retrieved along the specified sentences. Here, the documents are the title and abstract of a paper. With appropriate fine-tuning the model can also be used for other tasks such as document- or sentence-level classification. Since the training data comes primarily from the biomedical domain, performance on other domains may be poorer.
### How to use
This model can be used via the `transformers` library, together with some additional code to compute contextual sentence vectors and to make multiple matches using optimal transport.
View example usage and sample document matches in the model github repo: [`examples/demo-contextualsentence-multim.ipynb`](https://github.com/allenai/aspire/blob/main/examples/demo-contextualsentence-multim.ipynb)
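For readers who cannot open the notebook, the snippet below is a minimal sketch of the core idea: encode the concatenated title and abstract once, then mean-pool the token states of each sentence into one contextual sentence vector. The example sentences, the naive sentence handling, and the omitted optimal-transport matching step are simplifying assumptions here, not the official Aspire toolkit code.
```python
import torch
from transformers import AutoTokenizer, AutoModel

name = "allenai/aspire-contextualsentence-multim-biomed"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)
model.eval()

# Title + abstract sentences of one paper (assumed pre-split and short enough to avoid truncation).
sentences = [
    "Fine-grained similarity of biomedical papers.",
    "We compare documents at the sentence level.",
    "Sentence alignments are computed with optimal transport.",
]
text = " ".join(sentences)

enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512,
                return_offsets_mapping=True)
offsets = enc.pop("offset_mapping")[0]
with torch.no_grad():
    token_states = model(**enc).last_hidden_state[0]  # (seq_len, hidden)

# Mean-pool the tokens whose character spans fall inside each sentence.
sent_vectors, cursor = [], 0
for sent in sentences:
    begin = text.index(sent, cursor)
    end = begin + len(sent)
    cursor = end
    keep = torch.tensor([(b >= begin and e <= end and e > b) for b, e in offsets.tolist()])
    sent_vectors.append(token_states[keep].mean(dim=0))

sent_vectors = torch.stack(sent_vectors)  # one contextual vector per sentence
# Two papers are then scored by a Wasserstein / Earth Mover's Distance between
# their sentence-vector sets, as implemented in the Aspire repository.
```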
### Variable and metrics
This model is evaluated on information retrieval datasets with document-level queries. Here we report performance on RELISH (biomedical/English) and TRECCOVID (biomedical/English). These are detailed on [github](https://github.com/allenai/aspire) and in our [paper](https://arxiv.org/abs/2111.08366). These datasets represent an abstract-level retrieval task, where given a query scientific abstract the task requires the retrieval of relevant candidate abstracts. In using this model we rank documents by the Wasserstein distance between the query sentences and a candidate's sentences.
### Evaluation results
The released model `aspire-contextualsentence-multim-biomed` is compared against `allenai/specter`. `aspire-contextualsentence-multim-biomed`<sup>*</sup> is the performance reported in our paper, obtained by averaging over 3 re-runs of the model. The released model `aspire-contextualsentence-multim-biomed` is the single best run among the 3 re-runs.
| | TRECCOVID | TRECCOVID | RELISH | RELISH |
|-------------------------------------------:|:---------:|:-------:|:------:|:-------:|
| | MAP | NDCG%20 | MAP | NDCG%20 |
| `specter` | 28.24 | 59.28 | 60.62 | 77.20 |
| `aspire-contextualsentence-multim-biomed`<sup>*</sup> | 30.92 | 62.23 | 62.57 | 78.95 |
| `aspire-contextualsentence-multim-biomed` | 31.25 | 62.99 | 62.24 | 78.65 |
**Alternative models:**
Besides the above models consider these alternative models also released in the Aspire paper:
[`aspire-contextualsentence-multim-compsci`](https://huggingface.co/allenai/aspire-contextualsentence-multim-compsci): If you wanted to run on computer science papers and want to use a model trained to match _multiple_ sentences between documents.
[`aspire-contextualsentence-singlem-biomed`](https://huggingface.co/allenai/aspire-contextualsentence-singlem-biomed): If you wanted to run on biomedical papers and want to use a model trained to match _single_ sentences between documents.
[`aspire-contextualsentence-singlem-compsci`](https://huggingface.co/allenai/aspire-contextualsentence-singlem-compsci): If you wanted to run on computer science papers and want to use a model trained to match _single_ sentences between documents. |
mrosinski/distilbert-base-uncased-finetuned-emotion | c1feeb777f2aa1094979f7cf5448cbcd8e3b9fab | 2022-07-21T03:22:06.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | mrosinski | null | mrosinski/distilbert-base-uncased-finetuned-emotion | 11 | null | transformers | 11,256 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.923306902377617
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2317
- Accuracy: 0.923
- F1: 0.9233
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8669 | 1.0 | 250 | 0.3344 | 0.9025 | 0.9004 |
| 0.2607 | 2.0 | 500 | 0.2317 | 0.923 | 0.9233 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
avacaondata/bertin-exist22-task1 | f497100c3c7aad178e992a62322c0217b49c0943 | 2022-04-23T23:28:22.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | avacaondata | null | avacaondata/bertin-exist22-task1 | 11 | null | transformers | 11,257 | Entry not found |
PdF/xlm-roberta-base-finetuned-panx-de | 4c36892395f28edab0d8eadec8762025ddead40f | 2022-04-24T01:31:50.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| token-classification | false | PdF | null | PdF/xlm-roberta-base-finetuned-panx-de | 11 | null | transformers | 11,258 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8657802022957154
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1348
- F1: 0.8658
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.254 | 1.0 | 525 | 0.1647 | 0.8200 |
| 0.1285 | 2.0 | 1050 | 0.1454 | 0.8443 |
| 0.0808 | 3.0 | 1575 | 0.1348 | 0.8658 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.2
- Datasets 2.1.0
- Tokenizers 0.10.3
|
IDEA-CCNL/Yuyuan-Bart-400M | c1bdb55f4151278bd236d74ccd6d7d684d5118a7 | 2022-04-24T10:07:05.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"arxiv:2204.03905",
"transformers",
"biobart",
"biomedical",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | IDEA-CCNL | null | IDEA-CCNL/Yuyuan-Bart-400M | 11 | 2 | transformers | 11,259 | ---
language:
- en
license: apache-2.0
tags:
- bart
- biobart
- biomedical
inference: true
widget:
- text: "Influenza is a <mask> disease."
- types: "text-generation"
---
# Yuyuan-Bart-400M, one model of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).
The Yuyuan-Bart-400M is a biomedical generative language model jointly produced by Tsinghua University and International Digital Economy Academy.
Paper: [BioBART: Pretraining and Evaluation of A Biomedical Generative Language Model](https://arxiv.org/pdf/2204.03905.pdf)
## Pretraining Corpora
We use PubMed abstracts as the pretraining corpora. The corpora contain about 41 GB of biomedical research paper abstracts on PubMed.
## Pretraining Setup
We continuously pretrain large versions of BART for 120k steps with a batch size of 2560. We use the same vocabulary as BART to tokenize the texts. Although the input length limit of BART is 1024, the tokenized PubMed abstracts rarely exceed 512. Therefore, for the sake of training efficiency, we truncate all input texts to a maximum length of 512. We mask 30% of the input tokens, and the masked span length is determined by sampling from a Poisson distribution (λ = 3), as used in BART. We use a learning-rate schedule with a 0.02 warm-up ratio and linear decay. The learning rate is set to 1e-4. We train the large version of BioBART (400M parameters) on 2 DGX nodes with 16 40GB A100 GPUs for about 168 hours with the help of the open-source framework DeepSpeed.
## Usage
```python
from transformers import BartForConditionalGeneration, BartTokenizer
tokenizer = BartTokenizer.from_pretrained('IDEA-CCNL/Yuyuan-Bart-400M')
model = BartForConditionalGeneration.from_pretrained('IDEA-CCNL/Yuyuan-Bart-400M')
text = 'Influenza is a <mask> disease.'
input_ids = tokenizer([text], return_tensors="pt")['input_ids']
model.eval()
generated_ids = model.generate(
input_ids=input_ids,
)
preds = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True) for g in generated_ids]
print(preds)
```
## Citation
If you find this resource useful, please cite the following in your paper.
```
@misc{BioBART,
title={BioBART: Pretraining and Evaluation of A Biomedical Generative Language Model},
author={Hongyi Yuan and Zheng Yuan and Ruyi Gan and Jiaxing Zhang and Yutao Xie and Sheng Yu},
year={2022},
eprint={2204.03905},
archivePrefix={arXiv}
}
``` |
Hate-speech-CNERG/malayalam-codemixed-abusive-MuRIL | 91522d2bbcbc8d1e786500e37c30b1f44135ee33 | 2022-05-03T08:47:17.000Z | [
"pytorch",
"bert",
"text-classification",
"ma-en",
"arxiv:2204.12543",
"transformers",
"license:afl-3.0"
]
| text-classification | false | Hate-speech-CNERG | null | Hate-speech-CNERG/malayalam-codemixed-abusive-MuRIL | 11 | null | transformers | 11,260 | ---
language: ma-en
license: afl-3.0
---
This model is used to detect **abusive speech** in **Code-Mixed Malayalam**. It is fine-tuned from the MuRIL model on a code-mixed Malayalam abusive speech dataset.
The model is trained with a learning rate of 2e-5. Training code can be found at this [url](https://github.com/hate-alert/IndicAbusive).
LABEL_0 :-> Normal
LABEL_1 :-> Abusive
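A minimal inference sketch (the `pipeline` call and the example string are assumptions, not part of the original release; the raw `LABEL_*` outputs map to the classes above):

```python
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="Hate-speech-CNERG/malayalam-codemixed-abusive-MuRIL",
)

# Placeholder input; pass a real code-mixed Malayalam sentence here.
print(detector("ninte oru kaaryam"))
# e.g. [{'label': 'LABEL_1', 'score': ...}]  -> LABEL_1 = Abusive
```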
### For more details about our paper
Mithun Das, Somnath Banerjee and Animesh Mukherjee. "[Data Bootstrapping Approaches to Improve Low Resource Abusive Language Detection for Indic Languages](https://arxiv.org/abs/2204.12543)". Accepted at ACM HT 2022.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{das2022data,
title={Data Bootstrapping Approaches to Improve Low Resource Abusive Language Detection for Indic Languages},
author={Das, Mithun and Banerjee, Somnath and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2204.12543},
year={2022}
}
~~~ |
cynthiachan/procedure_classification_distilbert | f5fa41c5b143d745308d66d4eb0167a557cb501b | 2022-04-26T05:42:29.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | cynthiachan | null | cynthiachan/procedure_classification_distilbert | 11 | null | transformers | 11,261 | Entry not found |
gagan3012/ArOCRv3 | 7612e56dc32637b2a8901fd10c485801caccdc06 | 2022-04-27T09:56:43.000Z | [
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"transformers"
]
| null | false | gagan3012 | null | gagan3012/ArOCRv3 | 11 | null | transformers | 11,262 | Entry not found |
manueltonneau/bert-twitter-pt-job-search | ed4429390acf6e83fcdc0fa496c782b1539cf61e | 2022-04-27T10:25:44.000Z | [
"pytorch",
"bert",
"text-classification",
"pt",
"arxiv:2203.09178",
"transformers"
]
| text-classification | false | manueltonneau | null | manueltonneau/bert-twitter-pt-job-search | 11 | null | transformers | 11,263 | ---
language: pt
widget:
- text: "Preciso de um emprego"
---
# Detection of employment status disclosures on Twitter
## Model main characteristics:
- class: Job Search (1), else (0)
- country: BR
- language: Portuguese
- architecture: BERT base
## Model description
This model is a version of `neuralmind/bert-base-portuguese-cased` finetuned to recognize Portuguese tweets mentioning that the user is currently looking for a job. It was trained on Portuguese tweets from users based in Brazil. The task is framed as a binary classification problem with:
- the positive class referring to tweets mentioning that the user is looking for a job (label=1)
- the negative class referring to all other tweets (label=0)
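A minimal scoring sketch under this framing, reusing the widget example above (the tokenizer/model classes are assumed from the standard 🤗 API):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "manueltonneau/bert-twitter-pt-job-search"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("Preciso de um emprego", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)

# Index 1 = "user is looking for a job", index 0 = all other tweets.
print(probs[0, 1].item())
```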
## Resources
The dataset of Portuguese tweets on which this classifier was trained is open-sourced [here](https://github.com/manueltonneau/twitter-unemployment).
Details on the performance can be found in our [ACL 2022 paper](https://arxiv.org/abs/2203.09178).
## Citation
If you find this model useful, please cite our paper (citation to come soon). |
ajtamayoh/bert-finetuned-ADEs_model_1 | d0d7256e39b46017a47e8b7b4f40240511490bcc | 2022-04-27T15:20:54.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| token-classification | false | ajtamayoh | null | ajtamayoh/bert-finetuned-ADEs_model_1 | 11 | null | transformers | 11,264 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ADEs_model_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ADEs_model_1
This model is a fine-tuned version of [jsylee/scibert_scivocab_uncased-finetuned-ner](https://huggingface.co/jsylee/scibert_scivocab_uncased-finetuned-ner) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1938
- Precision: 0.6759
- Recall: 0.6710
- F1: 0.6735
- Accuracy: 0.9132
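The entity label set is not documented in this card, so the snippet below is only a generic token-classification sketch (the aggregation strategy and the example sentence are assumptions):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ajtamayoh/bert-finetuned-ADEs_model_1",
    aggregation_strategy="simple",
)

# Hypothetical example; inspect the returned entity groups for ADE mentions.
print(ner("The patient developed severe nausea after taking ibuprofen."))
```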
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1987 | 1.0 | 640 | 0.1989 | 0.6618 | 0.6692 | 0.6655 | 0.9116 |
| 0.1954 | 2.0 | 1280 | 0.1953 | 0.6710 | 0.6532 | 0.6620 | 0.9132 |
| 0.1934 | 3.0 | 1920 | 0.1961 | 0.6586 | 0.6823 | 0.6702 | 0.9120 |
| 0.1879 | 4.0 | 2560 | 0.1940 | 0.6727 | 0.6718 | 0.6722 | 0.9133 |
| 0.1897 | 5.0 | 3200 | 0.1938 | 0.6759 | 0.6710 | 0.6735 | 0.9132 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
classla/wav2vec2-xls-r-parlaspeech-hr-lm | 5032954a2c46442d3bcd7aedea30b23829b7cbd7 | 2022-05-18T14:20:40.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hr",
"dataset:parlaspeech-hr",
"transformers",
"audio",
"parlaspeech"
]
| automatic-speech-recognition | false | classla | null | classla/wav2vec2-xls-r-parlaspeech-hr-lm | 11 | null | transformers | 11,265 | ---
language: hr
datasets:
- parlaspeech-hr
tags:
- audio
- automatic-speech-recognition
- parlaspeech
widget:
- example_title: example 1
src: https://huggingface.co/classla/wav2vec2-xls-r-parlaspeech-hr-lm/raw/main/1800.m4a
- example_title: example 2
src: https://huggingface.co/classla/wav2vec2-xls-r-parlaspeech-hr-lm/raw/main/00020578b.flac.wav
- example_title: example 3
src: https://huggingface.co/classla/wav2vec2-xls-r-parlaspeech-hr-lm/raw/main/00020570a.flac.wav
---
# wav2vec2-xls-r-parlaspeech-hr-lm
This model for Croatian ASR is based on the [facebook/wav2vec2-xls-r-300m model](https://huggingface.co/facebook/wav2vec2-xls-r-300m) and was fine-tuned with 300 hours of recordings and transcripts from the ASR Croatian parliament dataset [ParlaSpeech-HR v1.0](http://hdl.handle.net/11356/1494).
If you use this model, please cite the following paper:
Nikola Ljubešić, Danijel Koržinek, Peter Rupnik, Ivo-Pavao Jazbec. ParlaSpeech-HR -- a freely available ASR dataset for Croatian bootstrapped from the ParlaMint corpus. Accepted at ParlaCLARIN@LREC.
## Metrics
Evaluation is performed on the dev and test portions of the [ParlaSpeech-HR v1.0](http://hdl.handle.net/11356/1494) dataset.
|split|CER|WER|
|---|---|---|
|dev|0.0448|0.1129|
|test|0.0363|0.0985|
## Usage in `transformers`
Tested with `transformers==4.18.0`, `torch==1.11.0`, and `SoundFile==0.10.3.post1`.
```python
from transformers import Wav2Vec2ProcessorWithLM, Wav2Vec2ForCTC
import soundfile as sf
import torch
import os
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# load the processor (tokenizer + feature extractor + LM decoder) and the model
processor = Wav2Vec2ProcessorWithLM.from_pretrained(
    "classla/wav2vec2-xls-r-parlaspeech-hr-lm")
model = Wav2Vec2ForCTC.from_pretrained("classla/wav2vec2-xls-r-parlaspeech-hr-lm").to(device)
# download the example wav file:
os.system("wget https://huggingface.co/classla/wav2vec2-large-slavic-parlaspeech-hr/raw/main/00020570a.flac.wav")
# read the wav file and extract input features
speech, sample_rate = sf.read("00020570a.flac.wav")
inputs = processor(speech, sampling_rate=sample_rate, return_tensors="pt").to(device)
with torch.no_grad():
    logits = model(**inputs).logits
# decode with the language model
transcription = processor.batch_decode(logits.cpu().numpy()).text[0]
# remove the raw wav file
os.system("rm 00020570a.flac.wav")
transcription
# transcription: 'velik broj poslovnih subjekata posluje sa minusom velik dio'
```
## Training hyperparameters
In fine-tuning, the following arguments were used:
| arg | value |
|-------------------------------|-------|
| `per_device_train_batch_size` | 16 |
| `gradient_accumulation_steps` | 4 |
| `num_train_epochs` | 8 |
| `learning_rate` | 3e-4 |
| `warmup_steps` | 500 | |
schnell/wakaformer | 07cc721f4ae1bf5256117f79f8fcddf45cb54c9c | 2022-04-29T15:18:00.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | false | schnell | null | schnell/wakaformer | 11 | 0 | transformers | 11,266 | ---
license: apache-2.0
---
|
cfilt/HiNER-collapsed-muril-base-cased | b88260cea1f9cb60dfc73581d041b1cc6e6f4486 | 2022-05-01T19:48:15.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:cfilt/HiNER-collapsed",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| token-classification | false | cfilt | null | cfilt/HiNER-collapsed-muril-base-cased | 11 | null | transformers | 11,267 | ---
tags:
- generated_from_trainer
datasets:
- cfilt/HiNER-collapsed
metrics:
- precision
- recall
- f1
model-index:
- name: HiNER-collapsed-muril-base-cased
results:
- task:
name: Token Classification
type: token-classification
dataset:
type: cfilt/HiNER-collapsed
name: HiNER Collapsed
metrics:
- name: Precision
type: precision
value: 0.9049101352603298
- name: Recall
type: recall
value: 0.9209156735555891
- name: F1
type: f1
value: 0.9128427506027924
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# HiNER-collapsed-muril-base-cased
This model was trained from scratch on the cfilt/HiNER-collapsed dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Framework versions
- Transformers 4.14.0
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
ChrisZeng/t5-base-detox | 03ecfe96e3a425dc8227be124f492c16271ef0d8 | 2022-04-30T21:53:04.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | ChrisZeng | null | ChrisZeng/t5-base-detox | 11 | null | transformers | 11,268 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-base-detox
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-detox
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2615
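The expected input format is not documented here; assuming the model rewrites a toxic sentence passed in as-is (an unverified assumption), a minimal generation sketch looks like this:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "ChrisZeng/t5-base-detox"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

# Assumed usage: feed the offensive sentence directly and decode the rewritten text.
input_ids = tokenizer("put an offensive example sentence here", return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```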
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.337 | 1.0 | 135 | 0.4810 |
| 0.5238 | 2.0 | 270 | 0.3886 |
| 0.4301 | 3.0 | 405 | 0.3378 |
| 0.3755 | 4.0 | 540 | 0.3122 |
| 0.3359 | 5.0 | 675 | 0.2910 |
| 0.3068 | 6.0 | 810 | 0.2737 |
| 0.2861 | 7.0 | 945 | 0.2710 |
| 0.2744 | 8.0 | 1080 | 0.2617 |
| 0.2649 | 9.0 | 1215 | 0.2630 |
| 0.2585 | 10.0 | 1350 | 0.2615 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.12.0.dev20220429
- Datasets 2.1.0
- Tokenizers 0.10.3
|
hf-internal-testing/wav2vec2-conformer-xvector | 4096302af149a7fd6384a9b489e69588017ee245 | 2022-05-01T16:03:28.000Z | [
"pytorch",
"wav2vec2-conformer",
"audio-xvector",
"transformers"
]
| null | false | hf-internal-testing | null | hf-internal-testing/wav2vec2-conformer-xvector | 11 | null | transformers | 11,269 | Entry not found |
Preetiha/clause_classification | bef152c6d44e2e2a87dd6cf70c2d571df021a322 | 2022-05-02T00:07:12.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:Preetiha/autotrain-data-clause-classification",
"transformers",
"autotrain",
"co2_eq_emissions"
]
| text-classification | false | Preetiha | null | Preetiha/clause_classification | 11 | 1 | transformers | 11,270 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Preetiha/autotrain-data-clause-classification
co2_eq_emissions: 44.494127975699804
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 812025458
- CO2 Emissions (in grams): 44.494127975699804
## Validation Metrics
- Loss: 0.5240132808685303
- Accuracy: 0.8673
- Macro F1: 0.7979496833221609
- Micro F1: 0.8673
- Weighted F1: 0.8616433030199793
- Macro Precision: 0.8263528446923423
- Micro Precision: 0.8673
- Weighted Precision: 0.8702574307362431
- Macro Recall: 0.7953048612545152
- Micro Recall: 0.8673
- Weighted Recall: 0.8673
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Preetiha/autotrain-clause-classification-812025458
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Preetiha/autotrain-clause-classification-812025458", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Preetiha/autotrain-clause-classification-812025458", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
alla1101/distilbert-base-uncased-finetuned-emotion | e19f6592135c7851815fd0e28447487bf404d3f6 | 2022-05-03T08:11:40.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | alla1101 | null | alla1101/distilbert-base-uncased-finetuned-emotion | 11 | null | transformers | 11,271 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.9240869504197766
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2236
- Accuracy: 0.924
- F1: 0.9241
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3293 | 0.901 | 0.8979 |
| No log | 2.0 | 500 | 0.2236 | 0.924 | 0.9241 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
0x7194633/BulgakovLM-tur | 27434232933ace59d1e03dc9896c6938f782a19c | 2022-05-03T08:56:52.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | 0x7194633 | null | 0x7194633/BulgakovLM-tur | 11 | null | transformers | 11,272 | Entry not found |
iis2009002/distilbert-base-uncased-finetuned-emotion | 7d2aaf6b3e4a957540e87728a43ff7852ad1b402 | 2022-05-04T07:49:35.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | iis2009002 | null | iis2009002/distilbert-base-uncased-finetuned-emotion | 11 | null | transformers | 11,273 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.926
- name: F1
type: f1
value: 0.925904463781861
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2133
- Accuracy: 0.926
- F1: 0.9259
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.827 | 1.0 | 250 | 0.3060 | 0.9075 | 0.9044 |
| 0.2452 | 2.0 | 500 | 0.2133 | 0.926 | 0.9259 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
varsha12/BERT_DNRTI | fdd9ff8dd442d04205a14f745be8b6e086e1721b | 2022-06-28T17:26:26.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
]
| token-classification | false | varsha12 | null | varsha12/BERT_DNRTI | 11 | null | transformers | 11,274 | ---
license: afl-3.0
---
|
vumichien/wav2vec2-xls-r-300m-japanese-large-ver2 | 74f6be34415dca7aa9b1ba7e41f5d44fdc06fe5d | 2022-05-17T10:41:08.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
]
| automatic-speech-recognition | false | vumichien | null | vumichien/wav2vec2-xls-r-300m-japanese-large-ver2 | 11 | null | transformers | 11,275 | Entry not found |
vuiseng9/bert-l-squadv1.1-sl384 | 8177c08284262c5cdae638fa035eb40783596b97 | 2022-05-07T00:15:48.000Z | [
"pytorch",
"tf",
"jax",
"onnx",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| question-answering | false | vuiseng9 | null | vuiseng9/bert-l-squadv1.1-sl384 | 11 | null | transformers | 11,276 | ---
license: apache-2.0
datasets:
- squad
model-index:
- name: bert-l-squadv1.1-sl384
results: []
---
This model is a fork of [bert-large-uncased-whole-word-masking-finetuned-squad](https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad).
ONNX and OpenVINO-IR models are enclosed.
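For the PyTorch checkpoint, a minimal extractive-QA sketch (the `pipeline` call, question, and context are illustrative assumptions; ONNX/OpenVINO inference is not shown):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="vuiseng9/bert-l-squadv1.1-sl384")

result = qa(
    question="What is the maximum sequence length?",
    context="The model was fine-tuned on SQuAD v1.1 with a maximum sequence length of 384.",
)
print(result["answer"], result["score"])
```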
### Evaluation
Evaluated in ```v4.9.2```:
```
eval_exact_match = 86.9253
eval_f1 = 93.1563
eval_samples = 10784
``` |
hidude562/Wiki-Complexity | d511307f0a832e9b586b69c4e5a9c4149a112708 | 2022-05-08T15:11:01.000Z | [
"pytorch",
"jax",
"distilbert",
"text-classification",
"en",
"dataset:hidude562/autotrain-data-SimpleDetect",
"transformers",
"autotrain",
"co2_eq_emissions"
]
| text-classification | false | hidude562 | null | hidude562/Wiki-Complexity | 11 | null | transformers | 11,277 | ---
tags: autotrain
language: en
widget:
- text: "I quite enjoy using AutoTrain due to its simplicity."
datasets:
- hidude562/autotrain-data-SimpleDetect
co2_eq_emissions: 0.21691606119445225
---
# Model Description
This model detects whether you are writing in a style closer to Simple English Wikipedia or to English Wikipedia. It can be extended to applications beyond Wikipedia and, to some extent, to other languages.
Please also note that there is a major bias toward special characters (mainly the hyphen, but others as well), so I would recommend removing them from your input text.
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 837726721
- CO2 Emissions (in grams): 0.21691606119445225
## Validation Metrics
- Loss: 0.010096958838403225
- Accuracy: 0.996223414828066
- Macro F1: 0.996179398826373
- Micro F1: 0.996223414828066
- Weighted F1: 0.996223414828066
- Macro Precision: 0.996179398826373
- Micro Precision: 0.996223414828066
- Weighted Precision: 0.996223414828066
- Macro Recall: 0.996179398826373
- Micro Recall: 0.996223414828066
- Weighted Recall: 0.996223414828066
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I quite enjoy using AutoTrain due to its simplicity."}' https://api-inference.huggingface.co/models/hidude562/Wiki-Complexity
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("hidude562/Wiki-Complexity", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("hidude562/Wiki-Complexity", use_auth_token=True)
inputs = tokenizer("I quite enjoy using AutoTrain due to its simplicity.", return_tensors="pt")
outputs = model(**inputs)
``` |
Jeevesh8/bert_ft_qqp-26 | 5fd9081fcd8511de41b2ba425884d4e90921106b | 2022-05-09T10:36:44.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-26 | 11 | null | transformers | 11,278 | Entry not found |
Jeevesh8/bert_ft_qqp-99 | e429b7cd0c1dd70376f8dc03dfa8b773fda50395 | 2022-05-09T13:43:31.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-99 | 11 | null | transformers | 11,279 | Entry not found |
tomhosking/deberta-v3-base-debiased-nli | 2d83b2709292f3d9cca8c35d43137f9b17750753 | 2022-05-10T08:15:40.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"transformers",
"license:apache-2.0"
]
| text-classification | false | tomhosking | null | tomhosking/deberta-v3-base-debiased-nli | 11 | null | transformers | 11,280 | ---
license: apache-2.0
widget:
- text: "[CLS] Rover is a dog. [SEP] Rover is a cat. [SEP]"
---
`deberta-v3-base`, fine-tuned on the debiased NLI dataset from "Generating Data to Mitigate Spurious Correlations in Natural Language Inference Datasets", Wu et al., 2022.
Tuned using the code at https://github.com/jimmycode/gen-debiased-nli
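A minimal NLI sketch reusing the premise/hypothesis pair from the widget above; the label order is not documented here, so the snippet reads it from `model.config.id2label` instead of assuming it:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "tomhosking/deberta-v3-base-debiased-nli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("Rover is a dog.", "Rover is a cat.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]

# Map probabilities to label names instead of hard-coding an NLI label order.
print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs)})
```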
|
CEBaB/gpt2.CEBaB.sa.3-class.exclusive.seed_42 | 4b070292a79109ada92daf31c0747a6172cb2fd4 | 2022-05-10T23:49:46.000Z | [
"pytorch",
"gpt2",
"transformers"
]
| null | false | CEBaB | null | CEBaB/gpt2.CEBaB.sa.3-class.exclusive.seed_42 | 11 | null | transformers | 11,281 | Entry not found |
CEBaB/gpt2.CEBaB.sa.2-class.exclusive.seed_66 | 7b0e049d8a8856580ed00dd480f7d46532c8acb8 | 2022-05-11T00:24:10.000Z | [
"pytorch",
"gpt2",
"transformers"
]
| null | false | CEBaB | null | CEBaB/gpt2.CEBaB.sa.2-class.exclusive.seed_66 | 11 | null | transformers | 11,282 | Entry not found |
CEBaB/gpt2.CEBaB.sa.3-class.exclusive.seed_66 | c1063edfd01082a9856ee9698a3b7c8574bd26fa | 2022-05-11T00:41:15.000Z | [
"pytorch",
"gpt2",
"transformers"
]
| null | false | CEBaB | null | CEBaB/gpt2.CEBaB.sa.3-class.exclusive.seed_66 | 11 | null | transformers | 11,283 | Entry not found |
CEBaB/gpt2.CEBaB.sa.2-class.exclusive.seed_77 | 97fb2f754530aaf0b7b035ac3b9e05e1b233cb77 | 2022-05-11T01:16:26.000Z | [
"pytorch",
"gpt2",
"transformers"
]
| null | false | CEBaB | null | CEBaB/gpt2.CEBaB.sa.2-class.exclusive.seed_77 | 11 | null | transformers | 11,284 | Entry not found |
CEBaB/gpt2.CEBaB.sa.2-class.exclusive.seed_88 | f0bf69b8bb96d123a0b259678962ffc9af3bb290 | 2022-05-11T02:08:44.000Z | [
"pytorch",
"gpt2",
"transformers"
]
| null | false | CEBaB | null | CEBaB/gpt2.CEBaB.sa.2-class.exclusive.seed_88 | 11 | null | transformers | 11,285 | Entry not found |
CEBaB/gpt2.CEBaB.sa.3-class.exclusive.seed_88 | fb2abe7a7d3198f7c1c5e4691d191c4cf5b61c18 | 2022-05-11T02:25:44.000Z | [
"pytorch",
"gpt2",
"transformers"
]
| null | false | CEBaB | null | CEBaB/gpt2.CEBaB.sa.3-class.exclusive.seed_88 | 11 | null | transformers | 11,286 | Entry not found |
CEBaB/gpt2.CEBaB.sa.5-class.exclusive.seed_88 | c15391964ea7e04635988233f98701c06ee39415 | 2022-05-11T02:42:44.000Z | [
"pytorch",
"gpt2",
"transformers"
]
| null | false | CEBaB | null | CEBaB/gpt2.CEBaB.sa.5-class.exclusive.seed_88 | 11 | null | transformers | 11,287 | Entry not found |
CEBaB/gpt2.CEBaB.sa.2-class.exclusive.seed_99 | 9ba9e845e50ecc41762ba9bc5215d7b5e6f389ba | 2022-05-11T03:00:02.000Z | [
"pytorch",
"gpt2",
"transformers"
]
| null | false | CEBaB | null | CEBaB/gpt2.CEBaB.sa.2-class.exclusive.seed_99 | 11 | null | transformers | 11,288 | Entry not found |
CEBaB/gpt2.CEBaB.sa.3-class.exclusive.seed_99 | f9aa79a382a34016fc1c5bf6bccd4971bc33a19b | 2022-05-11T03:17:19.000Z | [
"pytorch",
"gpt2",
"transformers"
]
| null | false | CEBaB | null | CEBaB/gpt2.CEBaB.sa.3-class.exclusive.seed_99 | 11 | null | transformers | 11,289 | Entry not found |
CEBaB/gpt2.CEBaB.sa.5-class.exclusive.seed_99 | 51cae8e619ab8bcb01a3f1238da48d9849a7fc76 | 2022-05-11T03:34:36.000Z | [
"pytorch",
"gpt2",
"transformers"
]
| null | false | CEBaB | null | CEBaB/gpt2.CEBaB.sa.5-class.exclusive.seed_99 | 11 | null | transformers | 11,290 | Entry not found |
AndyGo/speechbrain-asr-crdnn-rnnlm-buriy-audiobooks-2-val | a0dbdbfcdc6df87094a1cc83770664da9bfeec58 | 2022-05-19T14:53:31.000Z | [
"ru",
"dataset:buriy-audiobooks-2-val",
"arxiv:2106.04624",
"speechbrain",
"automatic-speech-recognition",
"CTC",
"Attention",
"pytorch",
"license:apache-2.0"
]
| automatic-speech-recognition | false | AndyGo | null | AndyGo/speechbrain-asr-crdnn-rnnlm-buriy-audiobooks-2-val | 11 | null | speechbrain | 11,291 | ---
language: "ru"
thumbnail:
tags:
- automatic-speech-recognition
- CTC
- Attention
- pytorch
- speechbrain
license: "apache-2.0"
datasets:
- buriy-audiobooks-2-val
metrics:
- wer
- cer
---
| Release | Test WER | GPUs |
|:-------------:|:--------------:| :--------:|
| 22-05-11 | - | 1xK80 24GB |
After 9 epochs of training: valid %WER 4.09e+02.
After 12 epochs of training: valid %WER 2.07e+02, test WER 1.78e+02.
## Pipeline description
(description adapted from the SpeechBrain model card)
This ASR system is composed of 3 different but linked blocks:
- Tokenizer (unigram) that transforms words into subword units, trained on
the training transcriptions of LibriSpeech.
- Neural language model (RNNLM) trained on the full (380K) words dataset.
- Acoustic model (CRDNN + CTC/Attention). The CRDNN architecture is made of
N blocks of convolutional neural networks with normalisation and pooling on the
frequency domain. Then, a bidirectional LSTM is connected to a final DNN to obtain
the final acoustic representation that is given to the CTC and attention decoders.
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *transcribe_file* if needed.
## Install SpeechBrain
First of all, please install SpeechBrain with the following command:
```
pip install speechbrain
```
Please note that SpeechBrain encourages you to read the tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Transcribing your own audio files (in Russian)
```python
from speechbrain.pretrained import EncoderDecoderASR
asr_model = EncoderDecoderASR.from_hparams(source="AndyGo/speechbrain-asr-crdnn-rnnlm-buriy-audiobooks-2-val", savedir="pretrained_models/speech-brain-asr-crdnn-rnnlm-buriy-audiobooks-2-val")
asr_model.transcribe_file('speechbrain-asr-crdnn-rnnlm-buriy-audiobooks-2-val/example.wav')
```
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
### Russian Speech Datasets
Russian Speech Datasets are provided by Microsoft Corporation with CC BY-NC license.
Instructions for downloading: https://github.com/snakers4/open_stt
The CC BY-NC license requires that the original copyright owner be listed as the author and that the work be used only for non-commercial purposes.
We used buriy-audiobooks-2-val dataset
## About SpeechBrain
Website: https://speechbrain.github.io/
Code: https://github.com/speechbrain/speechbrain/
HuggingFace: https://huggingface.co/speechbrain/
## Citing SpeechBrain
Please cite SpeechBrain if you use it for your research or business.
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
} |
sismetanin/rubert-rusentitweet | 3d0096723baa6adbe1052ce4277bcc6982dfdddd | 2022-05-12T20:53:24.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | sismetanin | null | sismetanin/rubert-rusentitweet | 11 | null | transformers | 11,292 |
| class | precision | recall | f1-score | support |
|:-------------|---------:|---------:|---------:|--------:|
| negative | 0.681957 | 0.675758 | 0.678843 | 660 |
| neutral | 0.707845 | 0.735019 | 0.721176 | 1068 |
| positive | 0.596591 | 0.652174 | 0.623145 | 483 |
| skip | 0.583062 | 0.485095 | 0.529586 | 369 |
| speech | 0.827160 | 0.676768 | 0.744444 | 99 |
| accuracy | | | 0.668906 | 2679 |
| macro avg | 0.679323 | 0.644963 | 0.659439 | 2679 |
| weighted avg | 0.668631 | 0.668906 | 0.667543 | 2679 |

3 Runs:
- Avg macro Precision: 0.6747772329026972
- Avg macro Recall: 0.6436866944877477
- Avg macro F1: 0.654867154097531
- Avg weighted F1: 0.6649503767906553 |
Ninh/distilbert-base-uncased-finetuned-emotion | d8843939d3478c5a8a56e4fc63b9b260d5107e53 | 2022-05-13T02:41:30.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Ninh | null | Ninh/distilbert-base-uncased-finetuned-emotion | 11 | null | transformers | 11,293 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.9241543444176422
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2144
- Accuracy: 0.924
- F1: 0.9242
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8028 | 1.0 | 250 | 0.3015 | 0.91 | 0.9089 |
| 0.2382 | 2.0 | 500 | 0.2144 | 0.924 | 0.9242 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
maazmikail/finetuning-sentiment-model-urdu-roberta | f558ad46571eac72b6cb0c436f4290b9f3744007 | 2022-05-16T19:01:35.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | maazmikail | null | maazmikail/finetuning-sentiment-model-urdu-roberta | 11 | null | transformers | 11,294 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: finetuning-sentiment-model-urdu-roberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-urdu-roberta
This model is a fine-tuned version of [urduhack/roberta-urdu-small](https://huggingface.co/urduhack/roberta-urdu-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
W42/distilbert-base-uncased-finetuned-emotion | 03eece33d522ae6094b10f1d231a3633abc58eb7 | 2022-05-16T15:20:30.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | W42 | null | W42/distilbert-base-uncased-finetuned-emotion | 11 | null | transformers | 11,295 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.927
- name: F1
type: f1
value: 0.9271021143652434
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2158
- Accuracy: 0.927
- F1: 0.9271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8302 | 1.0 | 250 | 0.3104 | 0.905 | 0.9032 |
| 0.2499 | 2.0 | 500 | 0.2158 | 0.927 | 0.9271 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.2.1
- Tokenizers 0.12.1
|
CEBaB/bert-base-uncased.CEBaB.absa.exclusive.seed_66 | 806e243a2f86975faef20b95e8c73556e4f9705c | 2022-05-17T18:48:27.000Z | [
"pytorch",
"bert",
"transformers"
]
| null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.absa.exclusive.seed_66 | 11 | null | transformers | 11,296 | Entry not found |
CEBaB/bert-base-uncased.CEBaB.absa.exclusive.seed_77 | 333e40b6570ed7666306339f8914d1bfac4f5ae5 | 2022-05-17T18:53:17.000Z | [
"pytorch",
"bert",
"transformers"
]
| null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.absa.exclusive.seed_77 | 11 | null | transformers | 11,297 | Entry not found |
imohammad12/GRS-Constrained-Paraphrasing-Bart | 28ca61cc4b875b83b0c0fbaa6d84e00b7b7d5d75 | 2022-05-26T10:49:26.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"transformers",
"grs",
"autotrain_compatible"
]
| text2text-generation | false | imohammad12 | null | imohammad12/GRS-Constrained-Paraphrasing-Bart | 11 | null | transformers | 11,298 | ---
language: en
tags: grs
---
## Citation
Please star the [GRS GitHub repo](https://github.com/imohammad12/GRS) and cite the paper if you found our model useful:
```
@inproceedings{dehghan-etal-2022-grs,
title = "{GRS}: Combining Generation and Revision in Unsupervised Sentence Simplification",
author = "Dehghan, Mohammad and
Kumar, Dhruv and
Golab, Lukasz",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2022",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-acl.77",
pages = "949--960",
abstract = "We propose GRS: an unsupervised approach to sentence simplification that combines text generation and text revision. We start with an iterative framework in which an input sentence is revised using explicit edit operations, and add paraphrasing as a new edit operation. This allows us to combine the advantages of generative and revision-based approaches: paraphrasing captures complex edit operations, and the use of explicit edit operations in an iterative manner provides controllability and interpretability. We demonstrate these advantages of GRS compared to existing methods on the Newsela and ASSET datasets.",
}
``` |
elvaklose/finetuning-sentiment-model-3000-samples | b0677146365d9a2022231bf7bcd8bb68e5f768b7 | 2022-05-20T05:51:48.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | elvaklose | null | elvaklose/finetuning-sentiment-model-3000-samples | 11 | null | transformers | 11,299 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8766666666666667
- name: F1
type: f1
value: 0.8786885245901639
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2896
- Accuracy: 0.8767
- F1: 0.8787
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|