modelId (string, 4–112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (list) | pipeline_tag (string, 29 classes) | private (bool) | author (string, 2–38 chars, nullable) | config (null) | id (string, 4–112 chars) | downloads (float64, 0–36.8M, nullable) | likes (float64, 0–712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0–38.5k) | readme (string, 0–186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
benjamin/roberta-base-wechsel-ukrainian | 6efef251e8955956c3086c0ef58001065c9b1800 | 2022-07-13T23:43:28.000Z | [
"pytorch",
"roberta",
"fill-mask",
"uk",
"transformers",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | benjamin | null | benjamin/roberta-base-wechsel-ukrainian | 10 | null | transformers | 11,800 | ---
license: mit
language: uk
---
# roberta-base-wechsel-ukrainian
[`roberta-base`](https://huggingface.co/roberta-base) transferred to Ukrainian using the method from the NAACL2022 paper [WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models](https://aclanthology.org/2022.naacl-main.293/).
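A minimal usage sketch with the standard `fill-mask` pipeline (the Ukrainian example sentence is only an illustration, not taken from the evaluation data):
```python
from transformers import pipeline

# RoBERTa-style tokenizers use "<mask>" as the mask token
unmasker = pipeline("fill-mask", model="benjamin/roberta-base-wechsel-ukrainian")
print(unmasker("Я живу в <mask>."))
```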
# Evaluation
Evaluation was done on [lang-uk's ner-uk project](https://github.com/lang-uk/ner-uk), the Ukrainian portion of [WikiANN](https://huggingface.co/datasets/wikiann) and the [Ukrainian IU corpus from the Universal Dependencies project](https://github.com/UniversalDependencies/UD_Ukrainian-IU). Evaluation results are the mean of 5 runs with different seeds.
__Validation Results__
| | lang-uk NER (Micro F1) | WikiANN (Micro F1) | UD Ukrainian IU POS (Accuracy) |
|:-------------------------------------------------|:-------------------------|:-------------|:-------------------------|
| roberta-base-wechsel-ukrainian | 88.06 (0.50) | 92.96 (0.08) | 98.70 (0.05) |
| roberta-large-wechsel-ukrainian | __89.27 (0.53)__ | __93.22 (0.15)__ | __98.86 (0.03)__ |
|
| roberta-base-scratch-ukrainian* | 85.49 (0.88) | 91.91 (0.08) | 98.49 (0.04) |
| roberta-large-scratch-ukrainian* | 86.54 (0.70) | 92.39 (0.16) | 98.65 (0.09) |
|
| dbmdz/electra-base-ukrainian-cased-discriminator | 87.49 (0.52) | 93.20 (0.16) | 98.60 (0.03) |
| xlm-roberta-base | 86.68 (0.44) | 92.41 (0.13) | 98.53 (0.02) |
| xlm-roberta-large | 86.64 (1.61) | 93.01 (0.13) | 98.71 (0.04) |
__Test Results__
| | lang-uk NER (Micro F1) | WikiANN (Micro F1) | UD Ukrainian IU POS (Accuracy) |
|:-------------------------------------------------|:-------------------------|:-------------|:-------------------------|
| roberta-base-wechsel-ukrainian | 90.81 (1.51) | 92.98 (0.12) | 98.57 (0.03) |
| roberta-large-wechsel-ukrainian | __91.24 (1.16)__ | __93.22 (0.17)__ | __98.74 (0.06)__ |
|
| roberta-base-scratch-ukrainian* | 89.57 (1.01) | 92.05 (0.09) | 98.31 (0.08) |
| roberta-large-scratch-ukrainian* | 89.96 (0.89) | 92.49 (0.15) | 98.52 (0.04) |
|
| dbmdz/electra-base-ukrainian-cased-discriminator | 90.43 (1.29) | 92.99 (0.11) | 98.59 (0.06) |
| xlm-roberta-base | 90.86 (0.81) | 92.27 (0.09) | 98.45 (0.07) |
| xlm-roberta-large | 90.16 (2.98) | 92.92 (0.19) | 98.71 (0.04) |
\*trained using the same exact training setup as the wechsel-\* models, but without parameter transfer from WECHSEL.
# License
MIT |
hackathon-pln-es/unam_tesis_BETO_finnetuning | b2c791544cca370b45c34fe6600a506116ac1be6 | 2022-04-13T02:16:03.000Z | [
"pytorch",
"dataset:unam_tesis",
"transformers",
"text-classification",
"license:apache-2.0"
]
| text-classification | false | hackathon-pln-es | null | hackathon-pln-es/unam_tesis_BETO_finnetuning | 10 | 5 | transformers | 11,801 | ---
annotations_creators:
- inoid
- MajorIsaiah
- Ximyer
- clavel
tags:
- "transformers"
- "text-classification"
languages: "es"
license: "apache-2.0"
datasets: "unam_tesis"
metrics: "accuracy"
widget:
- text: "Introducción al análisis de riesgos competitivos bajo el enfoque de la función de incidencia acumulada (FIA) y su aplicación con R"
- text: "Asociación del polimorfismo rs1256031 del receptor beta de estrógenos en pacientes con diabetes tipo 2"
---
# Unam_tesis_beto_finnetuning: UNAM's thesis classification with BETO
This model was created by fine-tuning the pre-trained Spanish model
[BETO](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) with the PyTorch framework,
using a set of theses from the National Autonomous University of Mexico [(UNAM)](https://tesiunam.dgb.unam.mx/F?func=find-b-0&local_base=TES01).
The model classifies a text into one of five possible degree programs at UNAM
(Psicología, Derecho, Química Farmacéutico Biológica, Actuaría, Economía).
## Training Dataset
1000 documents (Thesis introduction, Author's first name, Author's last name, Thesis title, Year, Career)
| Careers | Size |
|--------------|----------------------|
| Actuaría | 200 |
| Derecho| 200 |
| Economía| 200 |
| Psicología| 200 |
| Química Farmacéutico Biológica| 200 |
## Example of use
For further details on how to use unam_tesis_BETO_finnetuning, you can visit the Hugging Face Transformers library, starting with the Quickstart section. The UNAM tesis model can be accessed simply as 'hackathon-pln-es/unam_tesis_BETO_finnetuning' by using the Transformers library. An example of how to download and use the model can be found below.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, TextClassificationPipeline

tokenizer = AutoTokenizer.from_pretrained('hiiamsid/BETO_es_binary_classification', use_fast=False)
model = AutoModelForSequenceClassification.from_pretrained(
'hackathon-pln-es/unam_tesis_BETO_finnetuning', num_labels=5, output_attentions=False,
output_hidden_states=False)
pipe = TextClassificationPipeline(model=model, tokenizer=tokenizer, return_all_scores=True)
classificationResult = pipe("Análisis de las condiciones del aprendizaje desde casa en los alumnos de preescolar y primaria del municipio de Nicolás Romero")
```
## Citation
To cite this resource in a publication, please use the following:

[UNAM's theses classification with BETO fine-tuning](https://huggingface.co/hackathon-pln-es/unam_tesis_BETO_finnetuning)

or the BibTeX entry:
```
@inproceedings{SpanishNLPHackaton2022,
title={UNAM's Theses with BETO fine-tuning classify},
author={López López, Isaac Isaías; Clavel Quintero, Yisel; López Ramos, Dionis & López López, Ximena Yeraldin},
booktitle={Somos NLP Hackaton 2022},
year={2022}
}
```
## Team members
- Isaac Isaías López López ([MajorIsaiah](https://huggingface.co/MajorIsaiah))
- Dionis López Ramos ([inoid](https://huggingface.co/inoid))
- Yisel Clavel Quintero ([clavel](https://huggingface.co/clavel))
- Ximena Yeraldin López López ([Ximyer](https://huggingface.co/Ximyer)) |
LeBenchmark/wav2vec-FR-1K-Male-base | bc32576a8dd2bba1d276c9313836ba5b01865bda | 2022-05-11T09:23:04.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"fr",
"arxiv:2204.01397",
"transformers",
"license:apache-2.0"
]
| null | false | LeBenchmark | null | LeBenchmark/wav2vec-FR-1K-Male-base | 10 | null | transformers | 11,802 | ---
language: "fr"
thumbnail:
tags:
- wav2vec2
license: "apache-2.0"
---
# LeBenchmark: wav2vec2 base model trained on 1K hours of French *male-only* speech
LeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcast speech.
For more information about our gender study for SSL models, please refer to our paper: [A Study of Gender Impact in Self-supervised Models for Speech-to-Text Systems](https://arxiv.org/abs/2204.01397)
## Model and data descriptions
We release four gender-specific models trained on 1K hours of speech.
- [wav2vec2-FR-1K-Male-large](https://huggingface.co/LeBenchmark/wav2vec-FR-1K-Male-large/)
- [wav2vec2-FR-1k-Male-base](https://huggingface.co/LeBenchmark/wav2vec-FR-1K-Male-base/)
- [wav2vec2-FR-1K-Female-large](https://huggingface.co/LeBenchmark/wav2vec-FR-1K-Female-large/)
- [wav2vec2-FR-1K-Female-base](https://huggingface.co/LeBenchmark/wav2vec-FR-1K-Female-base/)
## Intended uses & limitations
Pretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.
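A minimal feature-extraction sketch, assuming the checkpoint loads with the standard `transformers` wav2vec2 classes and that a default feature extractor is acceptable (the random waveform is only a placeholder for real 16 kHz French speech):
```python
import numpy as np
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

model = Wav2Vec2Model.from_pretrained("LeBenchmark/wav2vec-FR-1K-Male-base")
# Default wav2vec2 settings (assumption): mono 16 kHz input with zero-mean/unit-variance normalization
feature_extractor = Wav2Vec2FeatureExtractor(sampling_rate=16000)

waveform = np.random.randn(16000).astype(np.float32)  # placeholder: 1 second of 16 kHz audio
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, hidden_size)
print(hidden_states.shape)
```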
## Referencing our gender-specific models
```
@article{boito2022study,
title={A Study of Gender Impact in Self-supervised Models for Speech-to-Text Systems},
author={Marcely Zanon Boito and Laurent Besacier and Natalia Tomashenko and Yannick Est{\`e}ve},
journal={arXiv preprint arXiv:2204.01397},
year={2022}
}
```
## Referencing LeBenchmark
```
@inproceedings{evain2021task,
title={Task agnostic and task specific self-supervised learning from speech with \textit{LeBenchmark}},
author={Evain, Sol{\`e}ne and Nguyen, Ha and Le, Hang and Boito, Marcely Zanon and Mdhaffar, Salima and Alisamir, Sina and Tong, Ziyi and Tomashenko, Natalia and Dinarelli, Marco and Parcollet, Titouan and others},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021}
}
``` |
bhavitvyamalik/fake-news_xtremedistil-l6-h256-uncased | 130d1edcdbd53cddbad02ee281c52b4bed71bedd | 2022-04-18T20:23:38.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:mit"
]
| text-classification | false | bhavitvyamalik | null | bhavitvyamalik/fake-news_xtremedistil-l6-h256-uncased | 10 | null | transformers | 11,803 | ---
license: mit
---
### Dataset used
[Fake and real news dataset](https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset)
### Labels
Fake news: 1 <br/>
Real news: 0
### Usage
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, AutoConfig
import torch
config = AutoConfig.from_pretrained("bhavitvyamalik/fake-news_xtremedistil-l6-h256-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bhavitvyamalik/fake-news_xtremedistil-l6-h256-uncased", config=config)
tokenizer = AutoTokenizer.from_pretrained("microsoft/xtremedistil-l6-h256-uncased", use_fast=True)
text = "According to reports by Fox News, Biden is the President of the USA"
encode = tokenizer(text, max_length=512, truncation=True, padding="max_length", return_tensors="pt")
output = model(**encode)
print(torch.argmax(output["logits"]))
```
### Performance on test data
```json
{
  "test/accuracy": 0.9977836608886719,
  "test/aucroc": 0.9999998807907104,
  "test/f1": 0.9976308941841125,
  "test/loss": 0.00828308891505003
}
```
### Run can be tracked here
[Wandb project for Fake news classifier](https://wandb.ai/bhavitvya/Fake%20news%20classifier?workspace=user-bhavitvya) |
MohitSingh/wikineural-multilingual-ner | 172339d78f5916850e7ed457f2a66b52d66af105 | 2022-04-04T15:59:57.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | MohitSingh | null | MohitSingh/wikineural-multilingual-ner | 10 | null | transformers | 11,804 | Entry not found |
Vinspatel4/wikineural-multilingual-ner | a948a1030575d29ae650fd6722f68efebe439df0 | 2022-04-10T07:49:33.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | Vinspatel4 | null | Vinspatel4/wikineural-multilingual-ner | 10 | null | transformers | 11,805 | Entry not found |
HenryHXR/scibert_scivocab_uncased_epoch20-finetuned-ner | 492dd152f5bbed239ec44538d0d408219d90dd00 | 2022-04-05T15:51:56.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| token-classification | false | HenryHXR | null | HenryHXR/scibert_scivocab_uncased_epoch20-finetuned-ner | 10 | null | transformers | 11,806 | ---
tags:
- generated_from_trainer
model-index:
- name: scibert_scivocab_uncased_epoch20-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# scibert_scivocab_uncased_epoch20-finetuned-ner
This model is a fine-tuned version of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
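In the meantime, a minimal inference sketch with the standard `token-classification` pipeline (the example sentence is an illustration only; the entity label set depends on the unnamed fine-tuning dataset):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="HenryHXR/scibert_scivocab_uncased_epoch20-finetuned-ner",
    aggregation_strategy="simple",  # merge word-piece tokens into entity spans
)
print(ner("BERT was pre-trained on English Wikipedia and BookCorpus."))
```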
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
nielsr/segformer-test-sidewalk-v2 | 4f04f05e8c70ab3705dcd3318c4512a519a8c66f | 2022-04-06T13:11:06.000Z | [
"pytorch",
"segformer",
"dataset:segments/sidewalk-semantic",
"transformers",
"vision",
"image-segmentation",
"license:apache-2.0"
]
| image-segmentation | false | nielsr | null | nielsr/segformer-test-sidewalk-v2 | 10 | null | transformers | 11,807 | ---
license: apache-2.0
tags:
- vision
- image-segmentation
datasets:
- segments/sidewalk-semantic
widget:
- src: https://segmentsai-prod.s3.eu-west-2.amazonaws.com/assets/admin-tobias/439f6843-80c5-47ce-9b17-0b2a1d54dbeb.jpg
example_title: Brugge
--- |
ashwathgojo234/wikineural-multilingual-ner | 4d861c0bbc27f7bd68ab793cb1ade570edb14fb5 | 2022-04-11T12:39:28.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ashwathgojo234 | null | ashwathgojo234/wikineural-multilingual-ner | 10 | null | transformers | 11,808 | Entry not found |
ChrisZeng/bertweet-base-cased-covid19-hateval | 9c47b1e32741597805fb39de82633228de234349 | 2022-04-06T23:04:39.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | ChrisZeng | null | ChrisZeng/bertweet-base-cased-covid19-hateval | 10 | null | transformers | 11,809 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bertweet-base-cased-covid19-hateval
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertweet-base-cased-covid19-hateval
This model is a fine-tuned version of [vinai/bertweet-covid19-base-cased](https://huggingface.co/vinai/bertweet-covid19-base-cased) on the HatEval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4817
- Accuracy: 0.773
- F1: 0.7722
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
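For reference, a sketch of how these values map onto the `transformers` `TrainingArguments` API (the `output_dir` is a placeholder; the actual training script is not part of this card). Note that the total train batch size is `train_batch_size * gradient_accumulation_steps = 32 * 4 = 128`:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bertweet-base-cased-covid19-hateval",  # placeholder
    learning_rate=1e-6,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    # Adam betas (0.9, 0.999) and epsilon 1e-08 are the library defaults
)
```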
### Training results
| Training Loss | Epoch | Step | Accuracy | F1 | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:------:|:---------------:|
| 0.6925 | 0.99 | 70 | 0.573 | 0.3643 | 0.6827 |
| 0.6823 | 1.99 | 140 | 0.573 | 0.3643 | 0.6736 |
| 0.6713 | 2.99 | 210 | 0.587 | 0.3993 | 0.6568 |
| 0.6468 | 3.99 | 280 | 0.7 | 0.6708 | 0.6210 |
| 0.6047 | 4.99 | 350 | 0.732 | 0.7286 | 0.5785 |
| 0.5648 | 5.99 | 420 | 0.733 | 0.7319 | 0.5537 |
| 0.536 | 6.99 | 490 | 0.739 | 0.7381 | 0.5406 |
| 0.5175 | 7.99 | 560 | 0.744 | 0.7431 | 0.5308 |
| 0.5018 | 8.99 | 630 | 0.751 | 0.7504 | 0.5235 |
| 0.4874 | 9.99 | 700 | 0.749 | 0.7479 | 0.5145 |
| 0.4749 | 10.99 | 770 | 0.754 | 0.7533 | 0.5104 |
| 0.4666 | 11.99 | 840 | 0.761 | 0.7605 | 0.5052 |
| 0.456 | 12.99 | 910 | 0.761 | 0.7604 | 0.5017 |
| 0.4489 | 13.99 | 980 | 0.764 | 0.7635 | 0.4986 |
| 0.4375 | 14.99 | 1050 | 0.764 | 0.7625 | 0.4932 |
| 0.4319 | 15.99 | 1120 | 0.762 | 0.7608 | 0.4917 |
| 0.427 | 16.99 | 1190 | 0.77 | 0.7693 | 0.4918 |
| 0.4226 | 17.99 | 1260 | 0.772 | 0.7711 | 0.4889 |
| 0.4167 | 18.99 | 1330 | 0.769 | 0.7681 | 0.4874 |
| 0.4127 | 19.99 | 1400 | 0.768 | 0.7673 | 0.4868 |
| 0.4095 | 20.99 | 1470 | 0.774 | 0.7731 | 0.4836 |
| 0.4066 | 21.99 | 1540 | 0.77 | 0.7690 | 0.4829 |
| 0.405 | 22.99 | 1610 | 0.773 | 0.7721 | 0.4822 |
| 0.3993 | 23.99 | 1680 | 0.77 | 0.7692 | 0.4827 |
| 0.3977 | 24.99 | 1750 | 0.772 | 0.7712 | 0.4831 |
| 0.398 | 25.99 | 1820 | 0.774 | 0.7733 | 0.4830 |
| 0.3969 | 26.99 | 1890 | 0.771 | 0.7701 | 0.4815 |
| 0.3945 | 27.99 | 1960 | 0.772 | 0.7712 | 0.4818 |
| 0.3929 | 28.99 | 2030 | 0.773 | 0.7722 | 0.4818 |
| 0.3887 | 29.99 | 2100 | 0.773 | 0.7722 | 0.4817 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
btjiong/robbert-twitter-sentiment-tokenized | 4d63b4a44f9920bba176619f2f2df784d153315c | 2022-04-07T17:54:02.000Z | [
"pytorch",
"roberta",
"text-classification",
"dataset:dutch_social",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | btjiong | null | btjiong/robbert-twitter-sentiment-tokenized | 10 | null | transformers | 11,810 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- dutch_social
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: robbert-twitter-sentiment-tokenized
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: dutch_social
type: dutch_social
args: dutch_social
metrics:
- name: Accuracy
type: accuracy
value: 0.814
- name: F1
type: f1
value: 0.8132800039281481
- name: Precision
type: precision
value: 0.8131073640029836
- name: Recall
type: recall
value: 0.814
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robbert-twitter-sentiment-tokenized
This model is a fine-tuned version of [pdelobelle/robbert-v2-dutch-base](https://huggingface.co/pdelobelle/robbert-v2-dutch-base) on the dutch_social dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5473
- Accuracy: 0.814
- F1: 0.8133
- Precision: 0.8131
- Recall: 0.814
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.6895 | 1.0 | 282 | 0.6307 | 0.7433 | 0.7442 | 0.7500 | 0.7433 |
| 0.4948 | 2.0 | 564 | 0.5189 | 0.8053 | 0.8062 | 0.8081 | 0.8053 |
| 0.2642 | 3.0 | 846 | 0.5473 | 0.814 | 0.8133 | 0.8131 | 0.814 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cpu
- Datasets 2.0.0
- Tokenizers 0.11.6
|
yj2773/hinglish11k-sentiment-analysis | f01fc28259b14bf235957d93385a6cbc1bdde866 | 2022-06-12T12:47:00.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"ur",
"hi",
"transformers",
"license:afl-3.0"
]
| text-classification | false | yj2773 | null | yj2773/hinglish11k-sentiment-analysis | 10 | null | transformers | 11,811 | ---
license: afl-3.0
language:
- en
- ur
- hi
widget:
- text: "Tum bohot badiya ho."
---
## Hinglish-Bert-Class fine-tuned on Hinglish11K dataset.
# MCC= 0.69
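A minimal inference sketch with the `text-classification` pipeline (the example sentence is the one used in the widget above):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="yj2773/hinglish11k-sentiment-analysis")
print(classifier("Tum bohot badiya ho."))
```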
### Citation info
```bibtex
@misc{hinglish11k-sentiment-analysis,
  author    = {Mohammad Yusuf Jamal Aziz Azmi and Ayush Aggarwal},
  title     = {hinglish11k-sentiment-analysis},
  year      = {2022},
  timestamp = {Sun, 08 May 2022},
}
``` |
jackmleitch/distilbert-base-uncased-finetuned-emotion | a47b5a079c5e7d58f77cb47e16ed2fdffa7a34a3 | 2022-04-07T17:53:26.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | jackmleitch | null | jackmleitch/distilbert-base-uncased-finetuned-emotion | 10 | null | transformers | 11,812 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9285
- name: F1
type: f1
value: 0.9284954323264266
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2120
- Accuracy: 0.9285
- F1: 0.9285
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8093 | 1.0 | 250 | 0.3064 | 0.908 | 0.9049 |
| 0.2429 | 2.0 | 500 | 0.2120 | 0.9285 | 0.9285 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
palakagl/bert_TextClassification | 29cad5ea6d17a6289a80017e08c32ceb69ce700c | 2022-04-07T17:18:20.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:palakagl/autotrain-data-PersonalAssitant",
"transformers",
"autotrain",
"co2_eq_emissions"
]
| text-classification | false | palakagl | null | palakagl/bert_TextClassification | 10 | null | transformers | 11,813 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- palakagl/autotrain-data-PersonalAssitant
co2_eq_emissions: 7.025108874009706
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 717221787
- CO2 Emissions (in grams): 7.025108874009706
## Validation Metrics
- Loss: 0.35467109084129333
- Accuracy: 0.9186046511627907
- Macro F1: 0.9202890631142154
- Micro F1: 0.9186046511627907
- Weighted F1: 0.9185859051606837
- Macro Precision: 0.921802482563032
- Micro Precision: 0.9186046511627907
- Weighted Precision: 0.9210238644296779
- Macro Recall: 0.9218155764486292
- Micro Recall: 0.9186046511627907
- Weighted Recall: 0.9186046511627907
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/palakagl/autotrain-PersonalAssitant-717221787
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("palakagl/autotrain-PersonalAssitant-717221787", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("palakagl/autotrain-PersonalAssitant-717221787", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
ahmednasser/DistilBert-FakeNews | e3015860de6007ef933eab86151a9deee6d2c85c | 2022-04-20T16:29:21.000Z | [
"pytorch",
"distilbert",
"en",
"dataset:Fake News https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset",
"arxiv:1910.01108",
"transformers",
"text-classification",
"fake-news"
]
| text-classification | false | ahmednasser | null | ahmednasser/DistilBert-FakeNews | 10 | null | transformers | 11,814 | ---
language:
- en
tags:
- text-classification
- fake-news
- pytorch
datasets:
- Fake News https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset
metrics:
- Accuracy, AUC
---
## Model description:
[DistilBERT](https://arxiv.org/abs/1910.01108) is created with knowledge distillation during the pre-training phase, which reduces the size of a BERT model by 40% while retaining 97% of its language understanding capabilities. It is smaller and faster than BERT and other BERT-based models.
[Distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) was fine-tuned on the fake news dataset with the hyperparameters below:
```
learning rate 5e-5,
batch size 32,
num_train_epochs=2,
```
Full code available @ [DistilBert-FakeNews](https://github.com/anasserhussien/DistilBert-FakeNews)
Dataset available @ [Fake News dataset](https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset)
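A minimal inference sketch (the example headline is an illustration only, and the tokenizer is assumed to be bundled with the checkpoint; see the model's `config.json` or the linked repository for the exact label mapping):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "ahmednasser/DistilBert-FakeNews"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

text = "Breaking: scientists confirm the moon is made of cheese."  # illustration only
inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # class probabilities; label order is defined in config.json
```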
|
tbosse/bert-base-german-cased-finetuned-subj_v5_7Epoch | f7da0d169e0eff649d3d9b620fa5d9f89e04ae6f | 2022-04-07T20:49:22.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| token-classification | false | tbosse | null | tbosse/bert-base-german-cased-finetuned-subj_v5_7Epoch | 10 | null | transformers | 11,815 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-german-cased-finetuned-subj_v5_7Epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-german-cased-finetuned-subj_v5_7Epoch
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3036
- Precision: 0.7983
- Recall: 0.7781
- F1: 0.7881
- Accuracy: 0.9073
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 32 | 0.3438 | 0.6970 | 0.7107 | 0.7038 | 0.8626 |
| No log | 2.0 | 64 | 0.2747 | 0.7688 | 0.7472 | 0.7578 | 0.8902 |
| No log | 3.0 | 96 | 0.2683 | 0.7827 | 0.7893 | 0.7860 | 0.8981 |
| No log | 4.0 | 128 | 0.2768 | 0.8024 | 0.7528 | 0.7768 | 0.9027 |
| No log | 5.0 | 160 | 0.2881 | 0.8102 | 0.7556 | 0.7820 | 0.9060 |
| No log | 6.0 | 192 | 0.3006 | 0.7959 | 0.7669 | 0.7811 | 0.9040 |
| No log | 7.0 | 224 | 0.3036 | 0.7983 | 0.7781 | 0.7881 | 0.9073 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Shwetabh/wikineural-multilingual-ner | 111cc8a5408716a9eccbe1d67520b10ba4dc4961 | 2022-04-10T05:37:45.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | Shwetabh | null | Shwetabh/wikineural-multilingual-ner | 10 | null | transformers | 11,816 | Entry not found |
dpazmino/finetuning-sentiment-model_duke_final | 4f865642f950524519491c69d970a87d4805b512 | 2022-04-10T18:34:53.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | dpazmino | null | dpazmino/finetuning-sentiment-model_duke_final | 10 | null | transformers | 11,817 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: finetuning-sentiment-model_duke_final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model_duke_final
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4776
- F1: 0.8708
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
ChrisZeng/twitter-roberta-base-efl-hateval | ec979b7388bf1f31ed195a42a4fdc21a7ce37e11 | 2022-04-11T19:25:01.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | ChrisZeng | null | ChrisZeng/twitter-roberta-base-efl-hateval | 10 | null | transformers | 11,818 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: twitter-roberta-base-efl-hateval
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter-roberta-base-efl-hateval
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-2021-124m](https://huggingface.co/cardiffnlp/twitter-roberta-base-2021-124m) on the HatEval dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.7913
- F1: 0.7899
- Loss: 0.3683
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Accuracy | F1 | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:------:|:---------------:|
| 0.5392 | 1.0 | 211 | 0.7 | 0.6999 | 0.4048 |
| 0.3725 | 2.0 | 422 | 0.759 | 0.7584 | 0.3489 |
| 0.3158 | 3.0 | 633 | 0.7613 | 0.7570 | 0.3287 |
| 0.289 | 4.0 | 844 | 0.769 | 0.7684 | 0.3307 |
| 0.2716 | 5.0 | 1055 | 0.7767 | 0.7750 | 0.3241 |
| 0.2575 | 6.0 | 1266 | 0.7787 | 0.7782 | 0.3272 |
| 0.2441 | 7.0 | 1477 | 0.7783 | 0.7776 | 0.3258 |
| 0.2363 | 8.0 | 1688 | 0.7777 | 0.7773 | 0.3316 |
| 0.2262 | 9.0 | 1899 | 0.7843 | 0.7815 | 0.3150 |
| 0.2191 | 10.0 | 2110 | 0.7813 | 0.7802 | 0.3241 |
| 0.2112 | 11.0 | 2321 | 0.7867 | 0.7860 | 0.3276 |
| 0.2047 | 12.0 | 2532 | 0.7897 | 0.7886 | 0.3266 |
| 0.1973 | 13.0 | 2743 | 0.7893 | 0.7884 | 0.3299 |
| 0.1897 | 14.0 | 2954 | 0.792 | 0.7907 | 0.3301 |
| 0.1862 | 15.0 | 3165 | 0.794 | 0.7925 | 0.3283 |
| 0.1802 | 16.0 | 3376 | 0.7907 | 0.7903 | 0.3465 |
| 0.1764 | 17.0 | 3587 | 0.7937 | 0.7922 | 0.3393 |
| 0.1693 | 18.0 | 3798 | 0.7903 | 0.7893 | 0.3494 |
| 0.1666 | 19.0 | 4009 | 0.7943 | 0.7930 | 0.3486 |
| 0.1631 | 20.0 | 4220 | 0.7927 | 0.7917 | 0.3516 |
| 0.1609 | 21.0 | 4431 | 0.7907 | 0.7893 | 0.3537 |
| 0.1581 | 22.0 | 4642 | 0.7913 | 0.7902 | 0.3586 |
| 0.1548 | 23.0 | 4853 | 0.789 | 0.7884 | 0.3698 |
| 0.1535 | 24.0 | 5064 | 0.7893 | 0.7880 | 0.3622 |
| 0.1522 | 25.0 | 5275 | 0.7923 | 0.7909 | 0.3625 |
| 0.15 | 26.0 | 5486 | 0.7913 | 0.7899 | 0.3632 |
| 0.1479 | 27.0 | 5697 | 0.792 | 0.7909 | 0.3677 |
| 0.1441 | 28.0 | 5908 | 0.792 | 0.7909 | 0.3715 |
| 0.145 | 29.0 | 6119 | 0.792 | 0.7906 | 0.3681 |
| 0.1432 | 30.0 | 6330 | 0.7913 | 0.7899 | 0.3683 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Giyaseddin/distilroberta-base-finetuned-short-answer-assessment | 9a0f2b9ca32070da8e5e63e0e7b6f33f3db5038b | 2022-04-11T15:21:42.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:Short Question Answer Assessment Dataset",
"arxiv:1806.02847",
"transformers",
"license:apache-2.0"
]
| text-classification | false | Giyaseddin | null | Giyaseddin/distilroberta-base-finetuned-short-answer-assessment | 10 | 1 | transformers | 11,819 | ---
license: apache-2.0
language: en
library: transformers
other: distilroberta
datasets:
- Short Question Answer Assessment Dataset
---
# DistilRoBERTa base model for Short Question Answer Assessment
## Model description
The pre-trained model is a distilled version of the [RoBERTa-base model](https://huggingface.co/roberta-base). It follows the same training procedure as [DistilBERT](https://huggingface.co/distilbert-base-uncased).
The code for the distillation process can be found [here](https://github.com/huggingface/transformers/tree/master/examples/distillation).
This model is case-sensitive: it makes a difference between english and English.
The model has 6 layers, 768 dimension and 12 heads, totalizing 82M parameters (compared to 125M parameters for RoBERTa-base).
On average DistilRoBERTa is twice as fast as Roberta-base.
We encourage to check [RoBERTa-base model](https://huggingface.co/roberta-base) to know more about usage, limitations and potential biases.
This is a classification model that solves Short Question Answer Assessment task, finetuned [pretrained DistilRoBERTa model](https://huggingface.co/distilroberta-base) on
[Question Answer Assessment dataset](#)
## Intended uses & limitations
This model can only be used for questions and answers similar to the ones in the dataset of [Banjade et al.](https://aclanthology.org/W16-0520.pdf).
### How to use
You can use this model directly with a text-classification pipeline:
```python
>>> from transformers import pipeline
>>> classifier = pipeline("text-classification", model="Giyaseddin/distilroberta-base-finetuned-short-answer-assessment", return_all_scores=True)
>>> context = "To rescue a child who has fallen down a well, rescue workers fasten him to a rope, the other end of which is then reeled in by a machine. The rope pulls the child straight upward at steady speed."
>>> question = "How does the amount of tension in the rope compare to the downward force of gravity acting on the child?"
>>> ref_answer = "Since the child is being raised straight upward at a constant speed, the net force on the child is zero and all the forces balance. That means that the tension in the rope balances the downward force of gravity."
>>> student_answer = "The tension force is higher than the force of gravity."
>>>
>>> body = " [SEP] ".join([context, question, ref_answer, student_answer])
>>> raw_results = classifier([body])
>>> raw_results
[[{'label': 'LABEL_0', 'score': 0.0004029414849355817},
{'label': 'LABEL_1', 'score': 0.0005476847873069346},
{'label': 'LABEL_2', 'score': 0.998059093952179},
{'label': 'LABEL_3', 'score': 0.0009902542224153876}]]
>>> _LABELS_ID2NAME = {0: "correct", 1: "correct_but_incomplete", 2: "contradictory", 3: "incorrect"}
>>> results = []
>>> for result in raw_results:
        for score in result:
            results.append([
                {_LABELS_ID2NAME[int(score["label"][-1:])]: "%.2f" % score["score"]}
            ])
>>> results
[[{'correct': '0.00'}],
[{'correct_but_incomplete': '0.00'}],
[{'contradictory': '1.00'}],
[{'incorrect': '0.00'}]]
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. It also inherits some of
[the bias of its teacher model](https://huggingface.co/bert-base-uncased#limitations-and-bias).
This bias will also affect all fine-tuned versions of this model.
Another limitation of this model is input length: longer sequences can lead to wrong predictions because, during the pre-processing phase, the concatenated input may be truncated and the important student answer can be pruned.
## Pre-training data
## Training data
The RoBERTa model was pretrained on the reunion of five datasets:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books;
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers) ;
- [CC-News](https://commoncrawl.org/2016/10/news-dataset-available/), a dataset containing 63 millions English news
articles crawled between September 2016 and February 2019.
- [OpenWebText](https://github.com/jcpeterson/openwebtext), an opensource recreation of the WebText dataset used to
train GPT-2,
- [Stories](https://arxiv.org/abs/1806.02847) a dataset containing a subset of CommonCrawl data filtered to match the
story-like style of Winograd schemas.
Together theses datasets weight 160GB of text.
## Fine-tuning data
The annotated dataset consists of 900 students’ short constructed answers and their correctness in the given context. Four qualitative levels of correctness are defined: correct, correct-but-incomplete, contradictory, and incorrect.
## Training procedure
### Preprocessing
In the preprocessing phase, the following parts are concatenated: _question context_, _question_, _reference_answer_, and _student_answer_ using the separator `[SEP]`.
This makes the full text as:
```
[CLS] Context Sentence [SEP] Question Sentence [SEP] Reference Answer Sentence [SEP] Student Answer Sentence [CLS]
```
The data are split according to the following ratio:
- Training set 80%.
- Test set 20%.
Labels are mapped as: `{0: "correct", 1: "correct_but_incomplete", 2: "contradictory", 3: "incorrect"}`
### Fine-tuning
The model was fine-tuned on a GeForce GTX 960M for 20 minutes. The parameters are:
| Parameter | Value |
|:-------------------:|:-----:|
| Learning rate | 5e-5 |
| Weight decay | 0.01 |
| Training batch size | 8 |
| Epochs | 4 |
Here is the scores during the training:
| Epoch | Training Loss | Validation Loss | Accuracy | F1 | Precision | Recall |
|:----------:|:-------------:|:-----------------:|:----------:|:---------:|:----------:|:--------:|
| 1 | No log | 0.773334 | 0.713706 | 0.711398 | 0.746059 | 0.713706 |
| 2 | 1.069200 | 0.404932 | 0.885279 | 0.884592 | 0.886699 | 0.885279 |
| 3 | 0.473700 | 0.247099 | 0.931980 | 0.931675 | 0.933794 | 0.931980 |
| 4 | 0.228000 | 0.205577 | 0.954315 | 0.954210 | 0.955258 | 0.954315 |
## Evaluation results
When fine-tuned on downstream task of Question Answer Assessment 4 class classification, this model achieved the following results:
(scores are rounded to 2 floating points)
| | precision | recall | f1-score | support |
|:------------------------:|:----------:|:-------:|:--------:|:-------:|
| _correct_ | 0.933 | 0.992 | 0.962 | 366 |
| _correct_but_incomplete_ | 0.976 | 0.934 | 0.954 | 257 |
| _contradictory_ | 0.938 | 0.929 | 0.933 | 113 |
| _incorrect_ | 0.975 | 0.932 | 0.953 | 249 |
| accuracy | - | - | 0.954 | 985 |
| macro avg | 0.955 | 0.947 | 0.950 | 985 |
| weighted avg | 0.955 | 0.954 | 0.954 | 985 |
Confusion matrix:
| Actual \ Predicted | _correct_ | _correct_but_incomplete_ | _contradictory_ | _incorrect_ |
|:------------------------:|:---------:|:------------------------:|:---------------:|:-----------:|
| _correct_ | 363 | 3 | 0 | 0 |
| _correct_but_incomplete_ | 14 | 240 | 0 | 3 |
| _contradictory_ | 5 | 0 | 105 | 3 |
| _incorrect_ | 7 | 3 | 7 | 232 |
The AUC scores are: micro = **0.9695** and macro = **0.9650**
|
Zainab18/wikineural-multilingual-ner | 636a4486b2c5ee27da393e99675f02b9537b84ff | 2022-04-11T11:02:55.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | Zainab18 | null | Zainab18/wikineural-multilingual-ner | 10 | null | transformers | 11,820 | Entry not found |
Shiva12/wikineural-multilingual-ner | 1b1844a6cabb0fa43d1491ed9e674561ce304e9f | 2022-04-11T16:48:56.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | Shiva12 | null | Shiva12/wikineural-multilingual-ner | 10 | null | transformers | 11,821 | Entry not found |
tbosse/bert-base-german-cased-finetuned-subj_v5_11Epoch | f81716fe3f58d282c21cf541b65b47d604fe5ab9 | 2022-04-11T17:08:55.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| token-classification | false | tbosse | null | tbosse/bert-base-german-cased-finetuned-subj_v5_11Epoch | 10 | null | transformers | 11,822 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-german-cased-finetuned-subj_v5_11Epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-german-cased-finetuned-subj_v5_11Epoch
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3467
- Precision: 0.8240
- Recall: 0.8287
- F1: 0.8263
- Accuracy: 0.9198
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 11
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 32 | 0.3485 | 0.6992 | 0.7051 | 0.7021 | 0.8639 |
| No log | 2.0 | 64 | 0.2679 | 0.7947 | 0.7612 | 0.7776 | 0.8994 |
| No log | 3.0 | 96 | 0.2555 | 0.8073 | 0.8118 | 0.8095 | 0.9112 |
| No log | 4.0 | 128 | 0.2591 | 0.8290 | 0.8034 | 0.8160 | 0.9132 |
| No log | 5.0 | 160 | 0.2808 | 0.8450 | 0.8118 | 0.8281 | 0.9158 |
| No log | 6.0 | 192 | 0.2953 | 0.8386 | 0.8174 | 0.8279 | 0.9172 |
| No log | 7.0 | 224 | 0.3164 | 0.8347 | 0.8371 | 0.8359 | 0.9204 |
| No log | 8.0 | 256 | 0.3267 | 0.8329 | 0.8258 | 0.8293 | 0.9178 |
| No log | 9.0 | 288 | 0.3373 | 0.8268 | 0.8315 | 0.8291 | 0.9198 |
| No log | 10.0 | 320 | 0.3450 | 0.8324 | 0.8230 | 0.8277 | 0.9211 |
| No log | 11.0 | 352 | 0.3467 | 0.8240 | 0.8287 | 0.8263 | 0.9198 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
veddm/paraphrase-multilingual-MiniLM-L12-v2-finetuned-DIT | 8153676d28e97a52e83178c39918f2ce8f379dc7 | 2022-04-12T15:46:27.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-generation | false | veddm | null | veddm/paraphrase-multilingual-MiniLM-L12-v2-finetuned-DIT | 10 | null | transformers | 11,823 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: paraphrase-multilingual-MiniLM-L12-v2-finetuned-DIT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# paraphrase-multilingual-MiniLM-L12-v2-finetuned-DIT
This model is a fine-tuned version of [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 7.4783
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 91 | 9.2794 |
| No log | 2.0 | 182 | 8.1920 |
| No log | 3.0 | 273 | 7.6378 |
| No log | 4.0 | 364 | 7.4783 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cpu
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Salesforce/codegen-16B-nl | 3b002d57ed722e369199c1430923fe0b7e2402de | 2022-06-28T18:08:08.000Z | [
"pytorch",
"codegen",
"text-generation",
"arxiv:2203.13474",
"transformers",
"license:bsd-3-clause"
]
| text-generation | false | Salesforce | null | Salesforce/codegen-16B-nl | 10 | 2 | transformers | 11,824 | ---
license: bsd-3-clause
---
# CodeGen (CodeGen-NL 16B)
## Model description
CodeGen is a family of autoregressive language models for **program synthesis** from the paper: [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. The models are originally released in [this repository](https://github.com/salesforce/CodeGen), under 3 pre-training data variants (`NL`, `Multi`, `Mono`) and 4 model size variants (`350M`, `2B`, `6B`, `16B`).
The checkpoint included in this repository is denoted as **CodeGen-NL 16B** in the paper, where "NL" means it is pre-trained on the Pile and "16B" refers to the number of trainable parameters.
## Training data
This checkpoint (CodeGen-NL 16B) was pre-trained on [the Pile](https://github.com/EleutherAI/the-pile), a large-scale curated dataset created by [EleutherAI](https://www.eleuther.ai/). Parts of the dataset include code data.
## Training procedure
CodeGen was trained using cross-entropy loss to maximize the likelihood of sequential inputs.
The family of models are trained using multiple TPU-v4-512 by Google, leveraging data and model parallelism.
See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Evaluation results
We evaluate our models on two code generation benchmarks: HumanEval and MTPB. Please refer to the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Intended Use and Limitations
As an autoregressive language model, CodeGen is capable of extracting features from given natural language and programming language texts, and calculating the likelihood of them.
However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well.
## How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-16B-nl")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-16B-nl")
text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
## BibTeX entry and citation info
```bibtex
@article{Nijkamp2022ACP,
title={A Conversational Paradigm for Program Synthesis},
author={Nijkamp, Erik and Pang, Bo and Hayashi, Hiroaki and Tu, Lifu and Wang, Huan and Zhou, Yingbo and Savarese, Silvio and Xiong, Caiming},
journal={arXiv preprint},
year={2022}
}
```
|
Intel/albert-base-v2-sst2-int8-static | aed2b7c8c238bec371b744a926597fa54026dc3e | 2022-06-10T02:41:02.000Z | [
"pytorch",
"albert",
"text-classification",
"en",
"dataset:glue",
"transformers",
"text-classfication",
"int8",
"Intel® Neural Compressor",
"PostTrainingStatic",
"license:apache-2.0"
]
| text-classification | false | Intel | null | Intel/albert-base-v2-sst2-int8-static | 10 | 0 | transformers | 11,825 | ---
language:
- en
license: apache-2.0
tags:
- text-classfication
- int8
- Intel® Neural Compressor
- PostTrainingStatic
datasets:
- glue
metrics:
- accuracy
model_index:
- name: sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE SST2
type: glue
args: sst2
metric:
name: Accuracy
type: accuracy
value: 0.9254587155963303
---
# INT8 albert-base-v2-sst2
### Post-training static quantization
This is an INT8 PyTorch model quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
The original fp32 model comes from the fine-tuned model [Alireza1044/albert-base-v2-sst2](https://huggingface.co/Alireza1044/albert-base-v2-sst2).
The calibration dataloader is the train dataloader. The default calibration sampling size 300 isn't divisible exactly by batch size 8, so the real sampling size is 304.
The linear modules **albert.encoder.albert_layer_groups.0.albert_layers.0.ffn_output.module, albert.encoder.albert_layer_groups.0.albert_layers.0.ffn.module** fall back to fp32 to keep the relative accuracy loss within 1%.
### Test result
| |INT8|FP32|
|---|:---:|:---:|
| **Accuracy (eval-accuracy)** |0.9255|0.9232|
| **Model size (MB)** |25|44.6|
### Load with Intel® Neural Compressor:
```python
from neural_compressor.utils.load_huggingface import OptimizedModel
int8_model = OptimizedModel.from_pretrained(
'Intel/albert-base-v2-sst2-int8-static',
)
```
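Once loaded, the quantized model can be used like the original sequence-classification model; a minimal sketch (the tokenizer of the original fp32 model named above is used, which assumes the INT8 repository does not bundle its own):
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Alireza1044/albert-base-v2-sst2")  # original fp32 model

inputs = tokenizer("This movie was absolutely wonderful.", return_tensors="pt")
with torch.no_grad():
    logits = int8_model(**inputs).logits
print(logits.argmax(dim=-1))  # SST-2: 0 = negative, 1 = positive
```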
|
Helsinki-NLP/opus-mt-tc-big-en-hu | 8ec0362c0de8e9d7d5b238b58845b8a65087b366 | 2022-06-01T13:04:00.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"hu",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-big-en-hu | 10 | null | transformers | 11,826 | ---
language:
- en
- hu
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-en-hu
results:
- task:
name: Translation eng-hun
type: translation
args: eng-hun
dataset:
name: flores101-devtest
type: flores_101
args: eng hun devtest
metrics:
- name: BLEU
type: bleu
value: 29.6
- task:
name: Translation eng-hun
type: translation
args: eng-hun
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: eng-hun
metrics:
- name: BLEU
type: bleu
value: 38.7
- task:
name: Translation eng-hun
type: translation
args: eng-hun
dataset:
name: newstest2009
type: wmt-2009-news
args: eng-hun
metrics:
- name: BLEU
type: bleu
value: 20.3
---
# opus-mt-tc-big-en-hu
Neural machine translation model for translating from English (en) to Hungarian (hu).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-02-25
* source language(s): eng
* target language(s): hun
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-02-25.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hun/opusTCv20210807+bt_transformer-big_2022-02-25.zip)
* more information released models: [OPUS-MT eng-hun README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-hun/README.md)
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"I wish I hadn't seen such a horrible film.",
"She's at school."
]
model_name = "pytorch-models/opus-mt-tc-big-en-hu"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
    print(tokenizer.decode(t, skip_special_tokens=True))
# expected output:
# Bárcsak ne láttam volna ilyen szörnyű filmet.
# Iskolában van.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-hu")
print(pipe("I wish I hadn't seen such a horrible film."))
# expected output: Bárcsak ne láttam volna ilyen szörnyű filmet.
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-02-25.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hun/opusTCv20210807+bt_transformer-big_2022-02-25.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-02-25.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hun/opusTCv20210807+bt_transformer-big_2022-02-25.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| eng-hun | tatoeba-test-v2021-08-07 | 0.62096 | 38.7 | 13037 | 79562 |
| eng-hun | flores101-devtest | 0.60159 | 29.6 | 1012 | 22183 |
| eng-hun | newssyscomb2009 | 0.51918 | 20.6 | 502 | 9733 |
| eng-hun | newstest2009 | 0.50973 | 20.3 | 2525 | 54965 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 3405783
* port time: Wed Apr 13 17:21:20 EEST 2022
* port machine: LM0-400-22516.local
|
omar47/wav2vec2-large-xls-r-300m-urdu-colab | 4d3c1b05bff9375203a0c3ee95c46da1cc25a9df | 2022-04-13T19:52:55.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
]
| automatic-speech-recognition | false | omar47 | null | omar47/wav2vec2-large-xls-r-300m-urdu-colab | 10 | null | transformers | 11,827 | Entry not found |
x180/macbert4csc-scalarmix-base-chinese | 51f73a48a28ce312b747c8a6e50fab2446335a1c | 2022-04-14T05:53:56.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | false | x180 | null | x180/macbert4csc-scalarmix-base-chinese | 10 | 1 | transformers | 11,828 | ---
license: apache-2.0
---
## Introduction
> A masked language model fine-tuned from MacBERT for Chinese spelling (typo) correction.
This model is a modification of [shibing624/macbert4csc-base-chinese](https://huggingface.co/shibing624/macbert4csc-base-chinese/tree/main);
the corresponding [source code is here](https://github.com/shibing624/pycorrector/tree/master/pycorrector/macbert).
## Usage
Usage follows [shibing624/macbert4csc-base-chinese](https://huggingface.co/shibing624/macbert4csc-base-chinese); a short sketch is given below.
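A minimal correction sketch, assuming this checkpoint keeps the parent model's masked-LM interface (the example sentence is hypothetical):
```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

model_id = "x180/macbert4csc-scalarmix-base-chinese"
tokenizer = BertTokenizer.from_pretrained(model_id)
model = BertForMaskedLM.from_pretrained(model_id)

text = "今天新情很好"  # hypothetical input containing one typo
inputs = tokenizer([text], return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred_ids = logits.argmax(dim=-1)[0]
# drop the [CLS]/[SEP] positions and decode the predicted (corrected) characters
corrected = tokenizer.decode(pred_ids[1:-1], skip_special_tokens=True).replace(" ", "")
print(corrected)
```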
## Changes
Two main changes were made:
1. The loss weights for the MLM objective and the binary typo-detection objective were changed to 0.9 and 0.1 (not necessarily the optimal values).
2. A ScalarMix layer was introduced for the binary typo-detection head; the original code uses only the last layer of hidden_states, which seemed a bit deep and possibly harder to learn from.
## Reflections
Overall, the binary typo-detection objective did not have a pronounced effect on the model, and the final model does not outperform the original author's by much, so this code and model are uploaded mainly as a record for learning and reflection.
Results obtained with [pycorrector eval.py](https://github.com/shibing624/pycorrector/blob/master/pycorrector/utils/eval.py) are as follows:
corpus dataset:
```
Sentence Level: acc:0.7200, precision:0.8804, recall:0.6154, f1:0.7244, cost time:5.67 s
```
sighan2015 dataset:
```
Sentence Level: acc:0.7973, precision:0.8265, recall:0.7459, f1:0.7841, cost time:11.19 s
```
|
huggingtweets/jeffbezos | e5a5a7abee5a2646fc0081ffe91a2c1f33369cd2 | 2022-05-27T11:34:00.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/jeffbezos | 10 | null | transformers | 11,829 | ---
language: en
thumbnail: http://www.huggingtweets.com/jeffbezos/1653651235626/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/669103856106668033/UF3cgUk4_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Jeff Bezos</div>
<div style="text-align: center; font-size: 14px;">@jeffbezos</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Jeff Bezos.
| Data | Jeff Bezos |
| --- | --- |
| Tweets downloaded | 346 |
| Retweets | 25 |
| Short tweets | 20 |
| Tweets kept | 301 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/jxv4rw0y/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jeffbezos's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/pcrlflzk) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/pcrlflzk/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jeffbezos')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
MartinoMensio/racism-models-regression-w-m-vote-epoch-4 | e38d990c7cd14098a5b804a3105f24b61de7ee90 | 2022-05-04T16:22:45.000Z | [
"pytorch",
"bert",
"text-classification",
"es",
"transformers",
"license:mit"
]
| text-classification | false | MartinoMensio | null | MartinoMensio/racism-models-regression-w-m-vote-epoch-4 | 10 | null | transformers | 11,830 | ---
language: es
license: mit
widget:
- text: "y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!"
---
### Description
This model is a fine-tuned version of [BETO (spanish bert)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) that has been trained on the *Datathon Against Racism* dataset (2022).
We performed several experiments that will be described in the upcoming paper "Estimating Ground Truth in a Low-labelled Data Regime: A Study of Racism Detection in Spanish" (NEATClasS 2022).
We applied 6 different methods of ground-truth estimation, and for each one we performed 4 epochs of fine-tuning. The result is a set of 24 models:
| method | epoch 1 | epoch 2 | epoch 3 | epoch 4 |
|--- |--- |--- |--- |--- |
| raw-label | [raw-label-epoch-1](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-1) | [raw-label-epoch-2](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-2) | [raw-label-epoch-3](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-3) | [raw-label-epoch-4](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-4) |
| m-vote-strict | [m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-1) | [m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-2) | [m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-3) | [m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-4) |
| m-vote-nonstrict | [m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-1) | [m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-2) | [m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-3) | [m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-4) |
| regression-w-m-vote | [regression-w-m-vote-epoch-1](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-1) | [regression-w-m-vote-epoch-2](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-2) | [regression-w-m-vote-epoch-3](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-3) | [regression-w-m-vote-epoch-4](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-4) |
| w-m-vote-strict | [w-m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-1) | [w-m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-2) | [w-m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-3) | [w-m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-4) |
| w-m-vote-nonstrict | [w-m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-1) | [w-m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2) | [w-m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-3) | [w-m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4) |
This model is `regression-w-m-vote-epoch-4`
### Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
from transformers.pipelines import TextClassificationPipeline
class TextRegressionPipeline(TextClassificationPipeline):
"""
Class based on the TextClassificationPipeline from transformers.
The difference is that instead of being based on a classifier, it is based on a regressor.
You can specify the regression threshold when you call the pipeline or when you instantiate the pipeline.
"""
def __init__(self, **kwargs):
"""
Builds a new Pipeline based on regression.
regression_threshold: Optional(float). If None, the pipeline will simply output the score. If set to a specific value, the output will be both the score and the label.
"""
self.regression_threshold = kwargs.pop("regression_threshold", None)
super().__init__(**kwargs)
def __call__(self, *args, **kwargs):
"""
You can also specify the regression threshold when you call the pipeline.
regression_threshold: Optional(float). If None, the pipeline will simply output the score. If set to a specific value, the output will be both the score and the label.
"""
self.regression_threshold_call = kwargs.pop("regression_threshold", None)
result = super().__call__(*args, **kwargs)
return result
def postprocess(self, model_outputs, function_to_apply=None, return_all_scores=False):
outputs = model_outputs["logits"][0]
outputs = outputs.numpy()
scores = outputs
score = scores[0]
regression_threshold = self.regression_threshold
# override the specific threshold if it is specified in the call
if self.regression_threshold_call:
regression_threshold = self.regression_threshold_call
if regression_threshold:
return {"label": 'racist' if score > regression_threshold else 'non-racist', "score": score}
else:
return {"score": score}
model_name = 'regression-w-m-vote-epoch-4'
tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased")
full_model_path = f'MartinoMensio/racism-models-{model_name}'
model = AutoModelForSequenceClassification.from_pretrained(full_model_path)
pipe = TextRegressionPipeline(model=model, tokenizer=tokenizer)
texts = [
'y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!',
'Es que los judíos controlan el mundo'
]
# just get the score of regression
print(pipe(texts))
# [{'score': 0.8345461}, {'score': 0.48615143}]
# or also specify a threshold to cut racist/non-racist
print(pipe(texts, regression_threshold=0.9))
# [{'label': 'non-racist', 'score': 0.8345461}, {'label': 'non-racist', 'score': 0.48615143}]
```
For more details, see https://github.com/preyero/neatclass22
|
paulagarciaserrano/roberta-depression-detection | 30514b351f5c553299e5b8500d29630df0337768 | 2022-05-05T13:42:30.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:Shared task on Detecting Signs of Depression from Social Media Text at LT-EDI 2022-ACL 2022",
"transformers"
]
| text-classification | false | paulagarciaserrano | null | paulagarciaserrano/roberta-depression-detection | 10 | null | transformers | 11,831 | ---
language: "en"
datasets:
- Shared task on Detecting Signs of Depression from Social Media Text at LT-EDI 2022-ACL 2022
metrics:
- Macro F1-Score
---
# Roberta for depression signs detection
This model is a fine-tuned version of the <a href="https://huggingface.co/cardiffnlp/twitter-roberta-base">cardiffnlp/twitter-roberta-base</a> model. It has been trained using a recently published corpus: <a href="https://competitions.codalab.org/competitions/36410#learn_the_details">Shared task on Detecting Signs of Depression from Social Media Text at LT-EDI 2022-ACL 2022</a>.
The obtained macro F1-score is 0.54 on the development set of the competition.
# Intended uses
This model is trained to classify the given text into one of the following classes: *moderate*, *severe*, or *not depression*.
It corresponds to a **multiclass classification** task.
# How to use
You can use this model directly with a pipeline for text classification:
```python
>>> from transformers import pipeline
>>> classifier = pipeline("text-classification", model="paulagarciaserrano/roberta-depression-detection")
>>> your_text = "I am very sad."
>>> classifier(your_text)
```
# Training and evaluation data
The **train** dataset characteristics are:
<table>
<tr>
<th>Class</th>
<th>Nº sentences</th>
<th>Avg. document length (in sentences)</th>
<th>Nº words</th>
<th>Avg. sentence length (in words)</th>
</tr>
<tr>
<th>not depression</th>
<td>7,884</td>
<td>4</td>
<td>153,738</td>
<td>78</td>
</tr>
<tr>
<th>moderate</th>
<td>36,114</td>
<td>6</td>
<td>601,900</td>
<td>100</td>
</tr>
<tr>
<th>severe</th>
<td>9,911</td>
<td>11</td>
<td>126,140</td>
<td>140</td>
</tr>
</table>
Similarly, the **evaluation** dataset characteristics are:
<table>
<tr>
<th>Class</th>
<th>Nº sentences</th>
<th>Avg. document length (in sentences)</th>
<th>Nº words</th>
<th>Avg. sentence length (in words)</th>
</tr>
<tr>
<th>not depression</th>
<td>3,660</td>
<td>2</td>
<td>10,980</td>
<td>6</td>
</tr>
<tr>
<th>moderate</th>
<td>66,874</td>
<td>29</td>
<td>804,794</td>
<td>349</td>
</tr>
<tr>
<th>severe</th>
<td>2,880</td>
<td>8</td>
<td>75,240</td>
<td>209</td>
</tr>
</table>
# Training hyperparameters
The following hyperparameters were used during training:
* learning_rate: 2e-05
* evaluation_strategy: epoch
* save_strategy: epoch
* per_device_train_batch_size: 8
* per_device_eval_batch_size: 8
* num_train_epochs: 5
* seed: 10
* weight_decay: 0.01
* metric_for_best_model: macro-f1 |
theta/MBTI-ckiplab-albert | e8aba53863fcd2c63205bf6bb91cb219cc5ea07c | 2022-05-14T12:00:14.000Z | [
"pytorch",
"albert",
"text-classification",
"zh",
"transformers",
"MBTI",
"zh-tw",
"generated_from_trainer",
"model-index"
]
| text-classification | false | theta | null | theta/MBTI-ckiplab-albert | 10 | null | transformers | 11,832 | ---
language:
- zh
tags:
- MBTI
- zh
- zh-tw
- generated_from_trainer
model-index:
- name: MBTI-ckiplab-albert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MBTI-ckiplab-albert
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
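A minimal usage sketch, assuming the checkpoint loads with the standard text-classification pipeline and bundles its own tokenizer (the MBTI label names returned by the model are not documented here, and the example sentence is hypothetical):
```python
from transformers import pipeline

# Minimal sketch; the label strings depend on how the classifier head was configured.
classifier = pipeline("text-classification", model="theta/MBTI-ckiplab-albert")
print(classifier("我喜歡一個人安靜地看書,勝過參加熱鬧的聚會。"))
```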
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
xysmalobia/distilbert-base-uncased-finetuned-emotion | 6ec49cdb94f5dc3c0d79ef6eba3902c366b37d16 | 2022-04-17T20:09:18.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | xysmalobia | null | xysmalobia/distilbert-base-uncased-finetuned-emotion | 10 | null | transformers | 11,833 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.9227457538297092
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2161
- Accuracy: 0.923
- F1: 0.9227
## Model description
More information needed
## Intended uses & limitations
More information needed
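A minimal usage sketch, assuming the fine-tuned checkpoint is used through the standard text-classification pipeline (the example sentence is hypothetical):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="xysmalobia/distilbert-base-uncased-finetuned-emotion",
)
# The emotion dataset has six classes (sadness, joy, love, anger, fear, surprise);
# the exact label strings returned depend on the saved config.
print(classifier("I can't stop smiling today, everything went right."))
```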
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8365 | 1.0 | 250 | 0.3102 | 0.9075 | 0.9051 |
| 0.246 | 2.0 | 500 | 0.2161 | 0.923 | 0.9227 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu102
- Datasets 2.1.0
- Tokenizers 0.12.1
|
stevenlx96/distilbert-base-uncased-finetuned-hated | 3a9503e30167f42331022bf69301bab25256a9b8 | 2022-04-19T11:30:24.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | stevenlx96 | null | stevenlx96/distilbert-base-uncased-finetuned-hated | 10 | null | transformers | 11,834 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-hated
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-hated
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5042
- Accuracy: 0.8135
- F1: 0.8127
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7267 | 1.0 | 215 | 0.5443 | 0.7832 | 0.7833 |
| 0.4548 | 2.0 | 430 | 0.5042 | 0.8135 | 0.8127 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
GPL/bioasq-msmarco-distilbert-gpl | 56eeded4f649cb026346fa35c8a37e7114e39590 | 2022-04-19T16:41:44.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
]
| sentence-similarity | false | GPL | null | GPL/bioasq-msmarco-distilbert-gpl | 10 | null | sentence-transformers | 11,835 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# GPL/bioasq-msmarco-distilbert-gpl
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('GPL/bioasq-msmarco-distilbert-gpl')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('GPL/bioasq-msmarco-distilbert-gpl')
model = AutoModel.from_pretrained('GPL/bioasq-msmarco-distilbert-gpl')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=GPL/bioasq-msmarco-distilbert-gpl)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 140000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Kateryna/eva_ru_forum_headlines | 2d3d0cf874260663f497b5feb0cd691dc5d43faf | 2022-04-21T02:19:58.000Z | [
"pytorch",
"t5",
"text2text-generation",
"ru",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Kateryna | null | Kateryna/eva_ru_forum_headlines | 10 | null | transformers | 11,836 | ---
language:
- ru
widget:
- text: "Я влюбилась в одного парня. Каждый раз, когда он меня видит, он плюется и переходит на другую сторону улицы. Как вы думаете, он меня любит?"
- text: "Дочке 15, книг не читает, вся жизнь (вне школы) в телефоне на кровати. Любознательности ноль. Куда-то поехать в новое место, узнать что-то, найти интересные курсы - вообще не про нее. Учеба все хуже, багажа знаний уже нет, списывает и выкручивается в течение четверти, как контрольная или что-то посерьезнее, где не списать - на 2-3. При любой возможности не ходит в школу (голова болит, можно сегодня не пойду. а потом пятница, что на один день ходить...)"
- "Ребёнок учится в 8 классе. По алгебре одни тройки. Но это точно 2. Просто учитель не будет ставить в четверти 2. Она гуманитарий. Алгебра никак не идёт. Репетитор сейчас занимается, понимает только лёгкие темы. Я боюсь, что провалит ОГЭ. Там пересдать можно? А если опять 2,это второй год?"
---
# eva_ru_forum_headlines
## Model Description
The model was trained on forum topic names and first posts (100-150 words). It generates short headlines (3-5 words), in contrast to the headlines produced by models trained on newspaper articles.
"I do not know how to title this post" can be a valid headline.
"What would you do in my place?" is one of the most popular headlines.
### Usage
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration
model_name = "Kateryna/eva_ru_forum_headlines"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
text = "Я влюбилась в одного парня. Каждый раз, когда он меня видит, он плюется и переходит на другую сторону улицы. Как вы думаете, он меня любит?"
input_ids = tokenizer(
[text],
max_length=150,
add_special_tokens=True,
padding="max_length",
truncation=True,
return_tensors="pt"
)["input_ids"]
output_ids = model.generate(
input_ids=input_ids,
max_length=25,
num_beams=4,
repetition_penalty=5.0,
no_repeat_ngram_size=4
)[0]
headline = tokenizer.decode(output_ids, skip_special_tokens=True)
print(headline)
```
### Training and Validation
Training dataset: https://huggingface.co/datasets/Kateryna/eva_ru_forum_headlines
From all available posts and topic names, I selected only the posts with abstractive topic names, i.e. topic names that do not exactly match anything in the corresponding post.
The base model is cointegrated/rut5-base
Training parameters:
- max_source_tokens_count = 150
- max_target_tokens_count = 25
- learning_rate = 0.0007
- num_train_epochs = 3
- batch_size = 8
- gradient_accumulation_steps = 96
ROUGE and BLEU scores were not very helpful for choosing the best model.
I manually evaluated ~100 generated headlines from each candidate model.
1. The smaller gradient_accumulation_steps is, the more abstractive the headlines become, but they also become less and less related to the corresponding posts. The worst model, with gradient_accumulation_steps = 1, produced headlines that were all abstractive but essentially random.
2. The source data consists of real short texts written by ordinary people without any editing. In many cases the forum posts are not connected sentences, and it is not clear what the author wanted to say or discuss. Sometimes the text contradicts itself, and only the real topic name reveals what it is all about. Naturally, the model fails to produce a good headline in such cases.
https://github.com/KaterynaD/eva.ru/tree/main/Code/Notebooks/9.%20Headlines
|
nirmalkumar/distilledgpt2-cric-commentary | 48422d894e5348447996279ca5c4ec89fe764fe4 | 2022-04-20T12:02:02.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | nirmalkumar | null | nirmalkumar/distilledgpt2-cric-commentary | 10 | null | transformers | 11,837 | Entry not found |
thanawan/bert-base-uncased-finetuned-humordetection | 63863e231f6bfb7a621f6347d1326d33388d027e | 2022-04-21T06:35:51.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | thanawan | null | thanawan/bert-base-uncased-finetuned-humordetection | 10 | null | transformers | 11,838 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: bert-base-uncased-finetuned-humordetection
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-humordetection
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3136
- F1: 0.9586
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 375 | 0.1768 | 0.9507 |
| 0.2266 | 2.0 | 750 | 0.1910 | 0.9553 |
| 0.08 | 3.0 | 1125 | 0.2822 | 0.9529 |
| 0.0194 | 4.0 | 1500 | 0.2989 | 0.9560 |
| 0.0194 | 5.0 | 1875 | 0.3136 | 0.9586 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Goud/DarijaBERT-summarization-goud | e6cd677a42ff8fffe7fd18c835e94bedde93d3d5 | 2022-04-29T15:07:03.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"Moroccan Arabic (MA)",
"Modern Standard Arabic (MSA)",
"dataset:Goud/Goud-sum",
"transformers",
"summarization",
"autotrain_compatible"
]
| summarization | false | Goud | null | Goud/DarijaBERT-summarization-goud | 10 | 1 | transformers | 11,839 | ---
datasets:
- Goud/Goud-sum
language:
- "Moroccan Arabic (MA)"
- "Modern Standard Arabic (MSA)"
metrics:
- rouge
tags:
- summarization
widget:
-
text: "توصل الاتحاد الأوروبي، في وقت مبكر من اليوم السبت، إلى اتفاق تاريخي يستهدف خطاب الكراهية والمعلومات المضللة والمحتويات الضارة الأخرى الموجودة على شبكة الإنترنيت. وحسب تقارير صحفية، سيجبر القانون شركات التكنولوجيا الكبرى على مراقبة نفسها بشكل أكثر صرامة، ويسهل على المستخدمين الإبلاغ عن المشاكل، ويمكن الاتفاق المنظمين من معاقبة الشركات غير الممتثلة بغرامات تقدر بالملايير. ويركز الاتفاق على قواعد جديدة تتطلب من شركات التكنولوجيا العملاقة بذل المزيد من الجهد لمراقبة المحتوى على منصاتها ودفع رسوم للجهات المنظمة التي تراقب مدى امتثالها. ويعد قانون الخدمات الرقمية الشق الثاني من إستراتيجية المفوضة الأوروبية لشؤون المنافسة، مارغريت فيستاغر، للحد من هيمنة وحدة غوغل التابعة لألفابت، وميتا (فيسبوك سابقا) وغيرهما من شركات التكنولوجيا الأمريكية العملاقة. وقالت فيستاغر في تغريدة “توصلنا إلى اتفاق بشأن قانون الخدمات الرقمية، موضحة أن القانون سيضمن أن ما يعتبر غير قانوني في حالة عدم الاتصال بالشبكة ينظر إليه أيضا ويتم التعامل معه على أنه غير قانوني عبر الشبكة (الإنترنت) – ليس كشعار (ولكن) كواقع”. وتواجه الشركات بموجب قانون الخدمات الرقمية غرامات تصل إلى 6 في المائة من إجمالي عملياتها على مستوى العالم لانتهاك القواعد بينما قد تؤدي الانتهاكات المتكررة إلى حظرها من ممارسة أعمالها في الاتحاد الأوروبي. وأيدت دول الاتحاد والمشرعون الشهر الماضي القواعد التي طرحتها فيستاغر والمسماة قانون الأسواق الرقمية التي قد تجبر غوغل وأمازون وأبل وميتا وميكروسوفت على تغيير ممارساتها الأساسية في أوروبا. "
---
This model was introduced in [this paper](https://openreview.net/forum?id=BMVq5MELb9). It is an encoder-decoder model that was initialized with the [DarijaBERT](https://huggingface.co/Kamel/DarijaBERT) checkpoint. The model is fine-tuned for text summarization on the [Goud dataset](https://huggingface.co/datasets/Goud/Goud-sum).
## How to use
This is how you can use this model
```python
from transformers import EncoderDecoderModel, BertTokenizer
article = """توصل الاتحاد الأوروبي، في وقت مبكر من اليوم السبت، إلى اتفاق تاريخي يستهدف خطاب الكراهية والمعلومات المضللة والمحتويات الضارة الأخرى الموجودة على شبكة الإنترنيت.
وحسب تقارير صحفية، سيجبر القانون شركات التكنولوجيا الكبرى على مراقبة نفسها بشكل أكثر صرامة، ويسهل على المستخدمين الإبلاغ عن المشاكل، ويمكن الاتفاق المنظمين من معاقبة الشركات غير الممتثلة بغرامات تقدر بالملايير.
ويركز الاتفاق على قواعد جديدة تتطلب من شركات التكنولوجيا العملاقة بذل المزيد من الجهد لمراقبة المحتوى على منصاتها ودفع رسوم للجهات المنظمة التي تراقب مدى امتثالها.
ويعد قانون الخدمات الرقمية الشق الثاني من إستراتيجية المفوضة الأوروبية لشؤون المنافسة، مارغريت فيستاغر، للحد من هيمنة وحدة غوغل التابعة لألفابت، وميتا (فيسبوك سابقا) وغيرهما من شركات التكنولوجيا الأمريكية العملاقة.
وقالت فيستاغر في تغريدة “توصلنا إلى اتفاق بشأن قانون الخدمات الرقمية، موضحة أن القانون سيضمن أن ما يعتبر غير قانوني في حالة عدم الاتصال بالشبكة ينظر إليه أيضا ويتم التعامل معه على أنه غير قانوني عبر الشبكة (الإنترنت) – ليس كشعار (ولكن) كواقع”.
وتواجه الشركات بموجب قانون الخدمات الرقمية غرامات تصل إلى 6 في المائة من إجمالي عملياتها على مستوى العالم لانتهاك القواعد بينما قد تؤدي الانتهاكات المتكررة إلى حظرها من ممارسة أعمالها في الاتحاد الأوروبي.
وأيدت دول الاتحاد والمشرعون الشهر الماضي القواعد التي طرحتها فيستاغر والمسماة قانون الأسواق الرقمية التي قد تجبر غوغل وأمازون وأبل وميتا وميكروسوفت على تغيير ممارساتها الأساسية في أوروبا.
"""
tokenizer = BertTokenizer.from_pretrained("Goud/DarijaBERT-summarization-goud")
model = EncoderDecoderModel.from_pretrained("Goud/DarijaBERT-summarization-goud")
input_ids = tokenizer(article, return_tensors="pt", truncation=True, padding=True).input_ids
generated = model.generate(input_ids)[0]
output = tokenizer.decode(generated, skip_special_tokens=True)
```
## Citation Information
```
@inproceedings{issam2022goudma,
title={Goud.ma: a News Article Dataset for Summarization in Moroccan Darija},
author={Abderrahmane Issam and Khalil Mrini},
booktitle={3rd Workshop on African Natural Language Processing},
year={2022},
url={https://openreview.net/forum?id=BMVq5MELb9}
}
``` |
Goud/AraBERT-summarization-goud | ffff7a63b12d84267ce3fd1921cf3687ba76e9be | 2022-04-29T15:06:47.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"Moroccan Arabic (MA)",
"Modern Standard Arabic (MSA)",
"dataset:Goud/Goud-sum",
"transformers",
"summarization",
"autotrain_compatible"
]
| summarization | false | Goud | null | Goud/AraBERT-summarization-goud | 10 | null | transformers | 11,840 | ---
datasets:
- Goud/Goud-sum
language:
- "Moroccan Arabic (MA)"
- "Modern Standard Arabic (MSA)"
metrics:
- rouge
tags:
- summarization
widget:
-
text: "توصل الاتحاد الأوروبي، في وقت مبكر من اليوم السبت، إلى اتفاق تاريخي يستهدف خطاب الكراهية والمعلومات المضللة والمحتويات الضارة الأخرى الموجودة على شبكة الإنترنيت. وحسب تقارير صحفية، سيجبر القانون شركات التكنولوجيا الكبرى على مراقبة نفسها بشكل أكثر صرامة، ويسهل على المستخدمين الإبلاغ عن المشاكل، ويمكن الاتفاق المنظمين من معاقبة الشركات غير الممتثلة بغرامات تقدر بالملايير. ويركز الاتفاق على قواعد جديدة تتطلب من شركات التكنولوجيا العملاقة بذل المزيد من الجهد لمراقبة المحتوى على منصاتها ودفع رسوم للجهات المنظمة التي تراقب مدى امتثالها. ويعد قانون الخدمات الرقمية الشق الثاني من إستراتيجية المفوضة الأوروبية لشؤون المنافسة، مارغريت فيستاغر، للحد من هيمنة وحدة غوغل التابعة لألفابت، وميتا (فيسبوك سابقا) وغيرهما من شركات التكنولوجيا الأمريكية العملاقة. وقالت فيستاغر في تغريدة “توصلنا إلى اتفاق بشأن قانون الخدمات الرقمية، موضحة أن القانون سيضمن أن ما يعتبر غير قانوني في حالة عدم الاتصال بالشبكة ينظر إليه أيضا ويتم التعامل معه على أنه غير قانوني عبر الشبكة (الإنترنت) – ليس كشعار (ولكن) كواقع”. وتواجه الشركات بموجب قانون الخدمات الرقمية غرامات تصل إلى 6 في المائة من إجمالي عملياتها على مستوى العالم لانتهاك القواعد بينما قد تؤدي الانتهاكات المتكررة إلى حظرها من ممارسة أعمالها في الاتحاد الأوروبي. وأيدت دول الاتحاد والمشرعون الشهر الماضي القواعد التي طرحتها فيستاغر والمسماة قانون الأسواق الرقمية التي قد تجبر غوغل وأمازون وأبل وميتا وميكروسوفت على تغيير ممارساتها الأساسية في أوروبا. "
---
This model was introduced in [this paper](https://openreview.net/forum?id=BMVq5MELb9). It is an encoder-decoder model that was initialized with the [bert-base-arabertv02-twitter](https://huggingface.co/aubmindlab/bert-base-arabertv02-twitter) checkpoint. The model is fine-tuned for text summarization on the [Goud dataset](https://huggingface.co/datasets/Goud/Goud-sum).
## How to use
This is how you can use this model
```python
from transformers import EncoderDecoderModel, BertTokenizer
article = """توصل الاتحاد الأوروبي، في وقت مبكر من اليوم السبت، إلى اتفاق تاريخي يستهدف خطاب الكراهية والمعلومات المضللة والمحتويات الضارة الأخرى الموجودة على شبكة الإنترنيت.
وحسب تقارير صحفية، سيجبر القانون شركات التكنولوجيا الكبرى على مراقبة نفسها بشكل أكثر صرامة، ويسهل على المستخدمين الإبلاغ عن المشاكل، ويمكن الاتفاق المنظمين من معاقبة الشركات غير الممتثلة بغرامات تقدر بالملايير.
ويركز الاتفاق على قواعد جديدة تتطلب من شركات التكنولوجيا العملاقة بذل المزيد من الجهد لمراقبة المحتوى على منصاتها ودفع رسوم للجهات المنظمة التي تراقب مدى امتثالها.
ويعد قانون الخدمات الرقمية الشق الثاني من إستراتيجية المفوضة الأوروبية لشؤون المنافسة، مارغريت فيستاغر، للحد من هيمنة وحدة غوغل التابعة لألفابت، وميتا (فيسبوك سابقا) وغيرهما من شركات التكنولوجيا الأمريكية العملاقة.
وقالت فيستاغر في تغريدة “توصلنا إلى اتفاق بشأن قانون الخدمات الرقمية، موضحة أن القانون سيضمن أن ما يعتبر غير قانوني في حالة عدم الاتصال بالشبكة ينظر إليه أيضا ويتم التعامل معه على أنه غير قانوني عبر الشبكة (الإنترنت) – ليس كشعار (ولكن) كواقع”.
وتواجه الشركات بموجب قانون الخدمات الرقمية غرامات تصل إلى 6 في المائة من إجمالي عملياتها على مستوى العالم لانتهاك القواعد بينما قد تؤدي الانتهاكات المتكررة إلى حظرها من ممارسة أعمالها في الاتحاد الأوروبي.
وأيدت دول الاتحاد والمشرعون الشهر الماضي القواعد التي طرحتها فيستاغر والمسماة قانون الأسواق الرقمية التي قد تجبر غوغل وأمازون وأبل وميتا وميكروسوفت على تغيير ممارساتها الأساسية في أوروبا.
"""
tokenizer = BertTokenizer.from_pretrained("Goud/AraBERT-summarization-goud")
model = EncoderDecoderModel.from_pretrained("Goud/AraBERT-summarization-goud")
input_ids = tokenizer(article, return_tensors="pt", truncation=True, padding=True).input_ids
generated = model.generate(input_ids)[0]
output = tokenizer.decode(generated, skip_special_tokens=True)
```
## Citation Information
```
@inproceedings{issam2022goudma,
title={Goud.ma: a News Article Dataset for Summarization in Moroccan Darija},
author={Abderrahmane Issam and Khalil Mrini},
booktitle={3rd Workshop on African Natural Language Processing},
year={2022},
url={https://openreview.net/forum?id=BMVq5MELb9}
}
``` |
AswiN037/xlm-roberta-squad-tamil | aed867e234a43d47d65716dfc6a9d8f9130ea07a | 2022-05-31T04:15:42.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers",
"license:osl-3.0",
"autotrain_compatible"
]
| question-answering | false | AswiN037 | null | AswiN037/xlm-roberta-squad-tamil | 10 | null | transformers | 11,841 | ---
license: osl-3.0
---
Question Answering model
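A minimal usage sketch, assuming the checkpoint works with the standard question-answering pipeline (the Tamil question/context pair is a hypothetical example):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="AswiN037/xlm-roberta-squad-tamil")
result = qa(
    question="இந்தியாவின் தலைநகரம் எது?",            # hypothetical example question
    context="இந்தியாவின் தலைநகரம் புது தில்லி ஆகும்.",  # hypothetical example context
)
print(result)  # a dict with 'answer', 'score', 'start', 'end'
```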
|
surrey-nlp/albert-large-v2-finetuned-abbDet | a5356083cb6317203b97fe016a9d42e264613599 | 2022-04-30T12:15:44.000Z | [
"pytorch",
"albert",
"token-classification",
"en",
"dataset:surrey-nlp/PLOD-unfiltered",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | surrey-nlp | null | surrey-nlp/albert-large-v2-finetuned-abbDet | 10 | 1 | transformers | 11,842 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- surrey-nlp/PLOD-unfiltered
metrics:
- precision
- recall
- f1
- accuracy
language:
- en
widget:
- text: "Light dissolved inorganic carbon (DIC) resulting from the oxidation of hydrocarbons."
- text: "RAFs are plotted for a selection of neurons in the dorsal zone (DZ) of auditory cortex in Figure 1."
- text: "Images were acquired using a GE 3.0T MRI scanner with an upgrade for echo-planar imaging (EPI)."
model-index:
- name: albert-large-v2-finetuned-ner_with_callbacks
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: surrey-nlp/PLOD-unfiltered
type: token-classification
args: PLODunfiltered
metrics:
- name: Precision
type: precision
value: 0.9655166719570215
- name: Recall
type: recall
value: 0.9608483288141474
- name: F1
type: f1
value: 0.9631768437660728
- name: Accuracy
type: accuracy
value: 0.9589410429715819
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-large-v2-finetuned-ner_with_callbacks
This model is a fine-tuned version of [albert-large-v2](https://huggingface.co/albert-large-v2) on the [PLOD-unfiltered](https://huggingface.co/datasets/surrey-nlp/PLOD-unfiltered) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1235
- Precision: 0.9655
- Recall: 0.9608
- F1: 0.9632
- Accuracy: 0.9589
## Model description
More information needed
## Intended uses & limitations
More information needed
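A minimal usage sketch for abbreviation detection as token classification; the call below reuses one of the widget sentences and assumes the tokenizer bundled with the checkpoint:
```python
from transformers import pipeline

detector = pipeline(
    "token-classification",
    model="surrey-nlp/albert-large-v2-finetuned-abbDet",
    aggregation_strategy="simple",  # merge sub-word pieces into word-level spans
)
print(detector("Light dissolved inorganic carbon (DIC) resulting from the oxidation of hydrocarbons."))
```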
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1377 | 0.49 | 7000 | 0.1294 | 0.9563 | 0.9422 | 0.9492 | 0.9436 |
| 0.1244 | 0.98 | 14000 | 0.1165 | 0.9589 | 0.9504 | 0.9546 | 0.9499 |
| 0.107 | 1.48 | 21000 | 0.1140 | 0.9603 | 0.9509 | 0.9556 | 0.9511 |
| 0.1088 | 1.97 | 28000 | 0.1086 | 0.9613 | 0.9551 | 0.9582 | 0.9536 |
| 0.0918 | 2.46 | 35000 | 0.1059 | 0.9617 | 0.9582 | 0.9600 | 0.9556 |
| 0.0847 | 2.95 | 42000 | 0.1067 | 0.9620 | 0.9586 | 0.9603 | 0.9559 |
| 0.0734 | 3.44 | 49000 | 0.1188 | 0.9646 | 0.9588 | 0.9617 | 0.9574 |
| 0.0725 | 3.93 | 56000 | 0.1065 | 0.9660 | 0.9599 | 0.9630 | 0.9588 |
| 0.0547 | 4.43 | 63000 | 0.1273 | 0.9662 | 0.9602 | 0.9632 | 0.9590 |
| 0.0542 | 4.92 | 70000 | 0.1235 | 0.9655 | 0.9608 | 0.9632 | 0.9589 |
| 0.0374 | 5.41 | 77000 | 0.1401 | 0.9647 | 0.9613 | 0.9630 | 0.9586 |
| 0.0417 | 5.9 | 84000 | 0.1380 | 0.9641 | 0.9622 | 0.9632 | 0.9588 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.1+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Sarim24/distilbert-base-uncased-finetuned-clinc | 1b83ff5ab738567d549ea2014a2391088b579192 | 2022-04-23T19:40:13.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Sarim24 | null | Sarim24/distilbert-base-uncased-finetuned-clinc | 10 | null | transformers | 11,843 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9116129032258065
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7730
- Accuracy: 0.9116
## Model description
More information needed
## Intended uses & limitations
More information needed
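A minimal usage sketch for intent detection with the fine-tuned checkpoint (the query below is a hypothetical example; the returned label should be one of the clinc_oos intent classes):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Sarim24/distilbert-base-uncased-finetuned-clinc",
)
print(classifier("Can you transfer 100 dollars from savings to checking?"))
```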
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.3075 | 0.7416 |
| 3.8069 | 2.0 | 636 | 1.8792 | 0.8384 |
| 3.8069 | 3.0 | 954 | 1.1514 | 0.8939 |
| 1.6848 | 4.0 | 1272 | 0.8567 | 0.9077 |
| 0.8902 | 5.0 | 1590 | 0.7730 | 0.9116 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
magistermilitum/roberta-multilingual-medieval-ner | bee8c0fec9cadf4aeeb2488df4d5fba7582bd653 | 2022-04-24T21:42:00.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"Latin",
"French",
"Spanish",
"transformers",
"text",
"named entity recognition",
"roberta",
"historical languages",
"precision",
"recall",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | magistermilitum | null | magistermilitum/roberta-multilingual-medieval-ner | 10 | null | transformers | 11,844 | ---
language:
- Latin
- French
- Spanish
license: cc-by-nc-4.0
tags:
- text # Example: audio
- named entity recognition
- roberta
- historical languages
- precision # Example: wer. Use metric id from https://hf.co/metrics
- recall
model-index:
- name: roberta-multilingual-medieval-ner
results:
- task:
type: named entity recognition # Required. Example: automatic-speech-recognition
metrics:
- type: precision
value: 98.01
- type: Recall
value: 97.08
inference:
parameters:
aggregation_strategy: 'simple'
widget:
- text: "In nomine sanctæ et individuæ Trinitatis. Ego Guido, Dei gratia Cathalaunensis episcopus, propter inevitabilem temporum mutationem et casum decedentium quotidie personarum, necesse habemus litteris annotare quod dampnosa delere non possit oblivio. Eapropter notum fieri volumus tam futuris quam presentibus quod, pro remedio animæ meæ et predecessorum nostrorum, abbati et fratribus de Insula altare de Hattunmaisnil dedimus et perpetuo habendum concessimus, salvis custumiis nostris et archidiaconi loci illius. Ne hoc ergo malignorum hominum perversitate aut temporis alteratur incommodo presentem paginam sigilli nostri impressione firmavimus, testibus subnotatis : S. Raynardy capellani, Roberti Armensis, Mathei de Waisseio, Michaeli decani, Hugonis de Monasterio, Hervaudi de Panceio. Data per manum Gerardi cancellarii, anno ab incarnatione Domini millesimo centesimo septuagesimo octavo. "
---
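## How to use
A minimal usage sketch with the token-classification pipeline and the simple aggregation strategy declared in the card metadata (the Latin passage is shortened from the widget example):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="magistermilitum/roberta-multilingual-medieval-ner",
    aggregation_strategy="simple",  # matches the inference parameters above
)
print(ner("Ego Guido, Dei gratia Cathalaunensis episcopus, abbati et fratribus de Insula altare de Hattunmaisnil dedimus."))
```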
|
Miranda/t5-small-train | a5911178380da9aac8bfe2aea88ba7f050ff6551 | 2022-04-30T20:50:31.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| summarization | false | Miranda | null | Miranda/t5-small-train | 10 | null | transformers | 11,845 | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-train
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-train
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2367
- Rouge1: 43.9525
- Rouge2: 22.3403
- Rougel: 38.7683
- Rougelsum: 39.2056
## Model description
More information needed
## Intended uses & limitations
More information needed
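A minimal usage sketch, assuming the checkpoint is used through the summarization pipeline (the input text is a placeholder):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Miranda/t5-small-train")
text = "Replace this placeholder with the document you want to summarize."
print(summarizer(text, max_length=64, min_length=10))
```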
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.6e-05
- train_batch_size: 9
- eval_batch_size: 9
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 3.3237 | 1.0 | 40 | 2.6713 | 34.4731 | 14.9731 | 29.4814 | 29.9747 |
| 2.7401 | 2.0 | 80 | 2.4318 | 38.1153 | 18.3492 | 33.4476 | 33.9181 |
| 2.5882 | 3.0 | 120 | 2.3339 | 41.2707 | 19.8571 | 36.2685 | 36.6119 |
| 2.4264 | 4.0 | 160 | 2.2878 | 42.184 | 20.9666 | 37.3488 | 37.6172 |
| 2.3915 | 5.0 | 200 | 2.2605 | 43.4928 | 21.7195 | 38.4917 | 38.8471 |
| 2.3599 | 6.0 | 240 | 2.2462 | 44.2876 | 22.28 | 38.9234 | 39.3673 |
| 2.3073 | 7.0 | 280 | 2.2398 | 43.9822 | 22.3746 | 38.7625 | 39.0964 |
| 2.3026 | 8.0 | 320 | 2.2367 | 43.9525 | 22.3403 | 38.7683 | 39.2056 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
kSaluja/new-test-model | a99f00dfdea802f1414e9a67732b97f0f76ef908 | 2022-04-25T13:43:56.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | kSaluja | null | kSaluja/new-test-model | 10 | null | transformers | 11,846 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: new-test-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# new-test-model
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0962
- Precision: 0.9704
- Recall: 0.9766
- F1: 0.9735
- Accuracy: 0.9791
## Model description
More information needed
## Intended uses & limitations
More information needed
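A minimal usage sketch, assuming the fine-tuned BERT checkpoint is used for token classification (the entity label set is not documented here, and the sentence is a hypothetical example):
```python
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="kSaluja/new-test-model",
    aggregation_strategy="simple",
)
print(tagger("Paracetamol 500 mg twice daily for five days."))
```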
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 151 | 0.1872 | 0.9295 | 0.9405 | 0.9349 | 0.9535 |
| No log | 2.0 | 302 | 0.1417 | 0.9574 | 0.9652 | 0.9613 | 0.9679 |
| No log | 3.0 | 453 | 0.1028 | 0.9676 | 0.9693 | 0.9684 | 0.9742 |
| 0.3037 | 4.0 | 604 | 0.1063 | 0.9676 | 0.9696 | 0.9686 | 0.9743 |
| 0.3037 | 5.0 | 755 | 0.0962 | 0.9704 | 0.9766 | 0.9735 | 0.9791 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
kSaluja/new-test-model2 | f4687337caccc3ad4a978baa58da488f0ceee11b | 2022-05-02T12:58:39.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | kSaluja | null | kSaluja/new-test-model2 | 10 | null | transformers | 11,847 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: new-test-model2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# new-test-model2
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1040
- Precision: 0.9722
- Recall: 0.9757
- F1: 0.9739
- Accuracy: 0.9808
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 151 | 0.1819 | 0.9360 | 0.9405 | 0.9382 | 0.9540 |
| No log | 2.0 | 302 | 0.1196 | 0.9637 | 0.9639 | 0.9638 | 0.9703 |
| No log | 3.0 | 453 | 0.1322 | 0.9614 | 0.9682 | 0.9648 | 0.9711 |
| 0.2764 | 4.0 | 604 | 0.1071 | 0.9677 | 0.9725 | 0.9701 | 0.9763 |
| 0.2764 | 5.0 | 755 | 0.1084 | 0.9709 | 0.9766 | 0.9737 | 0.9790 |
| 0.2764 | 6.0 | 906 | 0.1015 | 0.9717 | 0.9739 | 0.9728 | 0.9791 |
| 0.0342 | 7.0 | 1057 | 0.1208 | 0.9686 | 0.9727 | 0.9706 | 0.9785 |
| 0.0342 | 8.0 | 1208 | 0.1068 | 0.9680 | 0.9752 | 0.9716 | 0.9798 |
| 0.0342 | 9.0 | 1359 | 0.1028 | 0.9719 | 0.9743 | 0.9731 | 0.9807 |
| 0.0129 | 10.0 | 1510 | 0.1040 | 0.9722 | 0.9757 | 0.9739 | 0.9808 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
chrisvanriemsdijk/finetuned-layoutlmv2-klippa | 4259c412bd3426a615d6a873deb370e36c515517 | 2022-04-29T17:57:15.000Z | [
"pytorch",
"layoutlmv2",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | chrisvanriemsdijk | null | chrisvanriemsdijk/finetuned-layoutlmv2-klippa | 10 | null | transformers | 11,848 | Entry not found |
jason9693/KcELECTRA-small-v2022-apeach | c9c9f37c0d1d3ce52082979f7900c96073e3498e | 2022-04-27T12:04:14.000Z | [
"pytorch",
"electra",
"text-classification",
"ko",
"dataset:jason9693/APEACH",
"transformers"
]
| text-classification | false | jason9693 | null | jason9693/KcELECTRA-small-v2022-apeach | 10 | 1 | transformers | 11,849 | ---
language: ko
widget:
- text: "코딩을 🐶🍾👟같이 하니까 맨날 장애나잖아 이 🧑🦽아"
datasets:
- jason9693/APEACH
--- |
manueltonneau/bert-twitter-pt-is-hired | ddd6e2fdb7264f6d4b25632554d827ff390f1ff2 | 2022-04-27T09:01:09.000Z | [
"pytorch",
"bert",
"text-classification",
"pt",
"arxiv:2203.09178",
"transformers"
]
| text-classification | false | manueltonneau | null | manueltonneau/bert-twitter-pt-is-hired | 10 | null | transformers | 11,850 | ---
language: pt # <-- my language
widget:
- text: "Primeiro dia do novo emprego!"
---
# Detection of employment status disclosures on Twitter
## Model main characteristics:
- class: Is Hired (1), else (0)
- country: BR
- language: Portuguese
- architecture: BERT base
## Model description
This model is a version of `neuralmind/bert-base-portuguese-cased` finetuned to recognize Portuguese tweets where a user mentions that she was hired in the past month. It was trained on Portuguese tweets from users based in Brazil. The task is framed as a binary classification problem with:
- the positive class referring to tweets mentioning that a user was recently hired (label=1)
- the negative class referring to all other tweets (label=0)
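A minimal usage sketch (the tweet is the widget example; how the two classes map onto the returned label strings depends on the saved config and is an assumption here):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="manueltonneau/bert-twitter-pt-is-hired",
)
# label=1 is intended to mean "recently hired"; the pipeline may render it as e.g. "LABEL_1".
print(classifier("Primeiro dia do novo emprego!"))
```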
## Resources
The dataset of Portuguese tweets on which this classifier was trained is open-sourced [here](https://github.com/manueltonneau/twitter-unemployment).
Details on the performance can be found in our [ACL 2022 paper](https://arxiv.org/abs/2203.09178).
## Citation
If you find this model useful, please cite our paper (citation to come soon). |
peringe/finetuning-sentiment-model-3000-samples-pi | 216169adeef312c2ccee27061ffc52c0f075c401 | 2022-04-27T08:58:12.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | peringe | null | peringe/finetuning-sentiment-model-3000-samples-pi | 10 | null | transformers | 11,851 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples-pi
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8633333333333333
- name: F1
type: f1
value: 0.8664495114006515
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples-pi
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3344
- Accuracy: 0.8633
- F1: 0.8664
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
nickmuchi/facebook-data2vec-finetuned-finance-classification | bc1802fd1bdae6a26accc59f7d40911c31df2563 | 2022-04-27T14:31:03.000Z | [
"pytorch",
"tensorboard",
"data2vec-text",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | nickmuchi | null | nickmuchi/facebook-data2vec-finetuned-finance-classification | 10 | null | transformers | 11,852 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: fb-data2vec-finetuned-finance-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fb-data2vec-finetuned-finance-classification
This model is a fine-tuned version of [facebook/data2vec-text-base](https://huggingface.co/facebook/data2vec-text-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8993
- Accuracy: 0.8557
- F1: 0.8563
- Precision: 0.8576
- Recall: 0.8557
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 285 | 0.6704 | 0.6680 | 0.6262 | 0.7919 | 0.6680 |
| 0.6626 | 2.0 | 570 | 0.4731 | 0.8360 | 0.8350 | 0.8346 | 0.8360 |
| 0.6626 | 3.0 | 855 | 0.4598 | 0.8458 | 0.8454 | 0.8452 | 0.8458 |
| 0.3666 | 4.0 | 1140 | 0.4758 | 0.8360 | 0.8352 | 0.8353 | 0.8360 |
| 0.3666 | 5.0 | 1425 | 0.5683 | 0.8340 | 0.8342 | 0.8353 | 0.8340 |
| 0.2316 | 6.0 | 1710 | 0.6234 | 0.8419 | 0.8421 | 0.8447 | 0.8419 |
| 0.2316 | 7.0 | 1995 | 0.7186 | 0.8379 | 0.8385 | 0.8395 | 0.8379 |
| 0.1523 | 8.0 | 2280 | 0.7268 | 0.8439 | 0.8442 | 0.8455 | 0.8439 |
| 0.0928 | 9.0 | 2565 | 0.7364 | 0.8439 | 0.8452 | 0.8494 | 0.8439 |
| 0.0928 | 10.0 | 2850 | 0.7975 | 0.8478 | 0.8476 | 0.8476 | 0.8478 |
| 0.054 | 11.0 | 3135 | 0.9019 | 0.8498 | 0.8509 | 0.8554 | 0.8498 |
| 0.054 | 12.0 | 3420 | 0.8779 | 0.8538 | 0.8548 | 0.8578 | 0.8538 |
| 0.036 | 13.0 | 3705 | 0.8914 | 0.8617 | 0.8626 | 0.8652 | 0.8617 |
| 0.036 | 14.0 | 3990 | 0.8976 | 0.8538 | 0.8547 | 0.8572 | 0.8538 |
| 0.0232 | 15.0 | 4275 | 0.8993 | 0.8557 | 0.8563 | 0.8576 | 0.8557 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
yhshin/latex-ocr | 6828f527d83688d93942fa5311757b50c7b240ba | 2022-04-28T09:44:30.000Z | [
"pytorch",
"vision-encoder-decoder",
"transformers",
"license:mit"
]
| null | false | yhshin | null | yhshin/latex-ocr | 10 | null | transformers | 11,853 | ---
license: mit
---
|
cassiepowell/LaBSE-for-agreement | ed2306de9bbd55da0bf2f102d47fd9bd7b79a0d8 | 2022-04-28T17:56:09.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | cassiepowell | null | cassiepowell/LaBSE-for-agreement | 10 | null | transformers | 11,854 | Entry not found |
icity/distilbert-base-uncased-finetuned-imdb-accelerate | 0409ac972d0b588f1fc59af0b85052a6a31d8a9d | 2022-05-18T15:37:23.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | icity | null | icity/distilbert-base-uncased-finetuned-imdb-accelerate | 10 | null | transformers | 11,855 | Entry not found |
dipteshkanojia/scibert_scivocab_uncased-finetuned-ner | 13c9f7728cc5f246c2726ac19bd730697b328003 | 2022-04-28T22:49:03.000Z | [
"pytorch",
"bert",
"token-classification",
"dataset:plo_dunfiltered_config",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| token-classification | false | dipteshkanojia | null | dipteshkanojia/scibert_scivocab_uncased-finetuned-ner | 10 | null | transformers | 11,856 | ---
tags:
- generated_from_trainer
datasets:
- plo_dunfiltered_config
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: scibert_scivocab_uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: plo_dunfiltered_config
type: plo_dunfiltered_config
args: PLODunfiltered
metrics:
- name: Precision
type: precision
value: 0.964925429790286
- name: Recall
type: recall
value: 0.9612323892385586
- name: F1
type: f1
value: 0.9630753691636831
- name: Accuracy
type: accuracy
value: 0.9593916827485913
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# scibert_scivocab_uncased-finetuned-ner
This model is a fine-tuned version of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) on the plo_dunfiltered_config dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1390
- Precision: 0.9649
- Recall: 0.9612
- F1: 0.9631
- Accuracy: 0.9594
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 11
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1176 | 1.4 | 5000 | 0.1243 | 0.9570 | 0.9511 | 0.9540 | 0.9502 |
| 0.0973 | 2.81 | 10000 | 0.1129 | 0.9609 | 0.9572 | 0.9590 | 0.9553 |
| 0.0721 | 4.21 | 15000 | 0.1198 | 0.9645 | 0.9585 | 0.9615 | 0.9578 |
| 0.0634 | 5.62 | 20000 | 0.1259 | 0.9649 | 0.9589 | 0.9619 | 0.9582 |
| 0.0572 | 7.02 | 25000 | 0.1321 | 0.9653 | 0.9609 | 0.9631 | 0.9594 |
| 0.0472 | 8.43 | 30000 | 0.1390 | 0.9649 | 0.9612 | 0.9631 | 0.9594 |
| 0.0434 | 9.83 | 35000 | 0.1442 | 0.9656 | 0.9613 | 0.9634 | 0.9598 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.1+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
jg/distilbert-base-uncased-finetuned-emotion | 0d3044cab0c5268a4ada60e9ecda1d97ae38c276 | 2022-04-30T18:34:27.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | jg | null | jg/distilbert-base-uncased-finetuned-emotion | 10 | null | transformers | 11,857 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: F1
type: f1
value: 0.9235933186731068
- name: Accuracy
type: accuracy
value: 0.9235
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2199
- F1: 0.9236
- Accuracy: 0.9235
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| 0.8072 | 1.0 | 250 | 0.3153 | 0.9023 | 0.905 |
| 0.2442 | 2.0 | 500 | 0.2199 | 0.9236 | 0.9235 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Davincilee/door_inner_with_SA-bert-base-uncased | bffb90944cb3283150513179becd8622f64448ee | 2022-05-13T14:56:11.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| fill-mask | false | Davincilee | null | Davincilee/door_inner_with_SA-bert-base-uncased | 10 | null | transformers | 11,858 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: door_inner_with_SA-bert-base-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# door_inner_with_SA-bert-base-uncased
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1513
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5492 | 1.0 | 96 | 2.3831 |
| 2.4031 | 2.0 | 192 | 2.2963 |
| 2.3391 | 3.0 | 288 | 2.2000 |
| 2.2951 | 4.0 | 384 | 2.2505 |
| 2.2151 | 5.0 | 480 | 2.1691 |
| 2.2237 | 6.0 | 576 | 2.1855 |
| 2.1984 | 7.0 | 672 | 2.2558 |
| 2.1749 | 8.0 | 768 | 2.2019 |
| 2.1475 | 9.0 | 864 | 2.1310 |
| 2.1446 | 10.0 | 960 | 2.1334 |
| 2.1374 | 11.0 | 1056 | 2.1909 |
| 2.1117 | 12.0 | 1152 | 2.2028 |
### Framework versions
- Transformers 4.19.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
LiYouYou/BERT_MRPC | a0f405fc72b4829b38d9c786cdfef76f9e8519f4 | 2022-05-03T16:13:06.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | LiYouYou | null | LiYouYou/BERT_MRPC | 10 | null | transformers | 11,859 | Entry not found |
mrm8488/data2vec-text-base-finetuned-stsb | 207a645147225344224bb0d6dfb3174232496c1a | 2022-05-03T16:28:24.000Z | [
"pytorch",
"tensorboard",
"data2vec-text",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | mrm8488 | null | mrm8488/data2vec-text-base-finetuned-stsb | 10 | null | transformers | 11,860 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: data2vec-text-base-finetuned-stsb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: 0.8716633516590501
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# data2vec-text-base-finetuned-stsb
This model is a fine-tuned version of [facebook/data2vec-text-base](https://huggingface.co/facebook/data2vec-text-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5530
- Pearson: 0.8732
- Spearmanr: 0.8717
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.725353773731373e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 5
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|
| No log | 1.0 | 180 | 1.0650 | 0.8102 | 0.8380 |
| No log | 2.0 | 360 | 0.6211 | 0.8524 | 0.8497 |
| 0.9312 | 3.0 | 540 | 0.5917 | 0.8640 | 0.8642 |
| 0.9312 | 4.0 | 720 | 0.5672 | 0.8695 | 0.8686 |
| 0.9312 | 5.0 | 900 | 0.5530 | 0.8732 | 0.8717 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
anshr/distilgpt2_trained_policy_model_final | 3ed5446ef838008807881ac01c85d952fa378f29 | 2022-05-05T03:01:32.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | anshr | null | anshr/distilgpt2_trained_policy_model_final | 10 | null | transformers | 11,861 | Entry not found |
ml4pubmed/bluebert-pubmed-uncased-L-12-H-768-A-12_pub_section | 7c3fa41bc6dc4d3316815b78d60b03eb75f29ca8 | 2022-05-04T00:03:25.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:pubmed",
"transformers"
]
| text-classification | false | ml4pubmed | null | ml4pubmed/bluebert-pubmed-uncased-L-12-H-768-A-12_pub_section | 10 | null | transformers | 11,862 | ---
language:
- en
datasets:
- pubmed
metrics:
- f1
pipeline_tag: text-classification
widget:
- text: "many pathogenic processes and diseases are the result of an erroneous activation of the complement cascade and a number of inhibitors of complement have thus been examined for anti-inflammatory actions."
example_title: "background example"
- text: "a total of 192 mi patients and 140 control persons were included."
example_title: "methods example"
- text: "mi patients had 18 % higher plasma levels of map44 (iqr 11-25 %) as compared to the healthy control group (p < 0. 001.)"
example_title: "results example"
- text: "the finding that a brief cb group intervention delivered by real-world providers significantly reduced mdd onset relative to both brochure control and bibliotherapy is very encouraging, although effects on continuous outcome measures were small or nonsignificant and approximately half the magnitude of those found in efficacy research, potentially because the present sample reported lower initial depression."
example_title: "conclusions example"
- text: "in order to understand and update the prevalence of myopia in taiwan, a nationwide survey was performed in 1995."
example_title: "objective example"
---
# bluebert-pubmed-uncased-L-12-H-768-A-12_pub_section
- original model file name: textclassifer_bluebert_pubmed_uncased_L-12_H-768_A-12_pubmed_20k
- This is a fine-tuned checkpoint of `bionlp/bluebert_pubmed_uncased_L-12_H-768_A-12` for document section text classification
- possible document section classes are: BACKGROUND, CONCLUSIONS, METHODS, OBJECTIVE, RESULTS (a brief usage sketch is shown below)
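As a quick illustration (a sketch, not code from the original card), the checkpoint can be queried with the standard `transformers` text-classification pipeline, reusing one of the widget examples above:
```python
# Sketch only: classify a sentence into its likely article section.
from transformers import pipeline

section_classifier = pipeline(
    "text-classification",
    model="ml4pubmed/bluebert-pubmed-uncased-L-12-H-768-A-12_pub_section",
)

sentence = "a total of 192 mi patients and 140 control persons were included."
print(section_classifier(sentence))  # expected to lean towards METHODS
```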
## metadata
### training_metrics
- val_accuracy: 0.8367536067962646
- val_matthewscorrcoef: 0.779039740562439
- val_f1score: 0.834040641784668
- val_cross_entropy: 0.5102494359016418
- epoch: 18.0
- train_accuracy_step: 0.7890625
- train_matthewscorrcoef_step: 0.7113237380981445
- train_f1score_step: 0.7884777784347534
- train_cross_entropy_step: 0.5615811944007874
- train_accuracy_epoch: 0.7955580949783325
- train_matthewscorrcoef_epoch: 0.7233519554138184
- train_f1score_epoch: 0.7916122078895569
- train_cross_entropy_epoch: 0.6050205230712891
- test_accuracy: 0.8310602307319641
- test_matthewscorrcoef: 0.7718994617462158
- test_f1score: 0.8283351063728333
- test_cross_entropy: 0.5230290293693542
- date_run: Apr-22-2022_t-05
- huggingface_tag: bionlp/bluebert_pubmed_uncased_L-12_H-768_A-12
|
LiYouYou/bert_finetuning_cn | 76ec7027fa157caab0fabe5cd2f69abc0b5034d8 | 2022-05-04T05:36:19.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | LiYouYou | null | LiYouYou/bert_finetuning_cn | 10 | null | transformers | 11,863 | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert_finetuning_cn
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE SST2
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.8314220183486238
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_finetuning_cn
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5440
- Accuracy: 0.8314
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
jgriffi/distilbert-base-uncased-finetuned-emotion | a7c4f479c62bf3e7d6947ff73eeb4e90aa48e11e | 2022-07-13T12:52:36.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | jgriffi | null | jgriffi/distilbert-base-uncased-finetuned-emotion | 10 | null | transformers | 11,864 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9225
- name: F1
type: f1
value: 0.9224581940083942
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2204
- Accuracy: 0.9225
- F1: 0.9225
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8094 | 1.0 | 250 | 0.3034 | 0.905 | 0.9031 |
| 0.2416 | 2.0 | 500 | 0.2204 | 0.9225 | 0.9225 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Truefilter/bertweet_lg_text_quality | 2d4567a5dacded7f2c639703c0d7801adefec5f6 | 2022-05-05T11:42:31.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | Truefilter | null | Truefilter/bertweet_lg_text_quality | 10 | null | transformers | 11,865 | Entry not found |
antgoldbloom/distilbert-rater | f5565c9b4a48f142fe51acd688d285ff0b44c96a | 2022-05-05T14:45:54.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | antgoldbloom | null | antgoldbloom/distilbert-rater | 10 | null | transformers | 11,866 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-rater
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-rater
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.11.6
|
annasham/DialoGPT-small-myneighborTotoro | 7590eacc761af2e8669bfc8421ea2fca6af26343 | 2022-05-06T00:54:55.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | annasham | null | annasham/DialoGPT-small-myneighborTotoro | 10 | null | transformers | 11,867 | ---
tags:
- conversational
---
Hi! This is a chat bot based on My Neighbor Totoro!
# My Neighbor Totoro DialoGPT Model |
JoMart/albert-base-v2 | eec13d3ca3a3b3c99873750ebce9102e68117358 | 2022-05-07T13:11:42.000Z | [
"pytorch",
"tensorboard",
"albert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | JoMart | null | JoMart/albert-base-v2 | 10 | null | transformers | 11,868 | ---
tags:
- generated_from_trainer
model-index:
- name: albert-base-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0075
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.024 | 1.0 | 4000 | 0.0300 |
| 0.0049 | 2.0 | 8000 | 0.0075 |
| 0.0 | 3.0 | 12000 | 0.0125 |
| 0.0 | 4.0 | 16000 | 0.0101 |
| 0.0056 | 5.0 | 20000 | 0.0104 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.9.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
jg/distilbert-base-uncased-finetuned-spam | 8caa3f5651241c6c6da444e6afdf7274b24381a8 | 2022-05-06T16:52:11.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | jg | null | jg/distilbert-base-uncased-finetuned-spam | 10 | null | transformers | 11,869 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-spam
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-spam
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0325
- F1: 0.9910
- Accuracy: 0.9910
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| 0.1523 | 1.0 | 79 | 0.0369 | 0.9892 | 0.9892 |
| 0.0303 | 2.0 | 158 | 0.0325 | 0.9910 | 0.9910 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
moghis/distilbert-base-uncased-finetuned-emotion | c72ec018f7ca015560025c540da6d09fbc554ff8 | 2022-05-10T18:44:13.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | moghis | null | moghis/distilbert-base-uncased-finetuned-emotion | 10 | null | transformers | 11,870 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.9240615969601907
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2141
- Accuracy: 0.924
- F1: 0.9241
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7828 | 1.0 | 250 | 0.2936 | 0.909 | 0.9070 |
| 0.2344 | 2.0 | 500 | 0.2141 | 0.924 | 0.9241 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
paultimothymooney/distilbert-rater | 50a13f11b7b75a49fed265801c2cb49830b64f9c | 2022-05-10T17:40:47.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | paultimothymooney | null | paultimothymooney/distilbert-rater | 10 | null | transformers | 11,871 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-rater
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-rater
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.11.6
|
SalamaThanks/SalamaThanksTransformer_en2fil_v1 | c4c2512fc1cabdb2e361232435daa3d517e6aa43 | 2022-05-11T05:45:01.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
]
| text2text-generation | false | SalamaThanks | null | SalamaThanks/SalamaThanksTransformer_en2fil_v1 | 10 | null | transformers | 11,872 | ---
license: afl-3.0
---
SalamaThanks Transformer for English-to-Filipino Text Translation version 1.
Based on the Helsinki-NLP/opus-mt-en-tl transformer model. |
SalamaThanks/SalamaThanksTransformer_fil2en_v1 | 0ef6f667264da1f2b836fde18881c5ec8a55edcd | 2022-05-11T05:45:48.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
]
| text2text-generation | false | SalamaThanks | null | SalamaThanks/SalamaThanksTransformer_fil2en_v1 | 10 | null | transformers | 11,873 | ---
license: afl-3.0
---
SalamaThanks Transformer for Filipino-to-English Text Translation version 1.
Based on the Helsinki-NLP/opus-mt-tl-en transformer model. |
huggingtweets/medvedevrussia | 706187f6de4a7ae1a91bf9b977e19e93d3bdd4a0 | 2022-05-15T12:26:28.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/medvedevrussia | 10 | null | transformers | 11,874 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/2348558617/x0vh6bui3sq97vt4jd2n_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Дмитрий Медведев</div>
<div style="text-align: center; font-size: 14px;">@medvedevrussia</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Дмитрий Медведев.
| Data | Дмитрий Медведев |
| --- | --- |
| Tweets downloaded | 1740 |
| Retweets | 300 |
| Short tweets | 48 |
| Tweets kept | 1392 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2s7c3vz9/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @medvedevrussia's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1e00s9pz) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1e00s9pz/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/medvedevrussia')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
juliensimon/sagemaker-distilbert-emotion | cf9bcd3b4da3b1a17d021771a6405e745583cb2a | 2022-05-16T14:28:12.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | juliensimon | null | juliensimon/sagemaker-distilbert-emotion | 10 | null | transformers | 11,875 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: sagemaker-distilbert-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.919
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sagemaker-distilbert-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2402
- Accuracy: 0.919
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9163 | 1.0 | 500 | 0.2402 | 0.919 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
pietrolesci/t5v1_1-large-mnli | 24fd6cdd61a626591fd4a4b6b755044e2ccfab71 | 2022-05-17T09:23:39.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | pietrolesci | null | pietrolesci/t5v1_1-large-mnli | 10 | null | transformers | 11,876 | Entry not found |
vamossyd/bert-base-uncased-emotion | 54c9ed4bee3c652ffb5ada49369d606e6cb75540 | 2022-05-17T23:56:02.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:emotion",
"transformers",
"emotion",
"license:mit"
]
| text-classification | false | vamossyd | null | vamossyd/bert-base-uncased-emotion | 10 | null | transformers | 11,877 | ---
language:
- en
tags:
- text-classification
- emotion
- pytorch
license: mit
datasets:
- emotion
metrics:
- accuracy
- precision
- recall
- f1
---
# bert-base-uncased-emotion
## Model description
`bert-base-uncased` finetuned on the unify-emotion-datasets (https://github.com/sarnthil/unify-emotion-datasets) [~250K texts with 7 labels -- neutral, happy, sad, anger, disgust, surprise, fear], then transferred to
a small sample of 10K hand-tagged StockTwits messages. Optimized for extracting emotions from financial social media, such as StockTwits.
Sequence length 64, learning rate 2e-5, batch size 128, 8 epochs.
For more details, please visit https://github.com/dvamossy/EmTract.
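A minimal usage sketch (an illustration, not from the original card; the message below is invented):
```python
# Sketch only: score an illustrative StockTwits-style message.
from transformers import pipeline

emotion_classifier = pipeline(
    "text-classification",
    model="vamossyd/bert-base-uncased-emotion",
)

print(emotion_classifier("$AAPL to the moon!"))  # one of the 7 emotion labels with a score
```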
## Training data
Data came from https://github.com/sarnthil/unify-emotion-datasets.
|
aakorolyova/primary_and_secondary_outcome_extraction | 2e17740e7592b88ce3cf015327ec3f4f1f0fbf72 | 2022-05-25T19:30:56.000Z | [
"pytorch",
"tf",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | aakorolyova | null | aakorolyova/primary_and_secondary_outcome_extraction | 10 | null | transformers | 11,878 | <h1>Model description</h1>
This is a fine-tuned BioBERT model for extracting primary and secondary outcomes from articles reporting clinical trials.
This model is a version of https://huggingface.co/aakorolyova/primary_outcome_extraction. We had not annotated any secondary outcomes during the related PhD project. To be able to extract secondary outcomes, we manually annotated secondary outcomes in the existing sentences annotated with primary outcomes (only a small percentage of sentences contain secondary outcomes) and performed automatic data augmentation by replacing "primary"/"main"/"principal" with "secondary" and changing tags from B/I-Prim to B/I-Sec in the primary-outcomes data.
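As an illustration of that augmentation step (a sketch of the idea only — the actual scripts are in the project's GitHub repository, and the tag layout below is made up):
```
# Illustrative re-implementation of the augmentation idea (case handling omitted for brevity).
def primary_to_secondary(tokens, tags):
    """Turn a primary-outcome training example into a synthetic secondary-outcome one."""
    replacements = {'primary': 'secondary', 'main': 'secondary', 'principal': 'secondary'}
    new_tokens = [replacements.get(tok.lower(), tok) for tok in tokens]
    new_tags = [tag.replace('Prim', 'Sec') if tag.endswith('Prim') else tag for tag in tags]
    return new_tokens, new_tags

tokens = ['The', 'primary', 'endpoint', 'was', 'overall', 'survival', '.']
tags = ['O', 'O', 'O', 'O', 'B-Prim', 'I-Prim', 'O']
print(primary_to_secondary(tokens, tags))
```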
Model creator: Anna Koroleva
<h1>Intended uses & limitations</h1>
The model is intended to be used for extracting primary and secondary outcomes from texts of clinical trials.
The main limitation is that the model was trained on a mix of manually annotated and automatically augmented data, which might lead to inaccuracies in prediction.
<h1>How to use</h1>
The model should be used with the BioBERT tokeniser. A sample code for getting model predictions is below:
```
import numpy as np
from transformers import AutoTokenizer
from transformers import AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained('dmis-lab/biobert-v1.1')
model = AutoModelForTokenClassification.from_pretrained(r'aakorolyova/primary_and_secondary_outcome_extraction')
text = 'Primary endpoint was overall survival in patients with oesophageal squamous cell carcinoma and PD-L1 combined positive score (CPS) of 10 or more, secondary endpoints were overall survival and progression-free survival in patients with oesophageal squamous cell carcinoma, PD-L1 CPS of 10 or more, and in all randomised patients.'
encoded_input = tokenizer(text, padding=True, truncation=True, max_length=2000, return_tensors='pt')
output = model(**encoded_input)['logits']
output = np.argmax(output.detach().numpy(), axis=2)
print(output)
```
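The raw output above is a matrix of class indices. As a small follow-up sketch (assuming the checkpoint's config carries meaningful `id2label` entries), the indices can be mapped back to tag names and aligned with the tokens:
```
tokens = tokenizer.convert_ids_to_tokens(encoded_input['input_ids'][0].tolist())
labels = [model.config.id2label[int(idx)] for idx in output[0]]
for token, label in zip(tokens, labels):
    print(token, label)
```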
Some more useful functions can be found in our GitHub repository: https://github.com/aakorolyova/DeSpin-2.0
<h1>Training data</h1>
Training data can be found in https://github.com/aakorolyova/DeSpin-2.0/tree/main/data/Primary_Secondary_Outcomes
<h1>Training procedure</h1>
The model was fine-tuned using Huggingface Trainer API. Training scripts can be found in https://github.com/aakorolyova/DeSpin-2.0
<h1>Evaluation</h1>
Primary outcomes:
Precision: 92.22
Recall: 94.86
F1: 93.52
Secondary outcomes:
Precision: 91.43
Recall: 91.87
F1: 91.65
Overall precision: 91.79
Overall recall: 93.23
Overall F1: 92.51
|
aakorolyova/reported_outcome_extraction | efa98490b71a69fefd337ee4030d527dc98c3ac9 | 2022-05-25T19:31:52.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | aakorolyova | null | aakorolyova/reported_outcome_extraction | 10 | null | transformers | 11,879 | <h1>Model description</h1>
This is a fine-tuned BioBERT model for extracting reported outcomes (i.e. those for which results are presented) from articles reporting clinical trials.
This is the second version of the model; the original model development was reported in:
Anna Koroleva, Sanjay Kamath, Patrick Paroubek. Extracting primary and reported outcomes from articles reporting randomized controlled trials using pre-trained deep language representations. Preprint: https://easychair.org/publications/preprint/qpml
The original work was conducted within the scope of the "Assisted authoring for avoiding inadequate claims in scientific reporting" PhD project of the Methods for Research on Research (MiRoR, http://miror-ejd.eu/) program.
Model creator: Anna Koroleva
<h1>Intended uses & limitations</h1>
The model is intended to be used for extracting reported outcomes from texts of clinical trials.
The main limitation is that the model was trained on a fairly small sample of data annotated by a single annotator. Annotating more data or involving more annotators was not possible within the PhD project.
<h1>How to use</h1>
The model should be used with the BioBERT tokeniser. A sample code for getting model predictions is below:
```
import numpy as np
from transformers import AutoTokenizer
from transformers import AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained('dmis-lab/biobert-v1.1')
model = AutoModelForTokenClassification.from_pretrained(r'aakorolyova/reported_outcome_extraction')
text = """Compared with placebo plus chemotherapy, pembrolizumab plus chemotherapy improved overall survival in patients with previously untreated, advanced oesophageal squamous cell carcinoma and PD-L1 CPS of 10 or more, and overall survival and progression-free survival in patients with oesophageal squamous cell carcinoma, PD-L1 CPS of 10 or more, and in all randomised patients regardless of histology, and had a manageable safety profile in the total as-treated population."""
encoded_input = tokenizer(text, padding=True, truncation=True, max_length=2000, return_tensors='pt')
output = model(**encoded_input)['logits']
output = np.argmax(output.detach().numpy(), axis=2)
print(output)
```
Some more useful functions can be found in our GitHub repository: https://github.com/aakorolyova/DeSpin-2.0
<h1>Training data</h1>
Training data can be found in https://github.com/aakorolyova/DeSpin-2.0/tree/main/data/Reported_Outcomes
<h1>Training procedure</h1>
The model was fine-tuned using Huggingface Trainer API. Training scripts can be found in https://github.com/aakorolyova/DeSpin-2.0
<h1>Evaluation</h1>
Precision: 65.57%
Recall: 74.77%
F1: 69.87% |
calcworks/distilbert-base-uncased-distilled-clinc | 511c42472e28fa8f5c3d0fe022d59845c7ceff93 | 2022-05-19T17:03:17.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | calcworks | null | calcworks/distilbert-base-uncased-distilled-clinc | 10 | null | transformers | 11,880 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9409677419354838
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1004
- Accuracy: 0.9410
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9037 | 1.0 | 318 | 0.5745 | 0.7326 |
| 0.4486 | 2.0 | 636 | 0.2866 | 0.8819 |
| 0.2537 | 3.0 | 954 | 0.1794 | 0.9210 |
| 0.1762 | 4.0 | 1272 | 0.1387 | 0.9294 |
| 0.1419 | 5.0 | 1590 | 0.1210 | 0.9358 |
| 0.1247 | 6.0 | 1908 | 0.1119 | 0.9413 |
| 0.1138 | 7.0 | 2226 | 0.1067 | 0.9387 |
| 0.1078 | 8.0 | 2544 | 0.1026 | 0.9423 |
| 0.1043 | 9.0 | 2862 | 0.1010 | 0.9413 |
| 0.102 | 10.0 | 3180 | 0.1004 | 0.9410 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
phijve/bert-finetuned-ner | 71dcfe8caf362f8b57411f30d3bbe449206e3eba | 2022-05-20T16:14:44.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | phijve | null | phijve/bert-finetuned-ner | 10 | null | transformers | 11,881 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9396951623591783
- name: Recall
type: recall
value: 0.9545607539548974
- name: F1
type: f1
value: 0.947069627650693
- name: Accuracy
type: accuracy
value: 0.9872843939483135
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0596
- Precision: 0.9397
- Recall: 0.9546
- F1: 0.9471
- Accuracy: 0.9873
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0787 | 1.0 | 1756 | 0.0604 | 0.9250 | 0.9418 | 0.9333 | 0.9844 |
| 0.0318 | 2.0 | 3512 | 0.0578 | 0.9291 | 0.9502 | 0.9395 | 0.9860 |
| 0.0151 | 3.0 | 5268 | 0.0596 | 0.9397 | 0.9546 | 0.9471 | 0.9873 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
north/t5_xxl_NCC | 0e4e5c9add75f182e506d3593fba693b7cd9b288 | 2022-06-01T19:42:15.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"no",
"nn",
"sv",
"dk",
"is",
"en",
"dataset:nbailab/NCC",
"dataset:mc4",
"dataset:wikipedia",
"arxiv:2104.09617",
"arxiv:1910.10683",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | north | null | north/t5_xxl_NCC | 10 | null | transformers | 11,882 | ---
language:
- no
- nn
- sv
- dk
- is
- en
datasets:
- nbailab/NCC
- mc4
- wikipedia
widget:
- text: <extra_id_0> hver uke samles Regjeringens medlemmer til Statsråd på <extra_id_1>. Dette organet er øverste <extra_id_2> i Norge. For at møtet skal være <extra_id_3>, må over halvparten av regjeringens <extra_id_4> være til stede.
- text: På <extra_id_0> kan man <extra_id_1> en bok, og man kan også <extra_id_2> seg ned og lese den.
license: apache-2.0
---
# North-T5
The North-T5 models are a set of Norwegian sequence-to-sequence models. They build upon the flexible [T5](https://github.com/google-research/text-to-text-transfer-transformer) and [T5X](https://github.com/google-research/t5x) and can be used for a variety of NLP tasks ranging from classification to translation.
| |**Small** <br />_60M_|**Base** <br />_220M_|**Large** <br />_770M_|**XL** <br />_3B_|**XXL** <br />_11B_|
|:-----------|:------------:|:------------:|:------------:|:------------:|:------------:|
|North-T5‑NCC|[🤗](https://huggingface.co/north/t5_small_NCC)|[🤗](https://huggingface.co/north/t5_base_NCC)|[🤗](https://huggingface.co/north/t5_large_NCC)|[🤗](https://huggingface.co/north/t5_xl_NCC)|✔||
|North-T5‑NCC‑lm|[🤗](https://huggingface.co/north/t5_small_NCC_lm)|[🤗](https://huggingface.co/north/t5_base_NCC_lm)|[🤗](https://huggingface.co/north/t5_large_NCC_lm)|[🤗](https://huggingface.co/north/t5_xl_NCC_lm)|[🤗](https://huggingface.co/north/t5_xxl_NCC_lm)||
## T5X Checkpoint
The original T5X checkpoint is also available for this model in the [Google Cloud Bucket](gs://north-t5x/pretrained_models/xxl/norwegian_NCC_plus_English_t5x_xxl/).
## Performance
A thorough evaluation of the North-T5 models is planned, and I strongly recommend that external researchers make their own evaluations. The main advantage of the T5 models is their flexibility. Traditionally, encoder-only models (like BERT) excel in classification tasks, while seq-2-seq models are easier to train for tasks like translation and Q&A. Despite this, here are the results from using North-T5 on the political classification task explained [here](https://arxiv.org/abs/2104.09617).
|**Model:** | **F1** |
|:-----------|:------------|
|mT5-base|73.2 |
|mBERT-base|78.4 |
|NorBERT-base|78.2 |
|North-T5-small|80.5 |
|nb-bert-base|81.8 |
|North-T5-base|85.3 |
|North-T5-large|86.7 |
|North-T5-xl|88.7 |
|North-T5-xxl|91.8|
These are preliminary results. The [results](https://arxiv.org/abs/2104.09617) from the BERT models are based on the test results of the best model after 10 runs with early stopping and a decaying learning rate. The T5 results are the average of five runs on the evaluation set. The small model was trained for 10.000 steps, while the rest were trained for 5.000 steps. A fixed learning rate was used (no decay), with no early stopping, and the recommended rank classification was not used. We use a max sequence length of 512. This method simplifies the test setup and gives results that are easy to interpret. However, the results from the T5 models might actually be a bit sub-optimal.
## Sub-versions of North-T5
The following sub-versions are available. More versions will be made available shortly.
|**Model** | **Description** |
|:-----------|:-------|
|**North‑T5‑NCC** |This is the main version. It is trained for an additional 500.000 steps from the mT5 checkpoint. The training corpus is based on [the Norwegian Colossal Corpus (NCC)](https://huggingface.co/datasets/NbAiLab/NCC). In addition, data from MC4 and English Wikipedia are added.|
|**North‑T5‑NCC‑lm**|The model is pretrained for an additional 100k steps on the LM objective discussed in the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf). In a way this turns a masked language model into an autoregressive model. It also prepares the model for some tasks. For instance, when doing translation and NLI, it is well documented that there is a clear benefit in doing a step of unsupervised LM training before starting the finetuning.|
## Fine-tuned versions
As explained below, the model really needs to be fine-tuned for specific tasks. This procedure is relatively simple, and the models are not very sensitive to the hyper-parameters used. Usually a decent result can be obtained by using a fixed learning rate of 1e-3. Smaller versions of the model typically need to be trained for a longer time. It is easy to train the base models in a Google Colab; a bare-bones finetuning sketch is shown below.
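Here is that bare-bones finetuning sketch in PyTorch (everything below — the toy Bokmål→Nynorsk pairs, batch size and epochs — is an illustrative assumption; only the fixed 1e-3 learning rate comes from the paragraph above):
```python
# Illustrative finetuning sketch; replace the toy dataset with your own task data.
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM, DataCollatorForSeq2Seq,
                          Seq2SeqTrainer, Seq2SeqTrainingArguments)

model_name = "north/t5_base_NCC"  # the base model is the recommended starting point
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Toy Bokmål -> Nynorsk pairs, purely for illustration.
raw = Dataset.from_dict({
    "source": ["Dette er en test.", "Gutten leser en bok."],
    "target": ["Dette er ein test.", "Guten les ei bok."],
})

def tokenize(batch):
    model_inputs = tokenizer(batch["source"], max_length=512, truncation=True)
    # For T5 the same tokenizer can encode the targets directly.
    model_inputs["labels"] = tokenizer(batch["target"], max_length=512, truncation=True)["input_ids"]
    return model_inputs

train_ds = raw.map(tokenize, batched=True, remove_columns=raw.column_names)

args = Seq2SeqTrainingArguments(
    output_dir="north-t5-finetuned",
    learning_rate=1e-3,                 # fixed learning rate, as suggested above
    per_device_train_batch_size=8,
    num_train_epochs=3,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```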
Since some people really want to see what the models are capable of, without going through the training procedure, I provide a couple of test models. These models are by no means optimised, and are just for demonstrating how the North-T5 models can be used.
* Nynorsk Translator. Translates any text from Norwegian Bokmål to Norwegian Nynorsk. Please test the [Streamlit-demo](https://huggingface.co/spaces/north/Nynorsk) and the [HuggingFace repo](https://huggingface.co/north/demo-nynorsk-base)
* DeUnCaser. The model adds punctuation, spaces and capitalisation back into the text. The input needs to be in Norwegian but does not have to be divided into sentences or have proper capitalisation of words. You can even remove the spaces from the text, and make the model reconstruct it. It can be tested with the [Streamlit-demo](https://huggingface.co/spaces/north/DeUnCaser) and directly on the [HuggingFace repo](https://huggingface.co/north/demo-deuncaser-base)
## Training details
All models are built using the Flax-based T5X codebase, and all models are initiated with the mT5 pretrained weights. The models are trained using the T5.1.1 training regime, where they are only trained on an unsupervised masking task. This also means that the models (contrary to the original T5) need to be finetuned to solve specific tasks. This finetuning is, however, usually not very compute intensive, and in most cases it can be performed even with free online training resources.
All the main model versions are trained for 500.000 steps after the mT5 checkpoint (1.000.000 steps). They are trained mainly on a 75GB corpus, consisting of NCC, Common Crawl and some additional high-quality English text (Wikipedia). The corpus is roughly 80% Norwegian text. Additional languages are added to retain some of the multilingual capabilities, making the model both more robust to new words/concepts and also more suited as a basis for translation tasks.
While the huge models almost always will give the best results, they are also both more difficult and more expensive to finetune. I strongly recommend starting by finetuning a base model. The base models can easily be finetuned on a standard graphics card or a free TPU through Google Colab.
All models were trained on TPUs. The largest XXL model was trained on a TPU v4-64, the XL model on a TPU v4-32, the Large model on a TPU v4-16 and the rest on TPU v4-8. Since it is possible to reduce the batch size during fine-tuning, it is also possible to finetune on slightly smaller hardware. The rule of thumb is that you can go "one step down" when finetuning. The large models still require access to significant hardware, even for finetuning.
## Formats
All models are trained using the Flax-based T5X library. The original checkpoints are available in T5X format and can be used for both finetuning and inference. All models, except the XXL model, are also converted to Transformers/HuggingFace. In this framework, the models can be loaded for finetuning or inference in Flax, PyTorch and TensorFlow format; a minimal PyTorch loading sketch is shown below.
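A minimal PyTorch loading sketch (an illustration only; the base checkpoint is used because the XXL model is not converted, and the masked-span input mirrors the widget examples above):
```python
# Minimal inference sketch: let the model fill in masked sentinel spans.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "north/t5_base_NCC"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

text = "På <extra_id_0> kan man <extra_id_1> en bok, og man kan også <extra_id_2> seg ned og lese den."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```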
## Future
I will continue to train and release additional models in this set. Which models are added depends on the feedback from users.
## Thanks
This release would not have been possible without getting support and hardware from the [TPU Research Cloud](https://sites.research.google/trc/about/) at Google Research. Both the TPU Research Cloud Team and the T5X Team has provided extremely useful support for getting this running.
Freddy Wetjen at the National Library of Norway has been of tremendous help in generating the original NCC corpus, and has also contributed to generate the collated coprus used for this training. In addition he has been a dicussion partner in the creation of these models.
Also thanks to Stefan Schweter for writing the [script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/convert_t5x_checkpoint_to_flax.py) for converting these models from T5X to HuggingFace and to Javier de la Rosa for writing the dataloader for reading the HuggingFace Datasets in T5X.
## Warranty
Use at your own risk. The models have not yet been thougroughly tested, and may contain both errors and biases.
## Contact/About
These models were trained by Per E Kummervold. Please contact me on [email protected].
|
connectivity/feather_berts_0 | 94d37ab096caabdeadb48adeac8d92c0b30363b7 | 2022-05-21T14:26:18.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | connectivity | null | connectivity/feather_berts_0 | 10 | null | transformers | 11,883 | Entry not found |
lucifermorninstar011/autotrain-lucifer_name-894029080 | 6580bb759039290e191215b65c2fce7de361ae37 | 2022-05-21T23:38:09.000Z | [
"pytorch",
"bert",
"token-classification",
"en",
"dataset:lucifermorninstar011/autotrain-data-lucifer_name-980af516",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
]
| token-classification | false | lucifermorninstar011 | null | lucifermorninstar011/autotrain-lucifer_name-894029080 | 10 | null | transformers | 11,884 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- lucifermorninstar011/autotrain-data-lucifer_name-980af516
co2_eq_emissions: 0.9017791642156402
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 894029080
- CO2 Emissions (in grams): 0.9017791642156402
## Validation Metrics
- Loss: 0.06416810303926468
- Accuracy: 0.975037269594738
- Precision: 0.845205809019728
- Recall: 0.8450117531296124
- F1: 0.8451087699347763
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/lucifermorninstar011/autotrain-lucifer_name-894029080
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("lucifermorninstar011/autotrain-lucifer_name-894029080", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("lucifermorninstar011/autotrain-lucifer_name-894029080", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
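# A possible continuation (not part of the original card): turn the raw logits
# into per-token label strings using the fine-tuned model's id2label mapping.
predicted_ids = outputs.logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label_id in zip(tokens, predicted_ids.tolist()):
    print(token, model.config.id2label[label_id])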
``` |
danieleV9H/hubert-base-libri-clean-ft100h-v3 | f76c1e266571407584957032da6ab7f6e6d543f0 | 2022-05-26T10:42:52.000Z | [
"pytorch",
"tensorboard",
"hubert",
"automatic-speech-recognition",
"en",
"dataset:librispeech_asr",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | danieleV9H | null | danieleV9H/hubert-base-libri-clean-ft100h-v3 | 10 | null | transformers | 11,885 | ---
license: apache-2.0
tags:
- generated_from_trainer
- hf-asr-leaderboard
datasets:
- librispeech_asr
model-index:
- name: hubert-base-libri-clean-ft100h-v3
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: '8.1938'
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: '16.9783'
language:
- en
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert-base-libri-clean-ft100h-v3
This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1120
- Wer: 0.1332
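
Below is a minimal, hedged inference sketch (not part of the generated card). The file name is a placeholder, and the audio is assumed to be 16 kHz mono, matching the LibriSpeech training data.

```python
# Sketch only: transcribe an audio file with the fine-tuned HuBERT CTC checkpoint.
# "sample.flac" is a placeholder; decoding a non-wav file requires ffmpeg.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="danieleV9H/hubert-base-libri-clean-ft100h-v3",
)
print(asr("sample.flac")["text"])
```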
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 600
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 5.201 | 0.14 | 250 | 3.9799 | 1.0 |
| 2.8893 | 0.28 | 500 | 3.4838 | 1.0 |
| 2.8603 | 0.42 | 750 | 3.3505 | 1.0 |
| 2.7216 | 0.56 | 1000 | 2.1194 | 0.9989 |
| 1.3372 | 0.7 | 1250 | 0.8124 | 0.6574 |
| 0.8238 | 0.84 | 1500 | 0.5712 | 0.5257 |
| 0.6449 | 0.98 | 1750 | 0.4442 | 0.4428 |
| 0.5241 | 1.12 | 2000 | 0.3442 | 0.3672 |
| 0.4458 | 1.26 | 2250 | 0.2850 | 0.3186 |
| 0.3959 | 1.4 | 2500 | 0.2507 | 0.2882 |
| 0.3641 | 1.54 | 2750 | 0.2257 | 0.2637 |
| 0.3307 | 1.68 | 3000 | 0.2044 | 0.2434 |
| 0.2996 | 1.82 | 3250 | 0.1969 | 0.2313 |
| 0.2794 | 1.96 | 3500 | 0.1823 | 0.2193 |
| 0.2596 | 2.1 | 3750 | 0.1717 | 0.2096 |
| 0.2563 | 2.24 | 4000 | 0.1653 | 0.2000 |
| 0.2532 | 2.38 | 4250 | 0.1615 | 0.1971 |
| 0.2376 | 2.52 | 4500 | 0.1559 | 0.1916 |
| 0.2341 | 2.66 | 4750 | 0.1494 | 0.1855 |
| 0.2102 | 2.8 | 5000 | 0.1464 | 0.1781 |
| 0.2222 | 2.94 | 5250 | 0.1399 | 0.1732 |
| 0.2081 | 3.08 | 5500 | 0.1450 | 0.1707 |
| 0.1963 | 3.22 | 5750 | 0.1337 | 0.1655 |
| 0.2107 | 3.36 | 6000 | 0.1344 | 0.1633 |
| 0.1866 | 3.5 | 6250 | 0.1339 | 0.1611 |
| 0.186 | 3.64 | 6500 | 0.1311 | 0.1563 |
| 0.1703 | 3.78 | 6750 | 0.1307 | 0.1537 |
| 0.1819 | 3.92 | 7000 | 0.1277 | 0.1555 |
| 0.176 | 4.06 | 7250 | 0.1280 | 0.1515 |
| 0.1837 | 4.2 | 7500 | 0.1249 | 0.1504 |
| 0.1678 | 4.34 | 7750 | 0.1236 | 0.1480 |
| 0.1624 | 4.48 | 8000 | 0.1194 | 0.1456 |
| 0.1631 | 4.62 | 8250 | 0.1215 | 0.1462 |
| 0.1736 | 4.76 | 8500 | 0.1192 | 0.1451 |
| 0.1752 | 4.9 | 8750 | 0.1206 | 0.1432 |
| 0.1578 | 5.04 | 9000 | 0.1151 | 0.1415 |
| 0.1537 | 5.18 | 9250 | 0.1185 | 0.1402 |
| 0.1771 | 5.33 | 9500 | 0.1165 | 0.1414 |
| 0.1481 | 5.47 | 9750 | 0.1152 | 0.1413 |
| 0.1509 | 5.61 | 10000 | 0.1152 | 0.1382 |
| 0.146 | 5.75 | 10250 | 0.1133 | 0.1385 |
| 0.1464 | 5.89 | 10500 | 0.1139 | 0.1371 |
| 0.1442 | 6.03 | 10750 | 0.1162 | 0.1365 |
| 0.128 | 6.17 | 11000 | 0.1147 | 0.1371 |
| 0.1381 | 6.31 | 11250 | 0.1148 | 0.1378 |
| 0.1343 | 6.45 | 11500 | 0.1113 | 0.1363 |
| 0.1325 | 6.59 | 11750 | 0.1134 | 0.1355 |
| 0.1442 | 6.73 | 12000 | 0.1142 | 0.1358 |
| 0.1286 | 6.87 | 12250 | 0.1133 | 0.1352 |
| 0.1349 | 7.01 | 12500 | 0.1129 | 0.1344 |
| 0.1338 | 7.15 | 12750 | 0.1131 | 0.1328 |
| 0.1403 | 7.29 | 13000 | 0.1124 | 0.1338 |
| 0.1314 | 7.43 | 13250 | 0.1141 | 0.1335 |
| 0.1283 | 7.57 | 13500 | 0.1124 | 0.1332 |
| 0.1347 | 7.71 | 13750 | 0.1107 | 0.1332 |
| 0.1195 | 7.85 | 14000 | 0.1119 | 0.1332 |
| 0.1326 | 7.99 | 14250 | 0.1120 | 0.1332 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
DuboiJ/finetuning-sentiment-model-3000-samples | 086462f963b4a793f6aa71ee120a7441e8bfef02 | 2022-05-25T13:48:07.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | DuboiJ | null | DuboiJ/finetuning-sentiment-model-3000-samples | 10 | null | transformers | 11,886 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8633333333333333
- name: F1
type: f1
value: 0.8637873754152824
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3211
- Accuracy: 0.8633
- F1: 0.8638
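
As a rough usage sketch (not part of the generated card), the checkpoint can be queried through the text-classification pipeline; the label names returned (e.g. `LABEL_0`/`LABEL_1`) depend on how the classification head was configured and are not documented here.

```python
# Sketch only: run the fine-tuned sentiment checkpoint on a single review.
from transformers import pipeline

sentiment = pipeline(
    "text-classification",
    model="DuboiJ/finetuning-sentiment-model-3000-samples",
)
print(sentiment("This movie was surprisingly good."))
```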
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
orzhan/rut5-base-detox-v2 | dd6cb8d1ea4d90179053779fe377139f94e23018 | 2022-06-11T07:18:47.000Z | [
"pytorch",
"t5",
"text2text-generation",
"ru",
"transformers",
"PyTorch",
"Transformers",
"autotrain_compatible"
]
| text2text-generation | false | orzhan | null | orzhan/rut5-base-detox-v2 | 10 | null | transformers | 11,887 | ---
language:
- ru
tags:
- PyTorch
- Transformers
---
# rut5-base-detox-v2
The model was fine-tuned from sberbank-ai/ruT5-base on a parallel detoxification corpus.
* Task: `text2text generation`
* Type: `encoder-decoder`
* Tokenizer: `bpe`
* Dict size: `32 101`
* Num Parameters: `222 M`
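
A rough usage sketch (not from the original card) is shown below; the input sentence is a placeholder and the generation settings are illustrative only.

```python
# Sketch only: rewrite a Russian sentence with the detoxification model.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("orzhan/rut5-base-detox-v2")
model = AutoModelForSeq2SeqLM.from_pretrained("orzhan/rut5-base-detox-v2")

text = "Это пример предложения для детоксикации."  # placeholder input
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=64, num_beams=5)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```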
|
Hamda/test-1-finetuned-AraBART | f2b89a45a72e1c5f34484b86a1360eca3da9ccfb | 2022-05-29T12:21:03.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Hamda | null | Hamda/test-1-finetuned-AraBART | 10 | null | transformers | 11,888 | Entry not found |
DuskSigma/DialogGPTHomerSimpson | 71e93f4062332eb2b51f6fa8795971713d19fdc4 | 2022-05-31T00:56:25.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | DuskSigma | null | DuskSigma/DialogGPTHomerSimpson | 10 | 1 | transformers | 11,889 | ---
tags:
- conversational
---
# Homer Simpson DialogGPT Model |
roshnir/mBert-finetuned-mlqa-dev-hi | a1e4855a3d4f35d6ec107ab5ea18837f29c22fb6 | 2022-06-01T19:24:41.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | roshnir | null | roshnir/mBert-finetuned-mlqa-dev-hi | 10 | null | transformers | 11,890 | Entry not found |
kktoto/tiny_kt_punctuator | 5cb023fee2e131ad38241a6b0e562b6030c57a9a | 2022-06-02T02:04:30.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| token-classification | false | kktoto | null | kktoto/tiny_kt_punctuator | 10 | null | transformers | 11,891 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: tiny_kt_punctuator
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny_kt_punctuator
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1424
- Precision: 0.6287
- Recall: 0.5781
- F1: 0.6023
- Accuracy: 0.9476
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
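
For illustration only, the hyperparameters listed above map onto `transformers.TrainingArguments` roughly as follows; the dataset, model and `Trainer` wiring are omitted, and the `output_dir` is a placeholder.

```python
# Sketch only: how the listed hyperparameters translate to TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="tiny_kt_punctuator",   # placeholder output directory
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer defaults.
)
```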
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1621 | 1.0 | 5561 | 0.1508 | 0.6138 | 0.5359 | 0.5722 | 0.9450 |
| 0.1519 | 2.0 | 11122 | 0.1439 | 0.6279 | 0.5665 | 0.5956 | 0.9471 |
| 0.1496 | 3.0 | 16683 | 0.1424 | 0.6287 | 0.5781 | 0.6023 | 0.9476 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Jeevesh8/init_bert_ft_qqp-22 | cad4666abbe0793ea90ad26a0a72c38ea5678ad9 | 2022-06-02T12:40:51.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-22 | 10 | null | transformers | 11,892 | Entry not found |
Jeevesh8/init_bert_ft_qqp-59 | 96d09ede67b2103851c3990670febdba6ff1ba80 | 2022-06-02T12:39:31.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-59 | 10 | null | transformers | 11,893 | Entry not found |
Jeevesh8/init_bert_ft_qqp-48 | e8b3d062302ce6d91c785b14b848974ecb336fab | 2022-06-02T12:39:35.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-48 | 10 | null | transformers | 11,894 | Entry not found |
Jeevesh8/init_bert_ft_qqp-53 | 7f3a8e70c3a817a9793e402af7a6fa4cdc6d1cf4 | 2022-06-02T12:39:35.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-53 | 10 | null | transformers | 11,895 | Entry not found |
kaniku/xlm-roberta-large-indonesian-NER-finetuned-ner | faf781e70dd95b980d0b26de96afb84204c97357 | 2022-06-04T04:54:01.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| token-classification | false | kaniku | null | kaniku/xlm-roberta-large-indonesian-NER-finetuned-ner | 10 | null | transformers | 11,896 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: xlm-roberta-large-indonesian-NER-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large-indonesian-NER-finetuned-ner
This model is a fine-tuned version of [cahya/xlm-roberta-large-indonesian-NER](https://huggingface.co/cahya/xlm-roberta-large-indonesian-NER) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0489
- Precision: 0.9254
- Recall: 0.9394
- F1: 0.9324
- Accuracy: 0.9851
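
An illustrative sketch (not part of the generated card): the checkpoint can be used as an Indonesian NER tagger through the token-classification pipeline. The example sentence is arbitrary.

```python
# Sketch only: tag entities in an Indonesian sentence.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="kaniku/xlm-roberta-large-indonesian-NER-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("Joko Widodo lahir di Surakarta, Indonesia."))
```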
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0496 | 1.0 | 1767 | 0.0489 | 0.9254 | 0.9394 | 0.9324 | 0.9851 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
momo/KcBERT-base_Hate_speech_Privacy_Detection | 7a7c379e0022175a28f7d416e517b834cbdd33d6 | 2022-06-04T16:20:27.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:apache-2.0"
]
| text-classification | false | momo | null | momo/KcBERT-base_Hate_speech_Privacy_Detection | 10 | null | transformers | 11,897 | ---
license: apache-2.0
---
|
mrcoombes/distilbert-wikipedia-pokemon | 5a244f3ecddeb268e2f6cf3730cae0b2543d18a6 | 2022-06-05T15:28:03.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | mrcoombes | null | mrcoombes/distilbert-wikipedia-pokemon | 10 | null | transformers | 11,898 | DistilBERT pokemon model (uncased)
This model is a distilled version of the BERT base model. It was introduced in this paper. The code for the distillation process can be found here. This model is uncased: it does not make a difference between english and English.
Model description
The DistilBERT Wikipedia Pokemon model has been fine-tuned for sequence classification using data from the notes field of Wikipedia's Pokémon tables, [such as this one](https://en.wikipedia.org/wiki/List_of_generation_III_Pok%C3%A9mon).
Given a Pokédex entry as input, the model returns the most likely Pokémon type for the Pokémon being described.
--
DistilBERT is a transformers model, smaller and faster than BERT, which was pretrained on the same corpus in a self-supervised fashion, using the BERT base model as a teacher. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts using the BERT base model. More precisely, it was pretrained with three objectives:
Distillation loss: the model was trained to return the same probabilities as the BERT base model.
Masked language modeling (MLM): this is part of the original training loss of the BERT base model. When taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, or from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
Cosine embedding loss: the model was also trained to generate hidden states as close as possible as the BERT base model.
This way, the model learns the same inner representation of the English language as its teacher model, while being faster for inference or downstream tasks.
Intended uses & limitations
Text Classification.
How to use
You can use this model directly with a pipeline for text classification:
```python
from transformers import pipeline
classifier = pipeline('text-classification', model='mrcoombes/distilbert-wikipedia-pokemon')
classifier("This pokemon likes to attend aquatic parties on midnight rooftops. Their best friend is a dolphin.")
```
Metrics:
Accuracy 47%.
Limitations, Next Steps and Feedback:
This model could be improved by using over-sampling and under-sampling to reduce class imbalance. Accuracy for under-represented classes, such as dragon-type Pokémon, is lower than for the well-represented classes, where the model performs well.
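One possible mitigation, sketched below under the assumption that a PyTorch cross-entropy loss is used during fine-tuning, is to weight the loss by inverse class frequency; the label list here is a toy example.

```python
# Sketch only: inverse-frequency class weights for an imbalanced label set.
import torch
from collections import Counter

labels = ["water", "fire", "water", "grass", "dragon"]  # toy training labels
counts = Counter(labels)
classes = sorted(counts)
weights = torch.tensor([len(labels) / (len(classes) * counts[c]) for c in classes])
loss_fn = torch.nn.CrossEntropyLoss(weight=weights)  # pass per-class weights to the loss
```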
Happy Classifying 🤗 |
juancavallotti/t5-grammar-corruption-edits | f3d9f787914edd4c0026ed4c70ec0894685889b9 | 2022-06-07T00:54:34.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | juancavallotti | null | juancavallotti/t5-grammar-corruption-edits | 10 | null | transformers | 11,899 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-grammar-corruption-edits
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-grammar-corruption-edits
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|