modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (list) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
bhavikardeshna/xlm-roberta-base-german | 530e8c3dd543800078c3dfbfcde30c883480f258 | 2021-12-21T11:40:35.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"arxiv:2112.09866",
"transformers",
"autotrain_compatible"
]
| question-answering | false | bhavikardeshna | null | bhavikardeshna/xlm-roberta-base-german | 15 | null | transformers | 9,500 | # BibTeX entry and citation info
```
@misc{pandya2021cascading,
title={Cascading Adaptors to Leverage English Data to Improve Performance of Question Answering for Low-Resource Languages},
author={Hariom A. Pandya and Bhavik Ardeshna and Dr. Brijesh S. Bhatt},
year={2021},
eprint={2112.09866},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
bs-modeling-metadata/website_metadata_exp_1_model_100k_checkpoint | 875249e5fc9357a93f6eab4461688b3ac18d40dc | 2021-10-07T13:32:07.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | bs-modeling-metadata | null | bs-modeling-metadata/website_metadata_exp_1_model_100k_checkpoint | 15 | 1 | transformers | 9,501 | Entry not found |
bsingh/roberta_goEmotion | af498dcbab4ef49f7163cac455aa0d34ae7d25d8 | 2021-10-11T00:26:09.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:go_emotions",
"transformers",
"emotions",
"license:mit"
]
| text-classification | false | bsingh | null | bsingh/roberta_goEmotion | 15 | null | transformers | 9,502 | ---
language: en
tags:
- text-classification
- pytorch
- roberta
- emotions
datasets:
- go_emotions
license: mit
widget:
- text: "I am not feeling well today."
---
## This model is trained on the GoEmotions dataset, which contains 58k Reddit comments labeled with 28 emotions
- admiration, amusement, anger, annoyance, approval, caring, confusion, curiosity, desire, disappointment, disapproval, disgust, embarrassment, excitement, fear, gratitude, grief, joy, love, nervousness, optimism, pride, realization, relief, remorse, sadness, surprise + neutral
## Training details:
- The training script is provided here: https://github.com/bsinghpratap/roberta_train_goEmotion
- Please feel free to open an issue in the repo if you have trouble running the model; I will try to respond as soon as possible.
- The model works well on most of the emotions except: 'desire', 'disgust', 'embarrassment', 'excitement', 'fear', 'grief', 'nervousness', 'pride', 'relief', 'remorse', and 'surprise'.
- I'll try to fine-tune the model further and will update this card if RoBERTa achieves better performance.
- Each text datapoint can have more than one label. Most of the training set has a single label: Counter({1: 36308, 2: 6541, 3: 532, 4: 28, 5: 1}), so currently I just use the first label for each datapoint. This is not ideal, but it does a decent job.
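## Usage
A minimal inference sketch (not part of the original card; it assumes the checkpoint works with the standard `transformers` text-classification pipeline and reuses the widget example above):
```python
from transformers import pipeline

# Load the fine-tuned RoBERTa emotion classifier from the Hub
classifier = pipeline("text-classification", model="bsingh/roberta_goEmotion")

# Returns the top predicted emotion label with its score
print(classifier("I am not feeling well today."))
```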
## Model Performance
| Emotion | GoEmotions Paper | RoBERTa | Support |
|:---------------|:-----------------|:--------|:--------|
| admiration | 0.65 | 0.62 | 504 |
| amusement | 0.80 | 0.78 | 252 |
| anger | 0.47 | 0.44 | 197 |
| annoyance | 0.34 | 0.22 | 286 |
| approval | 0.36 | 0.31 | 318 |
| caring | 0.39 | 0.24 | 114 |
| confusion | 0.37 | 0.29 | 139 |
| curiosity | 0.54 | 0.48 | 233 |
| disappointment | 0.28 | 0.18 | 127 |
| disapproval | 0.39 | 0.26 | 220 |
| gratitude | 0.86 | 0.84 | 288 |
| joy | 0.51 | 0.47 | 116 |
| love | 0.78 | 0.68 | 169 |
| neutral | 0.68 | 0.61 | 1606 |
| optimism | 0.51 | 0.52 | 120 |
| realization | 0.21 | 0.15 | 109 |
| sadness | 0.49 | 0.42 | 108 |
|
cardiffnlp/bertweet-base-stance-atheism | 8a4275b426ee8d4136b36ed826bd3feb2dc41f3c | 2021-05-20T14:53:17.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | cardiffnlp | null | cardiffnlp/bertweet-base-stance-atheism | 15 | null | transformers | 9,503 | |
chrommium/rubert-base-cased-sentence-finetuned-headlines_X | 3ff3429c5539d43e2a02328421cf8204c67695e4 | 2021-09-16T00:34:06.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | chrommium | null | chrommium/rubert-base-cased-sentence-finetuned-headlines_X | 15 | null | transformers | 9,504 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: rubert-base-cased-sentence-finetuned-headlines_X
results:
- task:
name: Text Classification
type: text-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.952
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rubert-base-cased-sentence-finetuned-headlines_X
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased-sentence](https://huggingface.co/DeepPavlov/rubert-base-cased-sentence) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2535
- Accuracy: 0.952
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
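A sketch of how these hyperparameters map onto `transformers.TrainingArguments` (not part of the original card; the output directory is a placeholder and the actual training script is not provided here):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="rubert-base-cased-sentence-finetuned-headlines_X",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,          # Adam betas and epsilon as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```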
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 157 | 0.2759 | 0.912 |
| No log | 2.0 | 314 | 0.2538 | 0.936 |
| No log | 3.0 | 471 | 0.2556 | 0.945 |
| 0.1908 | 4.0 | 628 | 0.2601 | 0.95 |
| 0.1908 | 5.0 | 785 | 0.2535 | 0.952 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
classla/bcms-bertic-frenk-hate | f41364e4a917e75feddf03e7525b9aa001650aca | 2022-06-01T09:31:46.000Z | [
"pytorch",
"bert",
"text-classification",
"hr",
"arxiv:1906.02045",
"transformers",
"hate-speech"
]
| text-classification | false | classla | null | classla/bcms-bertic-frenk-hate | 15 | null | transformers | 9,505 | ---
language: "hr"
tags:
- text-classification
- hate-speech
widget:
- text: "Potpredsjednik Vlade i ministar branitelja Tomo Medved komentirao je Vladine planove za zakonsku zabranu pozdrava 'za dom spremni'."
---
# bcms-bertic-frenk-hate
Text classification model based on [`classla/bcms-bertic`](https://huggingface.co/classla/bcms-bertic) and fine-tuned on the [FRENK dataset](https://www.clarin.si/repository/xmlui/handle/11356/1433), which comprises LGBT and migrant hate speech. Only the Croatian subset of the data was used for fine-tuning, and the dataset has been relabeled for binary classification (offensive or acceptable).
## Fine-tuning hyperparameters
Fine-tuning was performed with `simpletransformers`. Beforehand a brief hyperparameter optimisation was performed and the presumed optimal hyperparameters are:
```python
model_args = {
"num_train_epochs": 12,
"learning_rate": 1e-5,
"train_batch_size": 74}
```
## Performance
The same pipeline was run with two other transformer models and `fasttext` for comparison. Accuracy and macro F1 scores were recorded for each of the 6 fine-tuning sessions and analyzed afterwards.
| model | average accuracy | average macro F1 |
|----------------------------|------------------|------------------|
| bcms-bertic-frenk-hate | 0.8313 | 0.8219 |
| EMBEDDIA/crosloengual-bert | 0.8054 | 0.796 |
| xlm-roberta-base | 0.7175 | 0.7049 |
| fasttext | 0.771 | 0.754 |
From the recorded accuracies and macro F1 scores, p-values were also calculated (a sketch of this computation is given after the comparison tables below):
Comparison with `crosloengual-bert`:
| test | accuracy p-value | macro F1 p-value |
|----------------|------------------|------------------|
| Wilcoxon | 0.00781 | 0.00781 |
| Mann-Whitney | 0.00108 | 0.00108 |
| Student t-test | 2.43e-10 | 1.27e-10 |
Comparison with `xlm-roberta-base`:
| test | accuracy p-value | macro F1 p-value |
|----------------|------------------|------------------|
| Wilcoxon | 0.00781 | 0.00781 |
| Mann-Whitney | 0.00107 | 0.00108 |
| Student t-test | 4.83e-11 | 5.61e-11 |
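The tests above correspond to standard `scipy.stats` functions; a minimal sketch of the computation follows (the per-run scores below are illustrative placeholders, not the actual recorded values):
```python
from scipy import stats

# Placeholder accuracies from the 6 fine-tuning sessions (illustrative only)
bertic_scores = [0.83, 0.84, 0.82, 0.83, 0.84, 0.83]
baseline_scores = [0.80, 0.81, 0.80, 0.81, 0.80, 0.81]

print(stats.wilcoxon(bertic_scores, baseline_scores))      # paired, non-parametric
print(stats.mannwhitneyu(bertic_scores, baseline_scores))  # unpaired, non-parametric
print(stats.ttest_ind(bertic_scores, baseline_scores))     # Student t-test (assuming independent samples)
```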
## Use examples
```python
from simpletransformers.classification import ClassificationModel
model = ClassificationModel(
"bert", "5roop/bcms-bertic-frenk-hate", use_cuda=True,
)
predictions, logit_output = model.predict(['Ne odbacujem da će RH primiti još migranata iz Afganistana, no neće biti novog vala',
"Potpredsjednik Vlade i ministar branitelja Tomo Medved komentirao je Vladine planove za zakonsku zabranu pozdrava 'za dom spremni' "])
predictions
### Output:
### array([0, 0])
```
## Citation
If you use the model, please cite the following paper on which the original model is based:
```
@inproceedings{ljubesic-lauc-2021-bertic,
title = "{BERT}i{\'c} - The Transformer Language Model for {B}osnian, {C}roatian, {M}ontenegrin and {S}erbian",
author = "Ljube{\v{s}}i{\'c}, Nikola and Lauc, Davor",
booktitle = "Proceedings of the 8th Workshop on Balto-Slavic Natural Language Processing",
month = apr,
year = "2021",
address = "Kiyv, Ukraine",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.bsnlp-1.5",
pages = "37--42",
}
```
and the dataset used for fine-tuning:
```
@misc{ljubešić2019frenk,
title={The FRENK Datasets of Socially Unacceptable Discourse in Slovene and English},
author={Nikola Ljubešić and Darja Fišer and Tomaž Erjavec},
year={2019},
eprint={1906.02045},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/1906.02045}
}
```
|
crystina-z/monoELECTRA_LCE_nneg31 | 47923dbe2fb0dedea1c1572940b1289806838a92 | 2022-02-11T18:02:52.000Z | [
"pytorch",
"tf",
"electra",
"text-classification",
"transformers"
]
| text-classification | false | crystina-z | null | crystina-z/monoELECTRA_LCE_nneg31 | 15 | null | transformers | 9,506 | Entry not found |
dbmdz/electra-base-german-europeana-cased-generator | 195c6427c576e68a7c2a97de2e20421fc506c58c | 2020-07-26T00:53:55.000Z | [
"pytorch",
"tf",
"electra",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | dbmdz | null | dbmdz/electra-base-german-europeana-cased-generator | 15 | null | transformers | 9,507 | Entry not found |
fabriceyhc/bert-base-uncased-yahoo_answers_topics | 968176fc24a2eb73cac26ab4312d8b22da98486a | 2021-09-21T00:54:22.000Z | [
"pytorch",
"bert",
"text-classification",
"dataset:yahoo_answers_topics",
"transformers",
"generated_from_trainer",
"sibyl",
"license:apache-2.0",
"model-index"
]
| text-classification | false | fabriceyhc | null | fabriceyhc/bert-base-uncased-yahoo_answers_topics | 15 | 1 | transformers | 9,508 | ---
license: apache-2.0
tags:
- generated_from_trainer
- sibyl
datasets:
- yahoo_answers_topics
metrics:
- accuracy
model-index:
- name: bert-base-uncased-yahoo_answers_topics
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: yahoo_answers_topics
type: yahoo_answers_topics
args: yahoo_answers_topics
metrics:
- name: Accuracy
type: accuracy
value: 0.7499166666666667
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-yahoo_answers_topics
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the yahoo_answers_topics dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8092
- Accuracy: 0.7499
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 86625
- training_steps: 866250
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 2.162 | 0.01 | 2000 | 1.7444 | 0.5681 |
| 1.3126 | 0.02 | 4000 | 1.0081 | 0.7054 |
| 0.9592 | 0.03 | 6000 | 0.9021 | 0.7234 |
| 0.8903 | 0.05 | 8000 | 0.8827 | 0.7276 |
| 0.8685 | 0.06 | 10000 | 0.8540 | 0.7341 |
| 0.8422 | 0.07 | 12000 | 0.8547 | 0.7365 |
| 0.8535 | 0.08 | 14000 | 0.8264 | 0.7372 |
| 0.8178 | 0.09 | 16000 | 0.8331 | 0.7389 |
| 0.8325 | 0.1 | 18000 | 0.8242 | 0.7411 |
| 0.8181 | 0.12 | 20000 | 0.8356 | 0.7437 |
| 0.8171 | 0.13 | 22000 | 0.8090 | 0.7451 |
| 0.8092 | 0.14 | 24000 | 0.8469 | 0.7392 |
| 0.8057 | 0.15 | 26000 | 0.8185 | 0.7478 |
| 0.8085 | 0.16 | 28000 | 0.8090 | 0.7467 |
| 0.8229 | 0.17 | 30000 | 0.8225 | 0.7417 |
| 0.8151 | 0.18 | 32000 | 0.8262 | 0.7419 |
| 0.81 | 0.2 | 34000 | 0.8149 | 0.7383 |
| 0.8073 | 0.21 | 36000 | 0.8225 | 0.7441 |
| 0.816 | 0.22 | 38000 | 0.8037 | 0.744 |
| 0.8217 | 0.23 | 40000 | 0.8409 | 0.743 |
| 0.82 | 0.24 | 42000 | 0.8286 | 0.7385 |
| 0.8101 | 0.25 | 44000 | 0.8282 | 0.7413 |
| 0.8254 | 0.27 | 46000 | 0.8170 | 0.7414 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.7.1
- Datasets 1.6.1
- Tokenizers 0.10.3
|
fbaigt/procbert | 20814e122765866e213447ebe2618d2f0b90cbf1 | 2021-11-08T15:08:01.000Z | [
"pytorch",
"bert",
"feature-extraction",
"en",
"dataset:pubmed",
"dataset:chemical patent",
"dataset:cooking recipe",
"arxiv:2109.04711",
"transformers"
]
| feature-extraction | false | fbaigt | null | fbaigt/procbert | 15 | 1 | transformers | 9,509 | ---
language:
- en
datasets:
- pubmed
- chemical patent
- cooking recipe
---
## ProcBERT
ProcBERT is a pre-trained language model specifically for procedural text. It was pre-trained on a large-scale procedural corpus (PubMed articles/chemical patents/cooking recipes) containing over 12B tokens and shows great performance on downstream tasks. More details can be found in the following [paper](https://arxiv.org/abs/2109.04711):
```
@inproceedings{bai-etal-2021-pre,
title = "Pre-train or Annotate? Domain Adaptation with a Constrained Budget",
author = "Bai, Fan and
Ritter, Alan and
Xu, Wei",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
}
```
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("fbaigt/procbert")
model = AutoModelForTokenClassification.from_pretrained("fbaigt/procbert")
```
More usage details can be found [here](https://github.com/bflashcp3f/ProcBERT). |
federicopascual/finetune-sentiment-analysis-model-3000-samples | 595ae6575f96bc971fd033b3560a98e80ede9517 | 2021-12-30T19:29:48.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | federicopascual | null | federicopascual/finetune-sentiment-analysis-model-3000-samples | 15 | null | transformers | 9,510 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetune-sentiment-analysis-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8866666666666667
- name: F1
type: f1
value: 0.8944099378881988
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetune-sentiment-analysis-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4558
- Accuracy: 0.8867
- F1: 0.8944
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
figurative-nlp/t5-figurative-paraphrase | 4c382b695540ace5fa8ce647e3fcd67a372a93f8 | 2022-02-17T12:21:53.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | figurative-nlp | null | figurative-nlp/t5-figurative-paraphrase | 15 | 2 | transformers | 9,511 | This model converts figurative/metaphorical expressions into literal expressions. Below is an example of how to use the model:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("figurative-nlp/t5-figurative-paraphrase")
model = AutoModelForSeq2SeqLM.from_pretrained("figurative-nlp/t5-figurative-paraphrase")

input_ids = tokenizer(
    "paraphrase the sentence : i will talk this story to you from A to Z", return_tensors="pt"
).input_ids  # Batch size 1

outputs = model.generate(input_ids, num_beams=5)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
# result: i will talk this story to you from beginning to end.
```
For example:
**Input**: He is always bang on when he makes a speech.
**Output**: He is always presice when he makes a speech.
**Input**: He always buy what he said.
**Output**: He always agree with what he said.
**Input**: Your team will be done like dinner if they play against the all-star team.
**Output**: Your team will be defeated if they play against the all-star team. (this output is not particularly accurate)
Note: the figurative language here includes metaphors, idioms, and similes. We do not guarantee that the generated results will be satisfactory; we are working to improve the model. |
fnlp/elasticbert-large | 0a5689cea93ed0bf88c87bcd623e0de0f98516e2 | 2021-10-28T11:05:49.000Z | [
"pytorch",
"elasticbert",
"fill-mask",
"arxiv:2110.07038",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | fnlp | null | fnlp/elasticbert-large | 15 | 2 | transformers | 9,512 | # ElasticBERT-LARGE
## Model description
This is an implementation of the `large` version of ElasticBERT.
[**Towards Efficient NLP: A Standard Evaluation and A Strong Baseline**](https://arxiv.org/pdf/2110.07038.pdf)
Xiangyang Liu, Tianxiang Sun, Junliang He, Lingling Wu, Xinyu Zhang, Hao Jiang, Zhao Cao, Xuanjing Huang, Xipeng Qiu
## Code link
[**fastnlp/elasticbert**](https://github.com/fastnlp/ElasticBERT)
## Usage
```python
>>> from transformers import BertTokenizer as ElasticBertTokenizer
>>> from models.configuration_elasticbert import ElasticBertConfig
>>> from models.modeling_elasticbert import ElasticBertForSequenceClassification
>>> num_output_layers = 1
>>> config = ElasticBertConfig.from_pretrained('fnlp/elasticbert-large', num_output_layers=num_output_layers )
>>> tokenizer = ElasticBertTokenizer.from_pretrained('fnlp/elasticbert-large')
>>> model = ElasticBertForSequenceClassification.from_pretrained('fnlp/elasticbert-large', config=config)
>>> input_ids = tokenizer.encode('The actors are fantastic .', return_tensors='pt')
>>> outputs = model(input_ids)
```
## Citation
```bibtex
@article{liu2021elasticbert,
author = {Xiangyang Liu and
Tianxiang Sun and
Junliang He and
Lingling Wu and
Xinyu Zhang and
Hao Jiang and
Zhao Cao and
Xuanjing Huang and
Xipeng Qiu},
title = {Towards Efficient {NLP:} {A} Standard Evaluation and {A} Strong Baseline},
journal = {CoRR},
volume = {abs/2110.07038},
year = {2021},
url = {https://arxiv.org/abs/2110.07038},
eprinttype = {arXiv},
eprint = {2110.07038},
timestamp = {Fri, 22 Oct 2021 13:33:09 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2110-07038.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
geekfeed/gpt2_ja | 6c297f7e58fc6e7c75d654941380620cd3710660 | 2021-05-21T16:11:52.000Z | [
"pytorch",
"jax",
"gpt2",
"feature-extraction",
"transformers"
]
| feature-extraction | false | geekfeed | null | geekfeed/gpt2_ja | 15 | null | transformers | 9,513 | hello
|
ghadeermobasher/BC2GM-Gene-Modified_scibert_scivocab_cased | cf11c70aceb0994ec32ebedfa0a2e878043b12f9 | 2022-01-23T19:55:04.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC2GM-Gene-Modified_scibert_scivocab_cased | 15 | null | transformers | 9,514 | Entry not found |
ghadeermobasher/BC2GM-Gene_ImbalancedPubMedBERT | 2574cb8b0ee0fe4af0e9b272c51f855bfa3c1b01 | 2022-01-22T01:44:25.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC2GM-Gene_ImbalancedPubMedBERT | 15 | null | transformers | 9,515 | Entry not found |
ghadeermobasher/BC4-Original-biobert-v1.1 | ab10a139dbc7b0c0c9bc6a4558a206e5d73fb3e5 | 2022-02-24T14:45:29.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC4-Original-biobert-v1.1 | 15 | null | transformers | 9,516 | Entry not found |
ghadeermobasher/BC4-Original-bluebert_pubmed_uncased_L-12_H-768_A-12 | cbceb41840ab0b1b4d9cd8d4cc7a19ac477fae8d | 2022-02-24T14:22:37.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC4-Original-bluebert_pubmed_uncased_L-12_H-768_A-12 | 15 | null | transformers | 9,517 | Entry not found |
ghadeermobasher/BC4-Original-scibert_scivocab_uncased | 1bb6e36961e4a831a996c806637ac891282bf3e9 | 2022-02-24T14:28:32.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC4-Original-scibert_scivocab_uncased | 15 | null | transformers | 9,518 | Entry not found |
ghadeermobasher/BC5CDR-Chemical-imbalanced-bluebert_pubmed_uncased_L-12_H-768_A-12_latest | 17f9443fd2d369763bf7c64b4eb9206803f0df27 | 2022-02-21T23:07:06.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC5CDR-Chemical-imbalanced-bluebert_pubmed_uncased_L-12_H-768_A-12_latest | 15 | null | transformers | 9,519 | Entry not found |
glasses/resnet18 | a15b2ef76c4e01cc6b3f4518f56c1c98722d6793 | 2021-11-30T20:06:28.000Z | [
"pytorch",
"dataset:imagenet",
"arxiv:1512.03385",
"arxiv:1812.01187",
"transformers",
"image-classification",
"license:apache-2.0"
]
| image-classification | false | glasses | null | glasses/resnet18 | 15 | null | transformers | 9,520 | ---
license: apache-2.0
tags:
- image-classification
datasets:
- imagenet
---
# resnet18
Implementation of ResNet proposed in [Deep Residual Learning for Image
Recognition](https://arxiv.org/abs/1512.03385)
``` python
ResNet.resnet18()
ResNet.resnet26()
ResNet.resnet34()
ResNet.resnet50()
ResNet.resnet101()
ResNet.resnet152()
ResNet.resnet200()
# Variants (d) proposed in `Bag of Tricks for Image Classification with Convolutional Neural Networks <https://arxiv.org/pdf/1812.01187.pdf>`_
ResNet.resnet26d()
ResNet.resnet34d()
ResNet.resnet50d()
# You can construct your own one by changing `stem` and `block`
resnet101d = ResNet.resnet101(stem=ResNetStemC, block=partial(ResNetBottleneckBlock, shortcut=ResNetShorcutD))
```
Examples:
``` python
# change activation
ResNet.resnet18(activation = nn.SELU)
# change number of classes (default is 1000 )
ResNet.resnet18(n_classes=100)
# pass a different block
ResNet.resnet18(block=SENetBasicBlock)
# change the stem
model = ResNet.resnet18(stem=ResNetStemC)
# change the shortcut
model = ResNet.resnet18(block=partial(ResNetBasicBlock, shortcut=ResNetShorcutD))
# store each feature
x = torch.rand((1, 3, 224, 224))
# get features
model = ResNet.resnet18()
# first call .features: this will activate the forward hooks and tells the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
#[torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])]
```
|
gpssohi/distilbart-qgen-3-3 | 66e9d4f41a1c4bf55bdab0a9eb904476542d5d06 | 2022-01-12T08:29:26.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:squad",
"transformers",
"question-generation",
"summarization",
"license:apache-2.0",
"autotrain_compatible"
]
| summarization | false | gpssohi | null | gpssohi/distilbart-qgen-3-3 | 15 | 2 | transformers | 9,521 | ---
language: en
tags:
- question-generation
- summarization
license: apache-2.0
datasets:
- squad
---
# Introduction
This model checkpoint is obtained by first fine-tuning the sshleifer/distilbart-cnn-6-6 summarization checkpoint on the SQuAD dataset. After this, the 6-6 fine-tuned model is distilled down to a 3-3 model which gives us the final checkpoint. [GitHub Link for training scripts.](https://github.com/darth-c0d3r/bart-question-generation)
# Usage
The input format is as follows: `[answer] <s> [passage]`. The model will predict the question that corresponds to the answer from the passage.
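A minimal generation sketch for this format (not part of the original card; the passage, answer, and generation settings are illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("gpssohi/distilbart-qgen-3-3")
model = AutoModelForSeq2SeqLM.from_pretrained("gpssohi/distilbart-qgen-3-3")

# Input format: "[answer] <s> [passage]"
answer = "Normandy"
passage = (
    "The Normans were the people who in the 10th and 11th centuries "
    "gave their name to Normandy, a region in France."
)
inputs = tokenizer(f"{answer} <s> {passage}", return_tensors="pt")

# The model generates a question whose answer is the given span
outputs = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```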
# Plot

# Dataset
The goal of Question Generation is to generate a valid and fluent question according to a given passage and the target answer. Hence, the input to the model will be a passage context and an answer, and the output / target will be the question for the given answer. Question Generation can be used in many scenarios, such as automatic tutoring systems, improving the performance of Question Answering models and enabling chat-bots to lead a conversation. The final dataset is created by taking the union of the following Question Answering Datasets. The dataset must have the following three columns: context, question, answer.
## [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/)
Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowd-workers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. We use the SQuAD 1.1 variant which does not have unanswerable questions. So, every question will have a corresponding answer and vice-versa.
### Preprocessing
The first step is to remove questions which don't have answers. After that, we split the train set into Train and Eval sets and treat the dev set as the test set.
### Stats
**Original Dataset**
| Split | Num Docs | Num Contexts | Ques w/ Ans | Ques w/o Ans | Num Unique Ans |
| ----- | -------- | ------------ | ----------- | ------------ | -------------- |
| Train | 442 | 19035 | 86821 | 43498 | 86821 |
| Dev | 35 | 1204 | 5928 | 5945 | 10279 |
**After Preprocessing**
| Split | Num Rows | Context | Answer | Question |
| ----- | -------- | ---------- | ------ | -------- |
| Train | 80995 | 653,120,20 | 43,3,1 | 40,10,1 |
| Eval | 5826 | 445,123,67 | 28,3,1 | 29,10,3 |
| Test | 10297 | 629,129,25 | 29,4,1 | 31,10,3 |
The numbers in the columns indicate max, avg, min number of words.
|
gpssohi/distilbart-qgen-6-6 | 18d85ee5d5482d7af7b5f719048f4bc641c3a5ff | 2022-01-12T08:29:13.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:squad",
"transformers",
"summarization",
"question-generation",
"license:apache-2.0",
"autotrain_compatible"
]
| summarization | false | gpssohi | null | gpssohi/distilbart-qgen-6-6 | 15 | 1 | transformers | 9,522 | ---
language: en
tags:
- summarization
- question-generation
license: apache-2.0
datasets:
- squad
---
# Introduction
This model checkpoint is obtained by fine-tuning the `sshleifer/distilbart-cnn-6-6` summarization checkpoint on the SQuAD dataset. [GitHub Link for training scripts.](https://github.com/darth-c0d3r/bart-question-generation)
# Usage
The input format is as follows: `[answer] <s> [passage]`. The model will predict the question that corresponds to the answer from the passage.
# Plot

# Dataset
The goal of Question Generation is to generate a valid and fluent question according to a given passage and the target answer. Hence, the input to the model will be a passage context and an answer, and the output / target will be the question for the given answer. Question Generation can be used in many scenarios, such as automatic tutoring systems, improving the performance of Question Answering models and enabling chat-bots to lead a conversation. The final dataset is created by taking the union of the following Question Answering Datasets. The dataset must have the following three columns: context, question, answer.
## [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/)
Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowd-workers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. We use the SQuAD 1.1 variant which does not have unanswerable questions. So, every question will have a corresponding answer and vice-versa.
### Preprocessing
The first step is to remove questions that don't have answers. After that, we split the train set into Train and Eval sets and treat the dev set as the test set.
### Stats
**Original Dataset**
| Split | Num Docs | Num Contexts | Ques w/ Ans | Ques w/o Ans | Num Unique Ans |
| ----- | -------- | ------------ | ----------- | ------------ | -------------- |
| Train | 442 | 19035 | 86821 | 43498 | 86821 |
| Dev | 35 | 1204 | 5928 | 5945 | 10279 |
**After Preprocessing**
| Split | Num Rows | Context | Answer | Question |
| ----- | -------- | ---------- | ------ | -------- |
| Train | 80995 | 653,120,20 | 43,3,1 | 40,10,1 |
| Eval | 5826 | 445,123,67 | 28,3,1 | 29,10,3 |
| Test | 10297 | 629,129,25 | 29,4,1 | 31,10,3 |
The numbers in the columns indicate max, avg, min number of words.
|
gwkim22/domain_base2_disc | 253c926654ffcae55fe363bca751474d03b90ec7 | 2021-07-19T01:56:14.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
]
| null | false | gwkim22 | null | gwkim22/domain_base2_disc | 15 | 1 | transformers | 9,523 | "domain_base2_disc_0719"
|
hamzaMM/questionClassifier | d1d326c6965fb4f91070df4356562dade3b37364 | 2021-12-02T20:08:26.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | hamzaMM | null | hamzaMM/questionClassifier | 15 | 2 | transformers | 9,524 | Entry not found |
hrdipto/wav2vec2-xls-r-300m-bangla-command-generated-data-finetune | 6361d8dfe95c21a0fe20389a195e9de9aab1de02 | 2022-02-14T08:58:20.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"model-index"
]
| automatic-speech-recognition | false | hrdipto | null | hrdipto/wav2vec2-xls-r-300m-bangla-command-generated-data-finetune | 15 | null | transformers | 9,525 | ---
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-xls-r-300m-bangla-command-generated-data-finetune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-bangla-command-generated-data-finetune
This model is a fine-tuned version of [hrdipto/wav2vec2-xls-r-300m-bangla-command-data](https://huggingface.co/hrdipto/wav2vec2-xls-r-300m-bangla-command-data) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0099
- eval_wer: 0.0208
- eval_runtime: 2.5526
- eval_samples_per_second: 75.217
- eval_steps_per_second: 9.402
- epoch: 71.43
- step: 2000
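## Usage
A minimal transcription sketch (not part of the original card; the audio file name is a placeholder, and the checkpoint is assumed to work with the standard `transformers` ASR pipeline):
```python
from transformers import pipeline

# Load the fine-tuned wav2vec2 checkpoint as a speech-recognition pipeline
asr = pipeline(
    "automatic-speech-recognition",
    model="hrdipto/wav2vec2-xls-r-300m-bangla-command-generated-data-finetune",
)

# "command.wav" is a placeholder path; any 16 kHz mono audio file should work
print(asr("command.wav"))
```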
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
huggingartists/kanye-west | ef0b90df2f597af17783d7ae3477a01de520f35c | 2022-05-05T00:27:21.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"dataset:huggingartists/kanye-west",
"transformers",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm"
]
| text-generation | false | huggingartists | null | huggingartists/kanye-west | 15 | null | transformers | 9,526 | ---
language: en
datasets:
- huggingartists/kanye-west
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/54520386ec39aca6408c7e2c156ae10a.399x399x1.png')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Kanye West</div>
<a href="https://genius.com/artists/kanye-west">
<div style="text-align: center; font-size: 14px;">@kanye-west</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Kanye West.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/kanye-west).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/kanye-west")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/hl7afoso/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Kanye West's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/28dw8m5v) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/28dw8m5v/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/kanye-west')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/kanye-west")
model = AutoModelWithLMHead.from_pretrained("huggingartists/kanye-west")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingtweets/amazon | 7bc08510372ae3b213814f77f110ae1f3138dd3a | 2021-05-21T18:33:06.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/amazon | 15 | null | transformers | 9,527 | ---
language: en
thumbnail: https://www.huggingtweets.com/amazon/1609713999453/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/949070360103698432/kXSiPeTk_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Amazon 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@amazon bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@amazon's tweets](https://twitter.com/amazon).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3242</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>40</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>60</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>3142</td>
</tr>
</tbody>
</table>
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1fd78mc2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @amazon's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/76pxw0n0) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/76pxw0n0/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/amazon'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/deepleffen | 9f6a7a1f1bb73ef33f7cead6ee0b72ba37411d4f | 2022-06-03T17:34:54.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/deepleffen | 15 | null | transformers | 9,528 | ---
language: en
thumbnail: http://www.huggingtweets.com/deepleffen/1654277690184/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1241879678455078914/e2EdZIrr_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Deep Leffen Bot</div>
<div style="text-align: center; font-size: 14px;">@deepleffen</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Deep Leffen Bot.
| Data | Deep Leffen Bot |
| --- | --- |
| Tweets downloaded | 589 |
| Retweets | 14 |
| Short tweets | 27 |
| Tweets kept | 548 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1p32tock/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @deepleffen's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/imjjixah) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/imjjixah/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/deepleffen')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/nvidia | cceee848844258789df63f96b72b676fade2a4aa | 2021-05-22T17:00:36.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/nvidia | 15 | null | transformers | 9,529 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1145524454170062848/U4lxVYEw_400x400.png')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">NVIDIA 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@nvidia bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@nvidia's tweets](https://twitter.com/nvidia).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3222</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>1876</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>18</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>1328</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/35e4bboc/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @nvidia's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/2t7z7a45) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/2t7z7a45/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/nvidia'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
<!--- random size file --> |
imzachjohnson/autonlp-spinner-check-16492731 | 3e96dbddaf4b1b2c760fbf196391ac77ecfc7890 | 2021-10-11T00:02:11.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:imzachjohnson/autonlp-data-spinner-check",
"transformers",
"autonlp"
]
| text-classification | false | imzachjohnson | null | imzachjohnson/autonlp-spinner-check-16492731 | 15 | null | transformers | 9,530 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- imzachjohnson/autonlp-data-spinner-check
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 16492731
## Validation Metrics
- Loss: 0.21610039472579956
- Accuracy: 0.9155366722657816
- Precision: 0.9530714194995978
- Recall: 0.944871149164778
- AUC: 0.9553238723676906
- F1: 0.9489535692456846
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/imzachjohnson/autonlp-spinner-check-16492731
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("imzachjohnson/autonlp-spinner-check-16492731", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("imzachjohnson/autonlp-spinner-check-16492731", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
jaimin/plagiarism_checker | 63fdab21cd1ee8fa220533eeb00c77238156728f | 2021-08-20T05:44:24.000Z | [
"pytorch",
"longformer",
"text-classification",
"transformers"
]
| text-classification | false | jaimin | null | jaimin/plagiarism_checker | 15 | null | transformers | 9,531 | "hello"
|
kanishka/GlossBERT | 0cc3b83af5496e27ebcc95ef0cf37ea0a9281a7a | 2021-09-22T08:54:41.000Z | [
"pytorch",
"bert",
"en",
"dataset:SemCor3.0",
"arxiv:1908.07245",
"transformers",
"glossbert",
"license:mit"
]
| null | false | kanishka | null | kanishka/GlossBERT | 15 | null | transformers | 9,532 | ---
language: en
tags:
- glossbert
license: mit
datasets:
- SemCor3.0
---
## GlossBERT
A BERT-based model fine-tuned on SemCor 3.0 to perform word-sense-disambiguation by leveraging gloss information. This model is the research output of the paper titled: '[GlossBERT: BERT for Word Sense Disambiguation with Gloss Knowledge](https://arxiv.org/pdf/1908.07245.pdf)'
Disclaimer: This model was built and trained by a group of researchers different from the repository's author. The original model code can be found on GitHub: https://github.com/HSLCY/GlossBERT
## Usage
The following code loads GlossBERT:
```py
from transformers import AutoTokenizer, BertForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained('kanishka/GlossBERT')
model = BertForSequenceClassification.from_pretrained('kanishka/GlossBERT')
```
## Citation
If you use this model in any of your projects, please cite the original authors using the following bibtex:
```
@inproceedings{huang-etal-2019-glossbert,
title = "{G}loss{BERT}: {BERT} for Word Sense Disambiguation with Gloss Knowledge",
author = "Huang, Luyao and
Sun, Chi and
Qiu, Xipeng and
Huang, Xuanjing",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D19-1355",
doi = "10.18653/v1/D19-1355",
pages = "3507--3512"
}
``` |
keshan/sinhala-roberta-oscar | 655873a1b237c7e09c424d0a55bb9fb05456248e | 2021-07-14T06:28:47.000Z | [
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"si",
"dataset:oscar",
"arxiv:1907.11692",
"transformers",
"oscar",
"Sinhala",
"autotrain_compatible"
]
| fill-mask | false | keshan | null | keshan/sinhala-roberta-oscar | 15 | null | transformers | 9,533 | ---
language: si
tags:
- oscar
- Sinhala
- roberta
- fill-mask
widget:
- text: "මම සිංහල භාෂාව <mask>"
datasets:
- oscar
---
### Overview
This is a slightly smaller model trained on the [OSCAR](https://oscar-corpus.com/) Sinhala dedup dataset. As Sinhala is a low-resource language, only a handful of models have been trained for it, so this is a good starting point for training on more downstream tasks.
## Model Specification
The model chosen for training is [Roberta](https://arxiv.org/abs/1907.11692) with the following specifications:
1. vocab_size=50265
2. max_position_embeddings=514
3. num_attention_heads=12
4. num_hidden_layers=12
5. type_vocab_size=1
## How to Use
You can use this model directly with a pipeline for masked language modeling:
```py
from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline
model = AutoModelWithLMHead.from_pretrained("keshan/sinhala-roberta-oscar")
tokenizer = AutoTokenizer.from_pretrained("keshan/sinhala-roberta-oscar")
fill_mask = pipeline('fill-mask', model=model, tokenizer=tokenizer)
fill_mask("මම ගෙදර <mask>.")
```
|
kurianbenoy/distilbert-base-uncased-finetuned-sst-2-english-finetuned-imdb | 465c2a001ccd95ff2faee805ffe909ef79fdf366 | 2022-02-21T11:55:41.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | kurianbenoy | null | kurianbenoy/distilbert-base-uncased-finetuned-sst-2-english-finetuned-imdb | 15 | null | transformers | 9,534 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-sst-2-english-finetuned-imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.93032
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst-2-english-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2165
- Accuracy: 0.9303
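A minimal inference sketch for this checkpoint is shown below. The label names are assumed to follow the base SST-2 model (NEGATIVE/POSITIVE); verify them against this model's config before relying on them.
```python
from transformers import pipeline

# Sentiment classification on IMDB-style movie reviews.
classifier = pipeline(
    "text-classification",
    model="kurianbenoy/distilbert-base-uncased-finetuned-sst-2-english-finetuned-imdb",
)
print(classifier("A beautifully shot film, but the script never quite lands."))
```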
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a matching `TrainingArguments` sketch is shown after the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
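For reproducibility, those settings roughly correspond to the following `TrainingArguments` (a sketch; `output_dir` is a placeholder, and the Adam betas/epsilon listed above are the library defaults):
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; Adam betas/epsilon are defaults.
training_args = TrainingArguments(
    output_dir="distilbert-imdb-finetune",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```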
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2749 | 1.0 | 3125 | 0.2165 | 0.9303 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
leonardvorbeck/wav2vec2-large-robust-SB300 | e630e8662f2d52c6be25ccd3e95ba417f7c8b21b | 2021-08-26T12:22:18.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:libri_light",
"dataset:common_voice",
"dataset:switchboard",
"dataset:fisher",
"arxiv:2104.01027",
"transformers",
"speech",
"CTC",
"Attention",
"license:apache-2.0"
]
| automatic-speech-recognition | false | leonardvorbeck | null | leonardvorbeck/wav2vec2-large-robust-SB300 | 15 | 1 | transformers | 9,535 | ---
language: en
datasets:
- libri_light
- common_voice
- switchboard
- fisher
tags:
- speech
- automatic-speech-recognition
- CTC
- Attention
- wav2vec2
license: apache-2.0
---
# Wav2Vec2-Large-Robust - Finetuned on Switchboard (300 hours)
## Note: The model has not been initialized. If you want to use it without further fine-tuning, run a forward pass first to recalculate the normalized weights of the positional convolutional layer:
```ipython
with torch.no_grad():
model(torch.randn((1,300_000)))
```
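Expanded into a self-contained form, that note might look like the following (a sketch, assuming the checkpoint loads with `Wav2Vec2ForCTC`, the usual class for CTC-fine-tuned wav2vec2 models):
```python
import torch
from transformers import Wav2Vec2ForCTC

# One dummy forward pass lets the weight-normalised positional convolution
# layer recompute its weights before the model is used or saved.
model = Wav2Vec2ForCTC.from_pretrained("leonardvorbeck/wav2vec2-large-robust-SB300")
model.eval()
with torch.no_grad():
    model(torch.randn((1, 300_000)))
```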
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
The base model was pretrained on 16kHz sampled speech audio.
Speech datasets from multiple domains were used to pretrain the model:
- [Libri-Light](https://github.com/facebookresearch/libri-light): open-source audio books from the LibriVox project; clean, read-out audio data
- [CommonVoice](https://huggingface.co/datasets/common_voice): crowd-source collected audio data; read-out text snippets
- [Switchboard](https://catalog.ldc.upenn.edu/LDC97S62): telephone speech corpus; noisy telephone data
- [Fisher](https://catalog.ldc.upenn.edu/LDC2004T19): conversational telephone speech; noisy telephone data
When using the model, make sure that your speech input is also sampled at 16kHz.
Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information.
[Paper Robust Wav2Vec2](https://arxiv.org/abs/2104.01027)
Authors: Wei-Ning Hsu, Anuroop Sriram, Alexei Baevski, Tatiana Likhomanenko, Qiantong Xu, Vineel Pratap, Jacob Kahn, Ann Lee, Ronan Collobert, Gabriel Synnaeve, Michael Auli
**Abstract**
Self-supervised learning of speech representations has been a very active research area but most work is focused on a single domain such as read audio books for which there exist large quantities of labeled and unlabeled data. In this paper, we explore more general setups where the domain of the unlabeled data for pre-training data differs from the domain of the labeled data for fine-tuning, which in turn may differ from the test data domain. Our experiments show that using target domain data during pre-training leads to large performance improvements across a variety of setups. On a large-scale competitive setup, we show that pre-training on unlabeled in-domain data reduces the gap between models trained on in-domain and out-of-domain labeled data by 66%-73%. This has obvious practical implications since it is much easier to obtain unlabeled target domain data than labeled data. Moreover, we find that pre-training on multiple domains improves generalization performance on domains not seen during training. Code and models will be made available at this https URL.
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
See [this notebook](https://colab.research.google.com/drive/1FjTsqbYKphl9kL-eILgUc-bl4zVThL8F?usp=sharing) for more information on how to fine-tune the model. |
luffycodes/bb_narataka_roberta_large_nli_bsz_16_bb_bsz_16_nli_lr_1e5_bb_lr_1e5 | 45fa2fa42a97b0478a98380d53a7a50ad0177cec | 2021-10-26T11:37:15.000Z | [
"pytorch",
"roberta",
"transformers"
]
| null | false | luffycodes | null | luffycodes/bb_narataka_roberta_large_nli_bsz_16_bb_bsz_16_nli_lr_1e5_bb_lr_1e5 | 15 | null | transformers | 9,536 | Entry not found |
luffycodes/bb_narataka_roberta_large_nli_bsz_16_bb_bsz_16_nli_lr_1e5_bb_lr_1e5_norm_bb_mlm_loss | ef1df072f813dcd203c30e97b7b273ed1f4e33ad | 2021-10-26T04:03:29.000Z | [
"pytorch",
"roberta",
"transformers"
]
| null | false | luffycodes | null | luffycodes/bb_narataka_roberta_large_nli_bsz_16_bb_bsz_16_nli_lr_1e5_bb_lr_1e5_norm_bb_mlm_loss | 15 | null | transformers | 9,537 | Entry not found |
m3hrdadfi/zabanshenas-roberta-base-mix | a82c58d3632aaedec457182ea6e65d523fc960b0 | 2021-06-24T19:40:27.000Z | [
"pytorch",
"tf",
"roberta",
"text-classification",
"multilingual",
"dataset:wili_2018",
"transformers",
"license:apache-2.0"
]
| text-classification | false | m3hrdadfi | null | m3hrdadfi/zabanshenas-roberta-base-mix | 15 | 1 | transformers | 9,538 | ---
language: multilingual
license: apache-2.0
datasets:
- wili_2018
---
# Zabanshenas - Language Detector
Zabanshenas is a Transformer-based solution for identifying the most likely language of a written document/text. Zabanshenas is a Persian word that has two meanings:
- A person who studies linguistics.
- A way to identify the type of written language.
## How to use
Follow [Zabanshenas repo](https://github.com/m3hrdadfi/zabanshenas) for more information!
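As a quick start, the checkpoint can be used as a standard text-classification pipeline; the sketch below assumes the predicted labels are the WiLI-2018 language codes used in the evaluation tables (e.g. `eng`, `fas`).
```python
from transformers import pipeline

# Minimal language-detection sketch; labels are assumed to be
# WiLI-2018 codes such as "eng" or "fas".
detector = pipeline(
    "text-classification",
    model="m3hrdadfi/zabanshenas-roberta-base-mix",
)
print(detector("This is a short English sentence."))
print(detector("این یک جملهٔ کوتاه فارسی است."))
```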
## Evaluation
The following tables summarize the scores obtained by the model, both overall and per class.
### By Paragraph
| language | precision | recall | f1-score |
|:--------------------------------------:|:---------:|:--------:|:--------:|
| Achinese (ace) | 1.000000 | 0.982143 | 0.990991 |
| Afrikaans (afr) | 1.000000 | 1.000000 | 1.000000 |
| Alemannic German (als) | 1.000000 | 0.946429 | 0.972477 |
| Amharic (amh) | 1.000000 | 0.982143 | 0.990991 |
| Old English (ang) | 0.981818 | 0.964286 | 0.972973 |
| Arabic (ara) | 0.846154 | 0.982143 | 0.909091 |
| Aragonese (arg) | 1.000000 | 1.000000 | 1.000000 |
| Egyptian Arabic (arz) | 0.979592 | 0.857143 | 0.914286 |
| Assamese (asm) | 0.981818 | 0.964286 | 0.972973 |
| Asturian (ast) | 0.964912 | 0.982143 | 0.973451 |
| Avar (ava) | 0.941176 | 0.905660 | 0.923077 |
| Aymara (aym) | 0.964912 | 0.982143 | 0.973451 |
| South Azerbaijani (azb) | 0.965517 | 1.000000 | 0.982456 |
| Azerbaijani (aze) | 1.000000 | 1.000000 | 1.000000 |
| Bashkir (bak) | 1.000000 | 0.978261 | 0.989011 |
| Bavarian (bar) | 0.843750 | 0.964286 | 0.900000 |
| Central Bikol (bcl) | 1.000000 | 0.982143 | 0.990991 |
| Belarusian (Taraschkewiza) (be-tarask) | 1.000000 | 0.875000 | 0.933333 |
| Belarusian (bel) | 0.870968 | 0.964286 | 0.915254 |
| Bengali (ben) | 0.982143 | 0.982143 | 0.982143 |
| Bhojpuri (bho) | 1.000000 | 0.928571 | 0.962963 |
| Banjar (bjn) | 0.981132 | 0.945455 | 0.962963 |
| Tibetan (bod) | 1.000000 | 0.982143 | 0.990991 |
| Bosnian (bos) | 0.552632 | 0.375000 | 0.446809 |
| Bishnupriya (bpy) | 1.000000 | 0.982143 | 0.990991 |
| Breton (bre) | 1.000000 | 0.964286 | 0.981818 |
| Bulgarian (bul) | 1.000000 | 0.964286 | 0.981818 |
| Buryat (bxr) | 0.946429 | 0.946429 | 0.946429 |
| Catalan (cat) | 0.982143 | 0.982143 | 0.982143 |
| Chavacano (cbk) | 0.914894 | 0.767857 | 0.834951 |
| Min Dong (cdo) | 1.000000 | 0.982143 | 0.990991 |
| Cebuano (ceb) | 1.000000 | 1.000000 | 1.000000 |
| Czech (ces) | 1.000000 | 1.000000 | 1.000000 |
| Chechen (che) | 1.000000 | 1.000000 | 1.000000 |
| Cherokee (chr) | 1.000000 | 0.963636 | 0.981481 |
| Chuvash (chv) | 0.938776 | 0.958333 | 0.948454 |
| Central Kurdish (ckb) | 1.000000 | 1.000000 | 1.000000 |
| Cornish (cor) | 1.000000 | 1.000000 | 1.000000 |
| Corsican (cos) | 1.000000 | 0.982143 | 0.990991 |
| Crimean Tatar (crh) | 1.000000 | 0.946429 | 0.972477 |
| Kashubian (csb) | 1.000000 | 0.963636 | 0.981481 |
| Welsh (cym) | 1.000000 | 1.000000 | 1.000000 |
| Danish (dan) | 1.000000 | 1.000000 | 1.000000 |
| German (deu) | 0.828125 | 0.946429 | 0.883333 |
| Dimli (diq) | 0.964912 | 0.982143 | 0.973451 |
| Dhivehi (div) | 1.000000 | 1.000000 | 1.000000 |
| Lower Sorbian (dsb) | 1.000000 | 0.982143 | 0.990991 |
| Doteli (dty) | 0.940000 | 0.854545 | 0.895238 |
| Emilian (egl) | 1.000000 | 0.928571 | 0.962963 |
| Modern Greek (ell) | 1.000000 | 1.000000 | 1.000000 |
| English (eng) | 0.588889 | 0.946429 | 0.726027 |
| Esperanto (epo) | 1.000000 | 0.982143 | 0.990991 |
| Estonian (est) | 0.963636 | 0.946429 | 0.954955 |
| Basque (eus) | 1.000000 | 0.982143 | 0.990991 |
| Extremaduran (ext) | 0.982143 | 0.982143 | 0.982143 |
| Faroese (fao) | 1.000000 | 1.000000 | 1.000000 |
| Persian (fas) | 0.948276 | 0.982143 | 0.964912 |
| Finnish (fin) | 1.000000 | 1.000000 | 1.000000 |
| French (fra) | 0.710145 | 0.875000 | 0.784000 |
| Arpitan (frp) | 1.000000 | 0.946429 | 0.972477 |
| Western Frisian (fry) | 0.982143 | 0.982143 | 0.982143 |
| Friulian (fur) | 1.000000 | 0.982143 | 0.990991 |
| Gagauz (gag) | 0.981132 | 0.945455 | 0.962963 |
| Scottish Gaelic (gla) | 0.982143 | 0.982143 | 0.982143 |
| Irish (gle) | 0.949153 | 1.000000 | 0.973913 |
| Galician (glg) | 1.000000 | 1.000000 | 1.000000 |
| Gilaki (glk) | 0.981132 | 0.945455 | 0.962963 |
| Manx (glv) | 1.000000 | 1.000000 | 1.000000 |
| Guarani (grn) | 1.000000 | 0.964286 | 0.981818 |
| Gujarati (guj) | 1.000000 | 0.982143 | 0.990991 |
| Hakka Chinese (hak) | 0.981818 | 0.964286 | 0.972973 |
| Haitian Creole (hat) | 1.000000 | 1.000000 | 1.000000 |
| Hausa (hau) | 1.000000 | 0.945455 | 0.971963 |
| Serbo-Croatian (hbs) | 0.448276 | 0.464286 | 0.456140 |
| Hebrew (heb) | 1.000000 | 0.982143 | 0.990991 |
| Fiji Hindi (hif) | 0.890909 | 0.890909 | 0.890909 |
| Hindi (hin) | 0.981481 | 0.946429 | 0.963636 |
| Croatian (hrv) | 0.500000 | 0.636364 | 0.560000 |
| Upper Sorbian (hsb) | 0.955556 | 1.000000 | 0.977273 |
| Hungarian (hun) | 1.000000 | 1.000000 | 1.000000 |
| Armenian (hye) | 1.000000 | 0.981818 | 0.990826 |
| Igbo (ibo) | 0.918033 | 1.000000 | 0.957265 |
| Ido (ido) | 1.000000 | 1.000000 | 1.000000 |
| Interlingue (ile) | 1.000000 | 0.962264 | 0.980769 |
| Iloko (ilo) | 0.947368 | 0.964286 | 0.955752 |
| Interlingua (ina) | 1.000000 | 1.000000 | 1.000000 |
| Indonesian (ind) | 0.761905 | 0.872727 | 0.813559 |
| Icelandic (isl) | 1.000000 | 1.000000 | 1.000000 |
| Italian (ita) | 0.861538 | 1.000000 | 0.925620 |
| Jamaican Patois (jam) | 1.000000 | 0.946429 | 0.972477 |
| Javanese (jav) | 0.964912 | 0.982143 | 0.973451 |
| Lojban (jbo) | 1.000000 | 1.000000 | 1.000000 |
| Japanese (jpn) | 1.000000 | 1.000000 | 1.000000 |
| Karakalpak (kaa) | 0.965517 | 1.000000 | 0.982456 |
| Kabyle (kab) | 1.000000 | 0.964286 | 0.981818 |
| Kannada (kan) | 0.982143 | 0.982143 | 0.982143 |
| Georgian (kat) | 1.000000 | 0.964286 | 0.981818 |
| Kazakh (kaz) | 0.980769 | 0.980769 | 0.980769 |
| Kabardian (kbd) | 1.000000 | 0.982143 | 0.990991 |
| Central Khmer (khm) | 0.960784 | 0.875000 | 0.915888 |
| Kinyarwanda (kin) | 0.981132 | 0.928571 | 0.954128 |
| Kirghiz (kir) | 1.000000 | 1.000000 | 1.000000 |
| Komi-Permyak (koi) | 0.962264 | 0.910714 | 0.935780 |
| Konkani (kok) | 0.964286 | 0.981818 | 0.972973 |
| Komi (kom) | 1.000000 | 0.962264 | 0.980769 |
| Korean (kor) | 1.000000 | 1.000000 | 1.000000 |
| Karachay-Balkar (krc) | 1.000000 | 0.982143 | 0.990991 |
| Ripuarisch (ksh) | 1.000000 | 0.964286 | 0.981818 |
| Kurdish (kur) | 1.000000 | 0.964286 | 0.981818 |
| Ladino (lad) | 1.000000 | 1.000000 | 1.000000 |
| Lao (lao) | 0.961538 | 0.909091 | 0.934579 |
| Latin (lat) | 0.877193 | 0.943396 | 0.909091 |
| Latvian (lav) | 0.963636 | 0.946429 | 0.954955 |
| Lezghian (lez) | 1.000000 | 0.964286 | 0.981818 |
| Ligurian (lij) | 1.000000 | 0.964286 | 0.981818 |
| Limburgan (lim) | 0.938776 | 1.000000 | 0.968421 |
| Lingala (lin) | 0.980769 | 0.927273 | 0.953271 |
| Lithuanian (lit) | 0.982456 | 1.000000 | 0.991150 |
| Lombard (lmo) | 1.000000 | 1.000000 | 1.000000 |
| Northern Luri (lrc) | 1.000000 | 0.928571 | 0.962963 |
| Latgalian (ltg) | 1.000000 | 0.982143 | 0.990991 |
| Luxembourgish (ltz) | 0.949153 | 1.000000 | 0.973913 |
| Luganda (lug) | 1.000000 | 1.000000 | 1.000000 |
| Literary Chinese (lzh) | 1.000000 | 1.000000 | 1.000000 |
| Maithili (mai) | 0.931034 | 0.964286 | 0.947368 |
| Malayalam (mal) | 1.000000 | 0.982143 | 0.990991 |
| Banyumasan (map-bms) | 0.977778 | 0.785714 | 0.871287 |
| Marathi (mar) | 0.949153 | 1.000000 | 0.973913 |
| Moksha (mdf) | 0.980000 | 0.890909 | 0.933333 |
| Eastern Mari (mhr) | 0.981818 | 0.964286 | 0.972973 |
| Minangkabau (min) | 1.000000 | 1.000000 | 1.000000 |
| Macedonian (mkd) | 1.000000 | 0.981818 | 0.990826 |
| Malagasy (mlg) | 0.981132 | 1.000000 | 0.990476 |
| Maltese (mlt) | 0.982456 | 1.000000 | 0.991150 |
| Min Nan Chinese (nan) | 1.000000 | 1.000000 | 1.000000 |
| Mongolian (mon) | 1.000000 | 0.981818 | 0.990826 |
| Maori (mri) | 1.000000 | 1.000000 | 1.000000 |
| Western Mari (mrj) | 0.982456 | 1.000000 | 0.991150 |
| Malay (msa) | 0.862069 | 0.892857 | 0.877193 |
| Mirandese (mwl) | 1.000000 | 0.982143 | 0.990991 |
| Burmese (mya) | 1.000000 | 1.000000 | 1.000000 |
| Erzya (myv) | 0.818182 | 0.964286 | 0.885246 |
| Mazanderani (mzn) | 0.981481 | 1.000000 | 0.990654 |
| Neapolitan (nap) | 1.000000 | 0.981818 | 0.990826 |
| Navajo (nav) | 1.000000 | 1.000000 | 1.000000 |
| Classical Nahuatl (nci) | 0.981481 | 0.946429 | 0.963636 |
| Low German (nds) | 0.982143 | 0.982143 | 0.982143 |
| West Low German (nds-nl) | 1.000000 | 1.000000 | 1.000000 |
| Nepali (macrolanguage) (nep) | 0.881356 | 0.928571 | 0.904348 |
| Newari (new) | 1.000000 | 0.909091 | 0.952381 |
| Dutch (nld) | 0.982143 | 0.982143 | 0.982143 |
| Norwegian Nynorsk (nno) | 1.000000 | 1.000000 | 1.000000 |
| Bokmål (nob) | 1.000000 | 1.000000 | 1.000000 |
| Narom (nrm) | 0.981818 | 0.964286 | 0.972973 |
| Northern Sotho (nso) | 1.000000 | 1.000000 | 1.000000 |
| Occitan (oci) | 0.903846 | 0.839286 | 0.870370 |
| Livvi-Karelian (olo) | 0.982456 | 1.000000 | 0.991150 |
| Oriya (ori) | 0.964912 | 0.982143 | 0.973451 |
| Oromo (orm) | 0.982143 | 0.982143 | 0.982143 |
| Ossetian (oss) | 0.982143 | 1.000000 | 0.990991 |
| Pangasinan (pag) | 0.980000 | 0.875000 | 0.924528 |
| Pampanga (pam) | 0.928571 | 0.896552 | 0.912281 |
| Panjabi (pan) | 1.000000 | 1.000000 | 1.000000 |
| Papiamento (pap) | 1.000000 | 0.964286 | 0.981818 |
| Picard (pcd) | 0.849057 | 0.849057 | 0.849057 |
| Pennsylvania German (pdc) | 0.854839 | 0.946429 | 0.898305 |
| Palatine German (pfl) | 0.946429 | 0.946429 | 0.946429 |
| Western Panjabi (pnb) | 0.981132 | 0.962963 | 0.971963 |
| Polish (pol) | 0.933333 | 1.000000 | 0.965517 |
| Portuguese (por) | 0.774648 | 0.982143 | 0.866142 |
| Pushto (pus) | 1.000000 | 0.910714 | 0.953271 |
| Quechua (que) | 0.962963 | 0.928571 | 0.945455 |
| Tarantino dialect (roa-tara) | 1.000000 | 0.964286 | 0.981818 |
| Romansh (roh) | 1.000000 | 0.928571 | 0.962963 |
| Romanian (ron) | 0.965517 | 1.000000 | 0.982456 |
| Rusyn (rue) | 0.946429 | 0.946429 | 0.946429 |
| Aromanian (rup) | 0.962963 | 0.928571 | 0.945455 |
| Russian (rus) | 0.859375 | 0.982143 | 0.916667 |
| Yakut (sah) | 1.000000 | 0.982143 | 0.990991 |
| Sanskrit (san) | 0.982143 | 0.982143 | 0.982143 |
| Sicilian (scn) | 1.000000 | 1.000000 | 1.000000 |
| Scots (sco) | 0.982143 | 0.982143 | 0.982143 |
| Samogitian (sgs) | 1.000000 | 0.982143 | 0.990991 |
| Sinhala (sin) | 0.964912 | 0.982143 | 0.973451 |
| Slovak (slk) | 1.000000 | 0.982143 | 0.990991 |
| Slovene (slv) | 1.000000 | 0.981818 | 0.990826 |
| Northern Sami (sme) | 0.962264 | 0.962264 | 0.962264 |
| Shona (sna) | 0.933333 | 1.000000 | 0.965517 |
| Sindhi (snd) | 1.000000 | 1.000000 | 1.000000 |
| Somali (som) | 0.948276 | 1.000000 | 0.973451 |
| Spanish (spa) | 0.739130 | 0.910714 | 0.816000 |
| Albanian (sqi) | 0.982143 | 0.982143 | 0.982143 |
| Sardinian (srd) | 1.000000 | 0.982143 | 0.990991 |
| Sranan (srn) | 1.000000 | 1.000000 | 1.000000 |
| Serbian (srp) | 1.000000 | 0.946429 | 0.972477 |
| Saterfriesisch (stq) | 1.000000 | 0.964286 | 0.981818 |
| Sundanese (sun) | 1.000000 | 0.977273 | 0.988506 |
| Swahili (macrolanguage) (swa) | 1.000000 | 1.000000 | 1.000000 |
| Swedish (swe) | 1.000000 | 1.000000 | 1.000000 |
| Silesian (szl) | 1.000000 | 0.981481 | 0.990654 |
| Tamil (tam) | 0.982143 | 1.000000 | 0.990991 |
| Tatar (tat) | 1.000000 | 1.000000 | 1.000000 |
| Tulu (tcy) | 0.982456 | 1.000000 | 0.991150 |
| Telugu (tel) | 1.000000 | 0.920000 | 0.958333 |
| Tetum (tet) | 1.000000 | 0.964286 | 0.981818 |
| Tajik (tgk) | 1.000000 | 1.000000 | 1.000000 |
| Tagalog (tgl) | 1.000000 | 1.000000 | 1.000000 |
| Thai (tha) | 0.932203 | 0.982143 | 0.956522 |
| Tongan (ton) | 1.000000 | 0.964286 | 0.981818 |
| Tswana (tsn) | 1.000000 | 1.000000 | 1.000000 |
| Turkmen (tuk) | 1.000000 | 0.982143 | 0.990991 |
| Turkish (tur) | 0.901639 | 0.982143 | 0.940171 |
| Tuvan (tyv) | 1.000000 | 0.964286 | 0.981818 |
| Udmurt (udm) | 1.000000 | 0.982143 | 0.990991 |
| Uighur (uig) | 1.000000 | 0.982143 | 0.990991 |
| Ukrainian (ukr) | 0.963636 | 0.946429 | 0.954955 |
| Urdu (urd) | 1.000000 | 0.982143 | 0.990991 |
| Uzbek (uzb) | 1.000000 | 1.000000 | 1.000000 |
| Venetian (vec) | 1.000000 | 0.982143 | 0.990991 |
| Veps (vep) | 0.982456 | 1.000000 | 0.991150 |
| Vietnamese (vie) | 0.964912 | 0.982143 | 0.973451 |
| Vlaams (vls) | 1.000000 | 0.982143 | 0.990991 |
| Volapük (vol) | 1.000000 | 1.000000 | 1.000000 |
| Võro (vro) | 0.964286 | 0.964286 | 0.964286 |
| Waray (war) | 1.000000 | 0.982143 | 0.990991 |
| Walloon (wln) | 1.000000 | 1.000000 | 1.000000 |
| Wolof (wol) | 0.981481 | 0.963636 | 0.972477 |
| Wu Chinese (wuu) | 0.981481 | 0.946429 | 0.963636 |
| Xhosa (xho) | 1.000000 | 0.964286 | 0.981818 |
| Mingrelian (xmf) | 1.000000 | 0.964286 | 0.981818 |
| Yiddish (yid) | 1.000000 | 1.000000 | 1.000000 |
| Yoruba (yor) | 0.964912 | 0.982143 | 0.973451 |
| Zeeuws (zea) | 1.000000 | 0.982143 | 0.990991 |
| Cantonese (zh-yue) | 0.981481 | 0.946429 | 0.963636 |
| Standard Chinese (zho) | 0.932203 | 0.982143 | 0.956522 |
| accuracy | 0.963055 | 0.963055 | 0.963055 |
| macro avg | 0.966424 | 0.963216 | 0.963891 |
| weighted avg | 0.966040 | 0.963055 | 0.963606 |
### By Sentence
| language | precision | recall | f1-score |
|:--------------------------------------:|:---------:|:--------:|:--------:|
| Achinese (ace) | 0.754545 | 0.873684 | 0.809756 |
| Afrikaans (afr) | 0.708955 | 0.940594 | 0.808511 |
| Alemannic German (als) | 0.870130 | 0.752809 | 0.807229 |
| Amharic (amh) | 1.000000 | 0.820000 | 0.901099 |
| Old English (ang) | 0.966667 | 0.906250 | 0.935484 |
| Arabic (ara) | 0.907692 | 0.967213 | 0.936508 |
| Aragonese (arg) | 0.921569 | 0.959184 | 0.940000 |
| Egyptian Arabic (arz) | 0.964286 | 0.843750 | 0.900000 |
| Assamese (asm) | 0.964286 | 0.870968 | 0.915254 |
| Asturian (ast) | 0.880000 | 0.795181 | 0.835443 |
| Avar (ava) | 0.864198 | 0.843373 | 0.853659 |
| Aymara (aym) | 1.000000 | 0.901961 | 0.948454 |
| South Azerbaijani (azb) | 0.979381 | 0.989583 | 0.984456 |
| Azerbaijani (aze) | 0.989899 | 0.960784 | 0.975124 |
| Bashkir (bak) | 0.837209 | 0.857143 | 0.847059 |
| Bavarian (bar) | 0.741935 | 0.766667 | 0.754098 |
| Central Bikol (bcl) | 0.962963 | 0.928571 | 0.945455 |
| Belarusian (Taraschkewiza) (be-tarask) | 0.857143 | 0.733333 | 0.790419 |
| Belarusian (bel) | 0.775510 | 0.752475 | 0.763819 |
| Bengali (ben) | 0.861111 | 0.911765 | 0.885714 |
| Bhojpuri (bho) | 0.965517 | 0.933333 | 0.949153 |
| Banjar (bjn) | 0.891566 | 0.880952 | 0.886228 |
| Tibetan (bod) | 1.000000 | 1.000000 | 1.000000 |
| Bosnian (bos) | 0.375000 | 0.323077 | 0.347107 |
| Bishnupriya (bpy) | 0.986301 | 1.000000 | 0.993103 |
| Breton (bre) | 0.951613 | 0.893939 | 0.921875 |
| Bulgarian (bul) | 0.945055 | 0.877551 | 0.910053 |
| Buryat (bxr) | 0.955556 | 0.843137 | 0.895833 |
| Catalan (cat) | 0.692308 | 0.750000 | 0.720000 |
| Chavacano (cbk) | 0.842857 | 0.641304 | 0.728395 |
| Min Dong (cdo) | 0.972973 | 1.000000 | 0.986301 |
| Cebuano (ceb) | 0.981308 | 0.954545 | 0.967742 |
| Czech (ces) | 0.944444 | 0.915385 | 0.929687 |
| Chechen (che) | 0.875000 | 0.700000 | 0.777778 |
| Cherokee (chr) | 1.000000 | 0.970588 | 0.985075 |
| Chuvash (chv) | 0.875000 | 0.836957 | 0.855556 |
| Central Kurdish (ckb) | 1.000000 | 0.983051 | 0.991453 |
| Cornish (cor) | 0.979592 | 0.969697 | 0.974619 |
| Corsican (cos) | 0.986842 | 0.925926 | 0.955414 |
| Crimean Tatar (crh) | 0.958333 | 0.907895 | 0.932432 |
| Kashubian (csb) | 0.920354 | 0.904348 | 0.912281 |
| Welsh (cym) | 0.971014 | 0.943662 | 0.957143 |
| Danish (dan) | 0.865169 | 0.777778 | 0.819149 |
| German (deu) | 0.721311 | 0.822430 | 0.768559 |
| Dimli (diq) | 0.915966 | 0.923729 | 0.919831 |
| Dhivehi (div) | 1.000000 | 0.991228 | 0.995595 |
| Lower Sorbian (dsb) | 0.898876 | 0.879121 | 0.888889 |
| Doteli (dty) | 0.821429 | 0.638889 | 0.718750 |
| Emilian (egl) | 0.988095 | 0.922222 | 0.954023 |
| Modern Greek (ell) | 0.988636 | 0.966667 | 0.977528 |
| English (eng) | 0.522727 | 0.784091 | 0.627273 |
| Esperanto (epo) | 0.963855 | 0.930233 | 0.946746 |
| Estonian (est) | 0.922222 | 0.873684 | 0.897297 |
| Basque (eus) | 1.000000 | 0.941176 | 0.969697 |
| Extremaduran (ext) | 0.925373 | 0.885714 | 0.905109 |
| Faroese (fao) | 0.855072 | 0.887218 | 0.870849 |
| Persian (fas) | 0.879630 | 0.979381 | 0.926829 |
| Finnish (fin) | 0.952830 | 0.943925 | 0.948357 |
| French (fra) | 0.676768 | 0.943662 | 0.788235 |
| Arpitan (frp) | 0.867925 | 0.807018 | 0.836364 |
| Western Frisian (fry) | 0.956989 | 0.890000 | 0.922280 |
| Friulian (fur) | 1.000000 | 0.857143 | 0.923077 |
| Gagauz (gag) | 0.939024 | 0.802083 | 0.865169 |
| Scottish Gaelic (gla) | 1.000000 | 0.879121 | 0.935673 |
| Irish (gle) | 0.989247 | 0.958333 | 0.973545 |
| Galician (glg) | 0.910256 | 0.922078 | 0.916129 |
| Gilaki (glk) | 0.964706 | 0.872340 | 0.916201 |
| Manx (glv) | 1.000000 | 0.965517 | 0.982456 |
| Guarani (grn) | 0.983333 | 1.000000 | 0.991597 |
| Gujarati (guj) | 1.000000 | 0.991525 | 0.995745 |
| Hakka Chinese (hak) | 0.955224 | 0.955224 | 0.955224 |
| Haitian Creole (hat) | 0.833333 | 0.666667 | 0.740741 |
| Hausa (hau) | 0.936709 | 0.913580 | 0.925000 |
| Serbo-Croatian (hbs) | 0.452830 | 0.410256 | 0.430493 |
| Hebrew (heb) | 0.988235 | 0.976744 | 0.982456 |
| Fiji Hindi (hif) | 0.936709 | 0.840909 | 0.886228 |
| Hindi (hin) | 0.965517 | 0.756757 | 0.848485 |
| Croatian (hrv) | 0.443820 | 0.537415 | 0.486154 |
| Upper Sorbian (hsb) | 0.951613 | 0.830986 | 0.887218 |
| Hungarian (hun) | 0.854701 | 0.909091 | 0.881057 |
| Armenian (hye) | 1.000000 | 0.816327 | 0.898876 |
| Igbo (ibo) | 0.974359 | 0.926829 | 0.950000 |
| Ido (ido) | 0.975000 | 0.987342 | 0.981132 |
| Interlingue (ile) | 0.880597 | 0.921875 | 0.900763 |
| Iloko (ilo) | 0.882353 | 0.821918 | 0.851064 |
| Interlingua (ina) | 0.952381 | 0.895522 | 0.923077 |
| Indonesian (ind) | 0.606383 | 0.695122 | 0.647727 |
| Icelandic (isl) | 0.978261 | 0.882353 | 0.927835 |
| Italian (ita) | 0.910448 | 0.910448 | 0.910448 |
| Jamaican Patois (jam) | 0.988764 | 0.967033 | 0.977778 |
| Javanese (jav) | 0.903614 | 0.862069 | 0.882353 |
| Lojban (jbo) | 0.943878 | 0.929648 | 0.936709 |
| Japanese (jpn) | 1.000000 | 0.764706 | 0.866667 |
| Karakalpak (kaa) | 0.940171 | 0.901639 | 0.920502 |
| Kabyle (kab) | 0.985294 | 0.837500 | 0.905405 |
| Kannada (kan) | 0.975806 | 0.975806 | 0.975806 |
| Georgian (kat) | 0.953704 | 0.903509 | 0.927928 |
| Kazakh (kaz) | 0.934579 | 0.877193 | 0.904977 |
| Kabardian (kbd) | 0.987952 | 0.953488 | 0.970414 |
| Central Khmer (khm) | 0.928571 | 0.829787 | 0.876404 |
| Kinyarwanda (kin) | 0.953125 | 0.938462 | 0.945736 |
| Kirghiz (kir) | 0.927632 | 0.881250 | 0.903846 |
| Komi-Permyak (koi) | 0.750000 | 0.776786 | 0.763158 |
| Konkani (kok) | 0.893491 | 0.872832 | 0.883041 |
| Komi (kom) | 0.734177 | 0.690476 | 0.711656 |
| Korean (kor) | 0.989899 | 0.989899 | 0.989899 |
| Karachay-Balkar (krc) | 0.928571 | 0.917647 | 0.923077 |
| Ripuarisch (ksh) | 0.915789 | 0.896907 | 0.906250 |
| Kurdish (kur) | 0.977528 | 0.935484 | 0.956044 |
| Ladino (lad) | 0.985075 | 0.904110 | 0.942857 |
| Lao (lao) | 0.896552 | 0.812500 | 0.852459 |
| Latin (lat) | 0.741935 | 0.831325 | 0.784091 |
| Latvian (lav) | 0.710526 | 0.878049 | 0.785455 |
| Lezghian (lez) | 0.975309 | 0.877778 | 0.923977 |
| Ligurian (lij) | 0.951807 | 0.897727 | 0.923977 |
| Limburgan (lim) | 0.909091 | 0.921053 | 0.915033 |
| Lingala (lin) | 0.942857 | 0.814815 | 0.874172 |
| Lithuanian (lit) | 0.892857 | 0.925926 | 0.909091 |
| Lombard (lmo) | 0.766234 | 0.951613 | 0.848921 |
| Northern Luri (lrc) | 0.972222 | 0.875000 | 0.921053 |
| Latgalian (ltg) | 0.895349 | 0.865169 | 0.880000 |
| Luxembourgish (ltz) | 0.882353 | 0.750000 | 0.810811 |
| Luganda (lug) | 0.946429 | 0.883333 | 0.913793 |
| Literary Chinese (lzh) | 1.000000 | 1.000000 | 1.000000 |
| Maithili (mai) | 0.893617 | 0.823529 | 0.857143 |
| Malayalam (mal) | 1.000000 | 0.975000 | 0.987342 |
| Banyumasan (map-bms) | 0.924242 | 0.772152 | 0.841379 |
| Marathi (mar) | 0.874126 | 0.919118 | 0.896057 |
| Moksha (mdf) | 0.771242 | 0.830986 | 0.800000 |
| Eastern Mari (mhr) | 0.820000 | 0.860140 | 0.839590 |
| Minangkabau (min) | 0.973684 | 0.973684 | 0.973684 |
| Macedonian (mkd) | 0.895652 | 0.953704 | 0.923767 |
| Malagasy (mlg) | 1.000000 | 0.966102 | 0.982759 |
| Maltese (mlt) | 0.987952 | 0.964706 | 0.976190 |
| Min Nan Chinese (nan) | 0.975000 | 1.000000 | 0.987342 |
| Mongolian (mon) | 0.954545 | 0.933333 | 0.943820 |
| Maori (mri) | 0.985294 | 1.000000 | 0.992593 |
| Western Mari (mrj) | 0.966292 | 0.914894 | 0.939891 |
| Malay (msa) | 0.770270 | 0.695122 | 0.730769 |
| Mirandese (mwl) | 0.970588 | 0.891892 | 0.929577 |
| Burmese (mya) | 1.000000 | 0.964286 | 0.981818 |
| Erzya (myv) | 0.535714 | 0.681818 | 0.600000 |
| Mazanderani (mzn) | 0.968750 | 0.898551 | 0.932331 |
| Neapolitan (nap) | 0.892308 | 0.865672 | 0.878788 |
| Navajo (nav) | 0.984375 | 0.984375 | 0.984375 |
| Classical Nahuatl (nci) | 0.901408 | 0.761905 | 0.825806 |
| Low German (nds) | 0.896226 | 0.913462 | 0.904762 |
| West Low German (nds-nl) | 0.873563 | 0.835165 | 0.853933 |
| Nepali (macrolanguage) (nep) | 0.704545 | 0.861111 | 0.775000 |
| Newari (new) | 0.920000 | 0.741935 | 0.821429 |
| Dutch (nld) | 0.925926 | 0.872093 | 0.898204 |
| Norwegian Nynorsk (nno) | 0.847059 | 0.808989 | 0.827586 |
| Bokmål (nob) | 0.861386 | 0.852941 | 0.857143 |
| Narom (nrm) | 0.966667 | 0.983051 | 0.974790 |
| Northern Sotho (nso) | 0.897436 | 0.921053 | 0.909091 |
| Occitan (oci) | 0.958333 | 0.696970 | 0.807018 |
| Livvi-Karelian (olo) | 0.967742 | 0.937500 | 0.952381 |
| Oriya (ori) | 0.933333 | 1.000000 | 0.965517 |
| Oromo (orm) | 0.977528 | 0.915789 | 0.945652 |
| Ossetian (oss) | 0.958333 | 0.841463 | 0.896104 |
| Pangasinan (pag) | 0.847328 | 0.909836 | 0.877470 |
| Pampanga (pam) | 0.969697 | 0.780488 | 0.864865 |
| Panjabi (pan) | 1.000000 | 1.000000 | 1.000000 |
| Papiamento (pap) | 0.876190 | 0.920000 | 0.897561 |
| Picard (pcd) | 0.707317 | 0.568627 | 0.630435 |
| Pennsylvania German (pdc) | 0.827273 | 0.827273 | 0.827273 |
| Palatine German (pfl) | 0.882353 | 0.914634 | 0.898204 |
| Western Panjabi (pnb) | 0.964286 | 0.931034 | 0.947368 |
| Polish (pol) | 0.859813 | 0.910891 | 0.884615 |
| Portuguese (por) | 0.535714 | 0.833333 | 0.652174 |
| Pushto (pus) | 0.989362 | 0.902913 | 0.944162 |
| Quechua (que) | 0.979167 | 0.903846 | 0.940000 |
| Tarantino dialect (roa-tara) | 0.964912 | 0.901639 | 0.932203 |
| Romansh (roh) | 0.914894 | 0.895833 | 0.905263 |
| Romanian (ron) | 0.880597 | 0.880597 | 0.880597 |
| Rusyn (rue) | 0.932584 | 0.805825 | 0.864583 |
| Aromanian (rup) | 0.783333 | 0.758065 | 0.770492 |
| Russian (rus) | 0.517986 | 0.765957 | 0.618026 |
| Yakut (sah) | 0.954023 | 0.922222 | 0.937853 |
| Sanskrit (san) | 0.866667 | 0.951220 | 0.906977 |
| Sicilian (scn) | 0.984375 | 0.940299 | 0.961832 |
| Scots (sco) | 0.851351 | 0.900000 | 0.875000 |
| Samogitian (sgs) | 0.977011 | 0.876289 | 0.923913 |
| Sinhala (sin) | 0.406154 | 0.985075 | 0.575163 |
| Slovak (slk) | 0.956989 | 0.872549 | 0.912821 |
| Slovene (slv) | 0.907216 | 0.854369 | 0.880000 |
| Northern Sami (sme) | 0.949367 | 0.892857 | 0.920245 |
| Shona (sna) | 0.936508 | 0.855072 | 0.893939 |
| Sindhi (snd) | 0.984962 | 0.992424 | 0.988679 |
| Somali (som) | 0.949153 | 0.848485 | 0.896000 |
| Spanish (spa) | 0.584158 | 0.746835 | 0.655556 |
| Albanian (sqi) | 0.988095 | 0.912088 | 0.948571 |
| Sardinian (srd) | 0.957746 | 0.931507 | 0.944444 |
| Sranan (srn) | 0.985714 | 0.945205 | 0.965035 |
| Serbian (srp) | 0.950980 | 0.889908 | 0.919431 |
| Saterfriesisch (stq) | 0.962500 | 0.875000 | 0.916667 |
| Sundanese (sun) | 0.778846 | 0.910112 | 0.839378 |
| Swahili (macrolanguage) (swa) | 0.915493 | 0.878378 | 0.896552 |
| Swedish (swe) | 0.989247 | 0.958333 | 0.973545 |
| Silesian (szl) | 0.944444 | 0.904255 | 0.923913 |
| Tamil (tam) | 0.990000 | 0.970588 | 0.980198 |
| Tatar (tat) | 0.942029 | 0.902778 | 0.921986 |
| Tulu (tcy) | 0.980519 | 0.967949 | 0.974194 |
| Telugu (tel) | 0.965986 | 0.965986 | 0.965986 |
| Tetum (tet) | 0.898734 | 0.855422 | 0.876543 |
| Tajik (tgk) | 0.974684 | 0.939024 | 0.956522 |
| Tagalog (tgl) | 0.965909 | 0.934066 | 0.949721 |
| Thai (tha) | 0.923077 | 0.882353 | 0.902256 |
| Tongan (ton) | 0.970149 | 0.890411 | 0.928571 |
| Tswana (tsn) | 0.888889 | 0.926316 | 0.907216 |
| Turkmen (tuk) | 0.968000 | 0.889706 | 0.927203 |
| Turkish (tur) | 0.871287 | 0.926316 | 0.897959 |
| Tuvan (tyv) | 0.948454 | 0.859813 | 0.901961 |
| Udmurt (udm) | 0.989362 | 0.894231 | 0.939394 |
| Uighur (uig) | 1.000000 | 0.953333 | 0.976109 |
| Ukrainian (ukr) | 0.893617 | 0.875000 | 0.884211 |
| Urdu (urd) | 1.000000 | 1.000000 | 1.000000 |
| Uzbek (uzb) | 0.636042 | 0.886700 | 0.740741 |
| Venetian (vec) | 1.000000 | 0.941176 | 0.969697 |
| Veps (vep) | 0.858586 | 0.965909 | 0.909091 |
| Vietnamese (vie) | 1.000000 | 0.940476 | 0.969325 |
| Vlaams (vls) | 0.885714 | 0.898551 | 0.892086 |
| Volapük (vol) | 0.975309 | 0.975309 | 0.975309 |
| Võro (vro) | 0.855670 | 0.864583 | 0.860104 |
| Waray (war) | 0.972222 | 0.909091 | 0.939597 |
| Walloon (wln) | 0.742138 | 0.893939 | 0.810997 |
| Wolof (wol) | 0.882979 | 0.954023 | 0.917127 |
| Wu Chinese (wuu) | 0.961538 | 0.833333 | 0.892857 |
| Xhosa (xho) | 0.934066 | 0.867347 | 0.899471 |
| Mingrelian (xmf) | 0.958333 | 0.929293 | 0.943590 |
| Yiddish (yid) | 0.984375 | 0.875000 | 0.926471 |
| Yoruba (yor) | 0.868421 | 0.857143 | 0.862745 |
| Zeeuws (zea) | 0.879518 | 0.793478 | 0.834286 |
| Cantonese (zh-yue) | 0.896552 | 0.812500 | 0.852459 |
| Standard Chinese (zho) | 0.906250 | 0.935484 | 0.920635 |
| accuracy | 0.881051 | 0.881051 | 0.881051 |
| macro avg | 0.903245 | 0.880618 | 0.888996 |
| weighted avg | 0.894174 | 0.881051 | 0.884520 |
### By Token (3 to 5)
| language | precision | recall | f1-score |
|:--------------------------------------:|:---------:|:--------:|:--------:|
| Achinese (ace) | 0.873846 | 0.827988 | 0.850299 |
| Afrikaans (afr) | 0.638060 | 0.732334 | 0.681954 |
| Alemannic German (als) | 0.673780 | 0.547030 | 0.603825 |
| Amharic (amh) | 0.997743 | 0.954644 | 0.975717 |
| Old English (ang) | 0.840816 | 0.693603 | 0.760148 |
| Arabic (ara) | 0.768737 | 0.840749 | 0.803132 |
| Aragonese (arg) | 0.493671 | 0.505181 | 0.499360 |
| Egyptian Arabic (arz) | 0.823529 | 0.741935 | 0.780606 |
| Assamese (asm) | 0.948454 | 0.893204 | 0.920000 |
| Asturian (ast) | 0.490000 | 0.508299 | 0.498982 |
| Avar (ava) | 0.813636 | 0.655678 | 0.726166 |
| Aymara (aym) | 0.795833 | 0.779592 | 0.787629 |
| South Azerbaijani (azb) | 0.832836 | 0.863777 | 0.848024 |
| Azerbaijani (aze) | 0.867470 | 0.800000 | 0.832370 |
| Bashkir (bak) | 0.851852 | 0.750000 | 0.797688 |
| Bavarian (bar) | 0.560897 | 0.522388 | 0.540958 |
| Central Bikol (bcl) | 0.708229 | 0.668235 | 0.687651 |
| Belarusian (Taraschkewiza) (be-tarask) | 0.615635 | 0.526462 | 0.567568 |
| Belarusian (bel) | 0.539952 | 0.597855 | 0.567430 |
| Bengali (ben) | 0.830275 | 0.885086 | 0.856805 |
| Bhojpuri (bho) | 0.723118 | 0.691517 | 0.706965 |
| Banjar (bjn) | 0.619586 | 0.726269 | 0.668699 |
| Tibetan (bod) | 0.999537 | 0.991728 | 0.995617 |
| Bosnian (bos) | 0.330849 | 0.403636 | 0.363636 |
| Bishnupriya (bpy) | 0.941634 | 0.949020 | 0.945312 |
| Breton (bre) | 0.772222 | 0.745308 | 0.758527 |
| Bulgarian (bul) | 0.771505 | 0.706897 | 0.737789 |
| Buryat (bxr) | 0.741935 | 0.753149 | 0.747500 |
| Catalan (cat) | 0.528716 | 0.610136 | 0.566516 |
| Chavacano (cbk) | 0.409449 | 0.312625 | 0.354545 |
| Min Dong (cdo) | 0.951264 | 0.936057 | 0.943599 |
| Cebuano (ceb) | 0.888298 | 0.876640 | 0.882431 |
| Czech (ces) | 0.806045 | 0.758294 | 0.781441 |
| Chechen (che) | 0.857143 | 0.600000 | 0.705882 |
| Cherokee (chr) | 0.997840 | 0.952577 | 0.974684 |
| Chuvash (chv) | 0.874346 | 0.776744 | 0.822660 |
| Central Kurdish (ckb) | 0.984848 | 0.953545 | 0.968944 |
| Cornish (cor) | 0.747596 | 0.807792 | 0.776529 |
| Corsican (cos) | 0.673913 | 0.708571 | 0.690808 |
| Crimean Tatar (crh) | 0.498801 | 0.700337 | 0.582633 |
| Kashubian (csb) | 0.797059 | 0.794721 | 0.795888 |
| Welsh (cym) | 0.829609 | 0.841360 | 0.835443 |
| Danish (dan) | 0.649789 | 0.622222 | 0.635707 |
| German (deu) | 0.559406 | 0.763514 | 0.645714 |
| Dimli (diq) | 0.835580 | 0.763547 | 0.797941 |
| Dhivehi (div) | 1.000000 | 0.980645 | 0.990228 |
| Lower Sorbian (dsb) | 0.740484 | 0.694805 | 0.716918 |
| Doteli (dty) | 0.616314 | 0.527132 | 0.568245 |
| Emilian (egl) | 0.822993 | 0.769625 | 0.795414 |
| Modern Greek (ell) | 0.972043 | 0.963753 | 0.967880 |
| English (eng) | 0.260492 | 0.724346 | 0.383183 |
| Esperanto (epo) | 0.766764 | 0.716621 | 0.740845 |
| Estonian (est) | 0.698885 | 0.673835 | 0.686131 |
| Basque (eus) | 0.882716 | 0.841176 | 0.861446 |
| Extremaduran (ext) | 0.570605 | 0.511628 | 0.539510 |
| Faroese (fao) | 0.773987 | 0.784017 | 0.778970 |
| Persian (fas) | 0.709836 | 0.809346 | 0.756332 |
| Finnish (fin) | 0.866261 | 0.796089 | 0.829694 |
| French (fra) | 0.496263 | 0.700422 | 0.580927 |
| Arpitan (frp) | 0.663366 | 0.584302 | 0.621329 |
| Western Frisian (fry) | 0.750000 | 0.756148 | 0.753061 |
| Friulian (fur) | 0.713555 | 0.675545 | 0.694030 |
| Gagauz (gag) | 0.728125 | 0.677326 | 0.701807 |
| Scottish Gaelic (gla) | 0.831601 | 0.817996 | 0.824742 |
| Irish (gle) | 0.868852 | 0.801296 | 0.833708 |
| Galician (glg) | 0.469816 | 0.454315 | 0.461935 |
| Gilaki (glk) | 0.703883 | 0.687204 | 0.695444 |
| Manx (glv) | 0.873047 | 0.886905 | 0.879921 |
| Guarani (grn) | 0.848580 | 0.793510 | 0.820122 |
| Gujarati (guj) | 0.995643 | 0.926978 | 0.960084 |
| Hakka Chinese (hak) | 0.898403 | 0.904971 | 0.901675 |
| Haitian Creole (hat) | 0.719298 | 0.518987 | 0.602941 |
| Hausa (hau) | 0.815353 | 0.829114 | 0.822176 |
| Serbo-Croatian (hbs) | 0.343465 | 0.244589 | 0.285714 |
| Hebrew (heb) | 0.891304 | 0.933941 | 0.912125 |
| Fiji Hindi (hif) | 0.662577 | 0.664615 | 0.663594 |
| Hindi (hin) | 0.782301 | 0.778169 | 0.780229 |
| Croatian (hrv) | 0.360308 | 0.374000 | 0.367026 |
| Upper Sorbian (hsb) | 0.745763 | 0.611111 | 0.671756 |
| Hungarian (hun) | 0.876812 | 0.846154 | 0.861210 |
| Armenian (hye) | 0.988201 | 0.917808 | 0.951705 |
| Igbo (ibo) | 0.825397 | 0.696429 | 0.755448 |
| Ido (ido) | 0.760479 | 0.814103 | 0.786378 |
| Interlingue (ile) | 0.701299 | 0.580645 | 0.635294 |
| Iloko (ilo) | 0.688356 | 0.844538 | 0.758491 |
| Interlingua (ina) | 0.577889 | 0.588235 | 0.583016 |
| Indonesian (ind) | 0.415879 | 0.514019 | 0.459770 |
| Icelandic (isl) | 0.855263 | 0.790754 | 0.821745 |
| Italian (ita) | 0.474576 | 0.561247 | 0.514286 |
| Jamaican Patois (jam) | 0.826087 | 0.791667 | 0.808511 |
| Javanese (jav) | 0.670130 | 0.658163 | 0.664093 |
| Lojban (jbo) | 0.896861 | 0.917431 | 0.907029 |
| Japanese (jpn) | 0.931373 | 0.848214 | 0.887850 |
| Karakalpak (kaa) | 0.790393 | 0.827744 | 0.808637 |
| Kabyle (kab) | 0.828571 | 0.759162 | 0.792350 |
| Kannada (kan) | 0.879357 | 0.847545 | 0.863158 |
| Georgian (kat) | 0.916399 | 0.907643 | 0.912000 |
| Kazakh (kaz) | 0.900901 | 0.819672 | 0.858369 |
| Kabardian (kbd) | 0.923345 | 0.892256 | 0.907534 |
| Central Khmer (khm) | 0.976667 | 0.816156 | 0.889226 |
| Kinyarwanda (kin) | 0.824324 | 0.726190 | 0.772152 |
| Kirghiz (kir) | 0.674766 | 0.779698 | 0.723447 |
| Komi-Permyak (koi) | 0.652830 | 0.633700 | 0.643123 |
| Konkani (kok) | 0.778865 | 0.728938 | 0.753075 |
| Komi (kom) | 0.737374 | 0.572549 | 0.644592 |
| Korean (kor) | 0.984615 | 0.967603 | 0.976035 |
| Karachay-Balkar (krc) | 0.869416 | 0.857627 | 0.863481 |
| Ripuarisch (ksh) | 0.709859 | 0.649485 | 0.678331 |
| Kurdish (kur) | 0.883777 | 0.862884 | 0.873206 |
| Ladino (lad) | 0.660920 | 0.576441 | 0.615797 |
| Lao (lao) | 0.986175 | 0.918455 | 0.951111 |
| Latin (lat) | 0.581250 | 0.636986 | 0.607843 |
| Latvian (lav) | 0.824513 | 0.797844 | 0.810959 |
| Lezghian (lez) | 0.898955 | 0.793846 | 0.843137 |
| Ligurian (lij) | 0.662903 | 0.677100 | 0.669927 |
| Limburgan (lim) | 0.615385 | 0.581818 | 0.598131 |
| Lingala (lin) | 0.836207 | 0.763780 | 0.798354 |
| Lithuanian (lit) | 0.756329 | 0.804714 | 0.779772 |
| Lombard (lmo) | 0.556818 | 0.536986 | 0.546722 |
| Northern Luri (lrc) | 0.838574 | 0.753296 | 0.793651 |
| Latgalian (ltg) | 0.759531 | 0.755102 | 0.757310 |
| Luxembourgish (ltz) | 0.645062 | 0.614706 | 0.629518 |
| Luganda (lug) | 0.787535 | 0.805797 | 0.796562 |
| Literary Chinese (lzh) | 0.921951 | 0.949749 | 0.935644 |
| Maithili (mai) | 0.777778 | 0.761658 | 0.769634 |
| Malayalam (mal) | 0.993377 | 0.949367 | 0.970874 |
| Banyumasan (map-bms) | 0.531429 | 0.453659 | 0.489474 |
| Marathi (mar) | 0.748744 | 0.818681 | 0.782152 |
| Moksha (mdf) | 0.728745 | 0.800000 | 0.762712 |
| Eastern Mari (mhr) | 0.790323 | 0.760870 | 0.775316 |
| Minangkabau (min) | 0.953271 | 0.886957 | 0.918919 |
| Macedonian (mkd) | 0.816399 | 0.849722 | 0.832727 |
| Malagasy (mlg) | 0.925187 | 0.918317 | 0.921739 |
| Maltese (mlt) | 0.869421 | 0.890017 | 0.879599 |
| Min Nan Chinese (nan) | 0.743707 | 0.820707 | 0.780312 |
| Mongolian (mon) | 0.852194 | 0.838636 | 0.845361 |
| Maori (mri) | 0.934726 | 0.937173 | 0.935948 |
| Western Mari (mrj) | 0.818792 | 0.827119 | 0.822934 |
| Malay (msa) | 0.508065 | 0.376119 | 0.432247 |
| Mirandese (mwl) | 0.650407 | 0.685225 | 0.667362 |
| Burmese (mya) | 0.995968 | 0.972441 | 0.984064 |
| Erzya (myv) | 0.475783 | 0.503012 | 0.489019 |
| Mazanderani (mzn) | 0.775362 | 0.701639 | 0.736661 |
| Neapolitan (nap) | 0.628993 | 0.595349 | 0.611708 |
| Navajo (nav) | 0.955882 | 0.937500 | 0.946602 |
| Classical Nahuatl (nci) | 0.679758 | 0.589005 | 0.631136 |
| Low German (nds) | 0.669789 | 0.690821 | 0.680143 |
| West Low German (nds-nl) | 0.513889 | 0.504545 | 0.509174 |
| Nepali (macrolanguage) (nep) | 0.640476 | 0.649758 | 0.645084 |
| Newari (new) | 0.928571 | 0.745902 | 0.827273 |
| Dutch (nld) | 0.553763 | 0.553763 | 0.553763 |
| Norwegian Nynorsk (nno) | 0.569277 | 0.519231 | 0.543103 |
| Bokmål (nob) | 0.519856 | 0.562500 | 0.540338 |
| Narom (nrm) | 0.691275 | 0.605882 | 0.645768 |
| Northern Sotho (nso) | 0.950276 | 0.815166 | 0.877551 |
| Occitan (oci) | 0.483444 | 0.366834 | 0.417143 |
| Livvi-Karelian (olo) | 0.816850 | 0.790780 | 0.803604 |
| Oriya (ori) | 0.981481 | 0.963636 | 0.972477 |
| Oromo (orm) | 0.885714 | 0.829218 | 0.856536 |
| Ossetian (oss) | 0.822006 | 0.855219 | 0.838284 |
| Pangasinan (pag) | 0.842105 | 0.715655 | 0.773748 |
| Pampanga (pam) | 0.770000 | 0.435028 | 0.555957 |
| Panjabi (pan) | 0.996154 | 0.984791 | 0.990440 |
| Papiamento (pap) | 0.674672 | 0.661670 | 0.668108 |
| Picard (pcd) | 0.407895 | 0.356322 | 0.380368 |
| Pennsylvania German (pdc) | 0.487047 | 0.509485 | 0.498013 |
| Palatine German (pfl) | 0.614173 | 0.570732 | 0.591656 |
| Western Panjabi (pnb) | 0.926267 | 0.887417 | 0.906426 |
| Polish (pol) | 0.797059 | 0.734417 | 0.764457 |
| Portuguese (por) | 0.500914 | 0.586724 | 0.540434 |
| Pushto (pus) | 0.941489 | 0.898477 | 0.919481 |
| Quechua (que) | 0.854167 | 0.797665 | 0.824950 |
| Tarantino dialect (roa-tara) | 0.669794 | 0.724138 | 0.695906 |
| Romansh (roh) | 0.745527 | 0.760649 | 0.753012 |
| Romanian (ron) | 0.805486 | 0.769048 | 0.786845 |
| Rusyn (rue) | 0.718543 | 0.645833 | 0.680251 |
| Aromanian (rup) | 0.288482 | 0.730245 | 0.413580 |
| Russian (rus) | 0.530120 | 0.690583 | 0.599805 |
| Yakut (sah) | 0.853521 | 0.865714 | 0.859574 |
| Sanskrit (san) | 0.931343 | 0.896552 | 0.913616 |
| Sicilian (scn) | 0.734139 | 0.618321 | 0.671271 |
| Scots (sco) | 0.571429 | 0.540816 | 0.555701 |
| Samogitian (sgs) | 0.829167 | 0.748120 | 0.786561 |
| Sinhala (sin) | 0.909474 | 0.935065 | 0.922092 |
| Slovak (slk) | 0.738235 | 0.665782 | 0.700139 |
| Slovene (slv) | 0.671123 | 0.662269 | 0.666667 |
| Northern Sami (sme) | 0.800676 | 0.825784 | 0.813036 |
| Shona (sna) | 0.761702 | 0.724696 | 0.742739 |
| Sindhi (snd) | 0.950172 | 0.946918 | 0.948542 |
| Somali (som) | 0.849462 | 0.802030 | 0.825065 |
| Spanish (spa) | 0.325234 | 0.413302 | 0.364017 |
| Albanian (sqi) | 0.875899 | 0.832479 | 0.853637 |
| Sardinian (srd) | 0.750000 | 0.711061 | 0.730012 |
| Sranan (srn) | 0.888889 | 0.771084 | 0.825806 |
| Serbian (srp) | 0.824561 | 0.814356 | 0.819427 |
| Saterfriesisch (stq) | 0.790087 | 0.734417 | 0.761236 |
| Sundanese (sun) | 0.764192 | 0.631769 | 0.691700 |
| Swahili (macrolanguage) (swa) | 0.763496 | 0.796247 | 0.779528 |
| Swedish (swe) | 0.838284 | 0.723647 | 0.776758 |
| Silesian (szl) | 0.819788 | 0.750809 | 0.783784 |
| Tamil (tam) | 0.985765 | 0.955172 | 0.970228 |
| Tatar (tat) | 0.469780 | 0.795349 | 0.590674 |
| Tulu (tcy) | 0.893300 | 0.873786 | 0.883436 |
| Telugu (tel) | 1.000000 | 0.913690 | 0.954899 |
| Tetum (tet) | 0.765116 | 0.744344 | 0.754587 |
| Tajik (tgk) | 0.828418 | 0.813158 | 0.820717 |
| Tagalog (tgl) | 0.751468 | 0.757396 | 0.754420 |
| Thai (tha) | 0.933884 | 0.807143 | 0.865900 |
| Tongan (ton) | 0.920245 | 0.923077 | 0.921659 |
| Tswana (tsn) | 0.873397 | 0.889070 | 0.881164 |
| Turkmen (tuk) | 0.898438 | 0.837887 | 0.867107 |
| Turkish (tur) | 0.666667 | 0.716981 | 0.690909 |
| Tuvan (tyv) | 0.857143 | 0.805063 | 0.830287 |
| Udmurt (udm) | 0.865517 | 0.756024 | 0.807074 |
| Uighur (uig) | 0.991597 | 0.967213 | 0.979253 |
| Ukrainian (ukr) | 0.771341 | 0.702778 | 0.735465 |
| Urdu (urd) | 0.877647 | 0.855505 | 0.866434 |
| Uzbek (uzb) | 0.655652 | 0.797040 | 0.719466 |
| Venetian (vec) | 0.611111 | 0.527233 | 0.566082 |
| Veps (vep) | 0.672862 | 0.688213 | 0.680451 |
| Vietnamese (vie) | 0.932406 | 0.914230 | 0.923228 |
| Vlaams (vls) | 0.594427 | 0.501305 | 0.543909 |
| Volapük (vol) | 0.765625 | 0.942308 | 0.844828 |
| Võro (vro) | 0.797203 | 0.740260 | 0.767677 |
| Waray (war) | 0.930876 | 0.930876 | 0.930876 |
| Walloon (wln) | 0.636804 | 0.693931 | 0.664141 |
| Wolof (wol) | 0.864220 | 0.845601 | 0.854809 |
| Wu Chinese (wuu) | 0.848921 | 0.830986 | 0.839858 |
| Xhosa (xho) | 0.837398 | 0.759214 | 0.796392 |
| Mingrelian (xmf) | 0.943396 | 0.874126 | 0.907441 |
| Yiddish (yid) | 0.955729 | 0.897311 | 0.925599 |
| Yoruba (yor) | 0.812010 | 0.719907 | 0.763190 |
| Zeeuws (zea) | 0.617737 | 0.550409 | 0.582133 |
| Cantonese (zh-yue) | 0.859649 | 0.649007 | 0.739623 |
| Standard Chinese (zho) | 0.845528 | 0.781955 | 0.812500 |
| accuracy | 0.749527 | 0.749527 | 0.749527 |
| macro avg | 0.762866 | 0.742101 | 0.749261 |
| weighted avg | 0.762006 | 0.749527 | 0.752910 |
## Questions?
Post a Github issue from [HERE](https://github.com/m3hrdadfi/zabanshenas/issues). |
manishiitg/output | 5ddc5ec9f5bf098c3e9de99c27773112ce34c510 | 2021-05-20T17:44:39.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | manishiitg | null | manishiitg/output | 15 | null | transformers | 9,539 | Entry not found |
mbeukman/xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-swahili | ab8ad3c070d1fea9b697c2870129bca2d4a6f760 | 2021-11-25T09:04:07.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"sw",
"dataset:masakhaner",
"arxiv:2103.11811",
"transformers",
"NER",
"autotrain_compatible"
]
| token-classification | false | mbeukman | null | mbeukman/xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-swahili | 15 | null | transformers | 9,540 | ---
language:
- sw
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa , watu takriban 14 zaidi wamepata maambukizi ya Covid - 19 ."
---
# xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-swahili
This is a token classification (specifically NER) model obtained by fine-tuning [xlm-roberta-base-finetuned-kinyarwanda](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-kinyarwanda) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Swahili part.
More information, and other similar models can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer based and was fine-tuned on the MasakhaNER dataset. It is a named entity recognition dataset, containing mostly news articles in 10 different African languages.
The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, a batch size of 32 and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on the test set).
This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and reporting resources used, we list here how long the training process took, as well as what the minimum resources would be to reproduce this. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
## Intended Use
This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as generalisability and overall performance are limited. In particular, this is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words and entities that were not contained in the training data. This could bias the models towards not finding, e.g. names of people that have many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon, and may not have been found in the training data (due to e.g. different languages) would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes from only publicly available news sources, the only available data should cover public figures and those that agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics are on the test set for MasakhaNER, so the data distribution is similar to the training set, so these results do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
## Caveats and Recommendations
In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, then that might need to be taken into account and addressed, by for example collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category.
This model can predict the following label for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-swahili) (This model) | [kin](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-kinyarwanda) | swa | 87.26 | 85.15 | 89.48 | 83.00 | 91.00 | 75.00 | 93.00 |
| [xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili) | [hau](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-hausa) | swa | 88.36 | 86.95 | 89.82 | 86.00 | 91.00 | 77.00 | 94.00 |
| [xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili) | [ibo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-igbo) | swa | 87.75 | 86.55 | 88.97 | 85.00 | 92.00 | 77.00 | 91.00 |
| [xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili) | [lug](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luganda) | swa | 88.93 | 87.64 | 90.25 | 83.00 | 92.00 | 79.00 | 95.00 |
| [xlm-roberta-base-finetuned-luo-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luo-finetuned-ner-swahili) | [luo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luo) | swa | 87.93 | 86.91 | 88.97 | 83.00 | 91.00 | 76.00 | 94.00 |
| [xlm-roberta-base-finetuned-naija-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-naija-finetuned-ner-swahili) | [pcm](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-naija) | swa | 87.26 | 85.15 | 89.48 | 83.00 | 91.00 | 75.00 | 93.00 |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-swahili) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | swa | 90.36 | 88.59 | 92.20 | 86.00 | 93.00 | 79.00 | 96.00 |
| [xlm-roberta-base-finetuned-wolof-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-wolof-finetuned-ner-swahili) | [wol](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-wolof) | swa | 87.80 | 86.50 | 89.14 | 86.00 | 90.00 | 78.00 | 93.00 |
| [xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili) | [yor](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-yoruba) | swa | 87.73 | 86.67 | 88.80 | 85.00 | 91.00 | 75.00 | 93.00 |
| [xlm-roberta-base-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-swahili) | [base](https://huggingface.co/xlm-roberta-base) | swa | 88.71 | 86.84 | 90.67 | 83.00 | 91.00 | 79.00 | 95.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
model_name = 'mbeukman/xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-swahili'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa , watu takriban 14 zaidi wamepata maambukizi ya Covid - 19 ."
ner_results = nlp(example)
print(ner_results)
```
|
mbeukman/xlm-roberta-base-finetuned-ner-swahili | d09c7418ffdc293886fb957cc014eadacd60718b | 2021-11-25T09:04:40.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"sw",
"dataset:masakhaner",
"arxiv:2103.11811",
"transformers",
"NER",
"autotrain_compatible"
]
| token-classification | false | mbeukman | null | mbeukman/xlm-roberta-base-finetuned-ner-swahili | 15 | 1 | transformers | 9,541 | ---
language:
- sw
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa , watu takriban 14 zaidi wamepata maambukizi ya Covid - 19 ."
---
# xlm-roberta-base-finetuned-ner-swahili
This is a token classification (specifically NER) model that fine-tuned [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Swahili part.
More information, and other similar models can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer based and was fine-tuned on the MasakhaNER dataset. It is a named entity recognition dataset, containing mostly news articles in 10 different African languages.
The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, 32 batch size, 5e-5 learning rate. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on test set).
This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and reporting resources used, we list here how long the training process took, as well as what the minimum resources would be to reproduce this. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
## Intended Use
This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as generalisability and overall performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words or not contained in the training data. This could bias the models towards not finding, e.g., names of people that have many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon, and may not have been found in the training data (due to e.g. different languages), would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes only from publicly available news sources, so it should cover only public figures and those that agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics are on the test set for MasakhaNER, so the data distribution is similar to the training set, so these results do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
## Caveats and Recommendations
In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, then that might need to be taken into account and addressed, by for example collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category.
This model can predict the following label for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-swahili) (This model) | [base](https://huggingface.co/xlm-roberta-base) | swa | 88.71 | 86.84 | 90.67 | 83.00 | 91.00 | 79.00 | 95.00 |
| [xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili) | [hau](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-hausa) | swa | 88.36 | 86.95 | 89.82 | 86.00 | 91.00 | 77.00 | 94.00 |
| [xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili) | [ibo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-igbo) | swa | 87.75 | 86.55 | 88.97 | 85.00 | 92.00 | 77.00 | 91.00 |
| [xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-swahili) | [kin](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-kinyarwanda) | swa | 87.26 | 85.15 | 89.48 | 83.00 | 91.00 | 75.00 | 93.00 |
| [xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili) | [lug](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luganda) | swa | 88.93 | 87.64 | 90.25 | 83.00 | 92.00 | 79.00 | 95.00 |
| [xlm-roberta-base-finetuned-luo-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luo-finetuned-ner-swahili) | [luo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luo) | swa | 87.93 | 86.91 | 88.97 | 83.00 | 91.00 | 76.00 | 94.00 |
| [xlm-roberta-base-finetuned-naija-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-naija-finetuned-ner-swahili) | [pcm](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-naija) | swa | 87.26 | 85.15 | 89.48 | 83.00 | 91.00 | 75.00 | 93.00 |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-swahili) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | swa | 90.36 | 88.59 | 92.20 | 86.00 | 93.00 | 79.00 | 96.00 |
| [xlm-roberta-base-finetuned-wolof-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-wolof-finetuned-ner-swahili) | [wol](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-wolof) | swa | 87.80 | 86.50 | 89.14 | 86.00 | 90.00 | 78.00 | 93.00 |
| [xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili) | [yor](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-yoruba) | swa | 87.73 | 86.67 | 88.80 | 85.00 | 91.00 | 75.00 | 93.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
model_name = 'mbeukman/xlm-roberta-base-finetuned-ner-swahili'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa , watu takriban 14 zaidi wamepata maambukizi ya Covid - 19 ."
ner_results = nlp(example)
print(ner_results)
```
|
mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-luo | 728511ffd5d6856f5d75db779b2719d53ab75f07 | 2021-11-25T09:04:58.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"luo",
"dataset:masakhaner",
"arxiv:2103.11811",
"transformers",
"NER",
"autotrain_compatible"
]
| token-classification | false | mbeukman | null | mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-luo | 15 | null | transformers | 9,542 | ---
language:
- luo
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "Jii 2 moko jowito ngimagi ka machielo 1 to ohinyore marach mokalo e masira makoch mar apaya mane otimore e apaya mawuok Oyugis kochimo Chabera e sub county ma Rachuonyo East e County ma Homa Bay ewii odhiambo makawuononi"
---
# xlm-roberta-base-finetuned-swahili-finetuned-ner-luo
This is a token classification (specifically NER) model that fine-tuned [xlm-roberta-base-finetuned-swahili](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Luo part.
More information, and other similar models can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer based and was fine-tuned on the MasakhaNER dataset. It is a named entity recognition dataset, containing mostly news articles in 10 different African languages.
The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, 32 batch size, 5e-5 learning rate. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on test set).
This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and reporting resources used, we list here how long the training process took, as well as what the minimum resources would be to reproduce this. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
## Intended Use
This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as generalisability and overall performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words or not contained in the training data. This could bias the models towards not finding, e.g., names of people that have many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon, and may not have been found in the training data (due to e.g. different languages), would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes only from publicly available news sources, so it should cover only public figures and those that agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics are on the test set for MasakhaNER, so the data distribution is similar to the training set, so these results do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
## Caveats and Recommendations
In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, then that might need to be taken into account and addressed, by for example collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category.
This model can predict the following label for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-luo](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-luo) (This model) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | luo | 78.13 | 77.75 | 78.52 | 65.00 | 82.00 | 61.00 | 89.00 |
| [xlm-roberta-base-finetuned-luo-finetuned-ner-luo](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luo-finetuned-ner-luo) | [luo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luo) | luo | 78.71 | 78.91 | 78.52 | 72.00 | 84.00 | 59.00 | 87.00 |
| [xlm-roberta-base-finetuned-ner-luo](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-luo) | [base](https://huggingface.co/xlm-roberta-base) | luo | 75.99 | 76.18 | 75.80 | 71.00 | 76.00 | 62.00 | 85.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
model_name = 'mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-luo'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Jii 2 moko jowito ngimagi ka machielo 1 to ohinyore marach mokalo e masira makoch mar apaya mane otimore e apaya mawuok Oyugis kochimo Chabera e sub county ma Rachuonyo East e County ma Homa Bay ewii odhiambo makawuononi"
ner_results = nlp(example)
print(ner_results)
```
|
mbeukman/xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili | 80a0b4230b7b706e50ce5038e9f18b21f44c1198 | 2021-11-25T09:05:15.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"sw",
"dataset:masakhaner",
"arxiv:2103.11811",
"transformers",
"NER",
"autotrain_compatible"
]
| token-classification | false | mbeukman | null | mbeukman/xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili | 15 | null | transformers | 9,543 | ---
language:
- sw
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa , watu takriban 14 zaidi wamepata maambukizi ya Covid - 19 ."
---
# xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili
This is a token classification (specifically NER) model that fine-tuned [xlm-roberta-base-finetuned-yoruba](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-yoruba) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Swahili part.
More information, and other similar models can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer based and was fine-tuned on the MasakhaNER dataset. It is a named entity recognition dataset, containing mostly news articles in 10 different African languages.
The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, 32 batch size, 5e-5 learning rate. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on test set).
This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and reporting resources used, we list here how long the training process took, as well as what the minimum resources would be to reproduce this. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
## Intended Use
This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as generalisability and overall performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words or not contained in the training data. This could bias the models towards not finding, e.g., names of people that have many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon, and may not have been found in the training data (due to e.g. different languages), would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes only from publicly available news sources, so it should cover only public figures and those that agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics are on the test set for MasakhaNER, so the data distribution is similar to the training set, so these results do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
## Caveats and Recommendations
In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, then that might need to be taken into account and addressed, by for example collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category.
This model can predict the following label for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili) (This model) | [yor](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-yoruba) | swa | 87.73 | 86.67 | 88.80 | 85.00 | 91.00 | 75.00 | 93.00 |
| [xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili) | [hau](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-hausa) | swa | 88.36 | 86.95 | 89.82 | 86.00 | 91.00 | 77.00 | 94.00 |
| [xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili) | [ibo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-igbo) | swa | 87.75 | 86.55 | 88.97 | 85.00 | 92.00 | 77.00 | 91.00 |
| [xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-swahili) | [kin](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-kinyarwanda) | swa | 87.26 | 85.15 | 89.48 | 83.00 | 91.00 | 75.00 | 93.00 |
| [xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili) | [lug](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luganda) | swa | 88.93 | 87.64 | 90.25 | 83.00 | 92.00 | 79.00 | 95.00 |
| [xlm-roberta-base-finetuned-luo-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luo-finetuned-ner-swahili) | [luo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luo) | swa | 87.93 | 86.91 | 88.97 | 83.00 | 91.00 | 76.00 | 94.00 |
| [xlm-roberta-base-finetuned-naija-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-naija-finetuned-ner-swahili) | [pcm](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-naija) | swa | 87.26 | 85.15 | 89.48 | 83.00 | 91.00 | 75.00 | 93.00 |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-swahili) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | swa | 90.36 | 88.59 | 92.20 | 86.00 | 93.00 | 79.00 | 96.00 |
| [xlm-roberta-base-finetuned-wolof-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-wolof-finetuned-ner-swahili) | [wol](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-wolof) | swa | 87.80 | 86.50 | 89.14 | 86.00 | 90.00 | 78.00 | 93.00 |
| [xlm-roberta-base-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-swahili) | [base](https://huggingface.co/xlm-roberta-base) | swa | 88.71 | 86.84 | 90.67 | 83.00 | 91.00 | 79.00 | 95.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
model_name = 'mbeukman/xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa , watu takriban 14 zaidi wamepata maambukizi ya Covid - 19 ."
ner_results = nlp(example)
print(ner_results)
```
|
meghanabhange/hinglish-sentence-bert | f7e2387d18a11062ab0ba7eb2919ee70d41d3795 | 2021-05-19T23:17:18.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | meghanabhange | null | meghanabhange/hinglish-sentence-bert | 15 | null | transformers | 9,544 | Entry not found |
midas/gupshup_e2e_pegasus | 4ddf0da7354c31ae27cdce2436ba6b87c6d21537 | 2021-11-14T02:07:37.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"arxiv:1910.04073",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | midas | null | midas/gupshup_e2e_pegasus | 15 | null | transformers | 9,545 | # Gupshup
GupShup: Summarizing Open-Domain Code-Switched Conversations EMNLP 2021
Paper: [https://aclanthology.org/2021.emnlp-main.499.pdf](https://aclanthology.org/2021.emnlp-main.499.pdf)
Github: [https://github.com/midas-research/gupshup](https://github.com/midas-research/gupshup)
### Dataset
Please request the Gupshup data using [this Google form](https://docs.google.com/forms/d/1zvUk7WcldVF3RCoHdWzQPzPprtSJClrnHoIOYbzaJEI/edit?ts=61381ec0).
The dataset is available for `Hinglish Dialogues to English Summarization` (h2e) and `English Dialogues to English Summarization` (e2e). For each task, dialogues/conversations use the `.source` file extension (e.g. train.source) whereas summaries use the `.target` file extension (e.g. train.target). The ".source" file needs to be provided to the `input_path` argument and the ".target" file to the `reference_path` argument in the scripts.
## Models
All model weights are available on the Huggingface model hub. Users can either download these weights locally and provide that path to the `model_name` argument in the scripts, or pass one of the aliases below to `model_name` directly, in which case the scripts will download the weights automatically.
Model names are aliased in the "gupshup_TASK_MODEL" format, where "TASK" can be h2e or e2e and "MODEL" can be mbart, pegasus, etc., as listed below.
**1. Hinglish Dialogues to English Summary (h2e)**
| Model | Huggingface Alias |
|---------|-------------------------------------------------------------------------------|
| mBART | [midas/gupshup_h2e_mbart](https://huggingface.co/midas/gupshup_h2e_mbart) |
| PEGASUS | [midas/gupshup_h2e_pegasus](https://huggingface.co/midas/gupshup_h2e_pegasus) |
| T5 MTL | [midas/gupshup_h2e_t5_mtl](https://huggingface.co/midas/gupshup_h2e_t5_mtl) |
| T5 | [midas/gupshup_h2e_t5](https://huggingface.co/midas/gupshup_h2e_t5) |
| BART | [midas/gupshup_h2e_bart](https://huggingface.co/midas/gupshup_h2e_bart) |
| GPT-2 | [midas/gupshup_h2e_gpt](https://huggingface.co/midas/gupshup_h2e_gpt) |
**2. English Dialogues to English Summary (e2e)**
| Model | Huggingface Alias |
|---------|-------------------------------------------------------------------------------|
| mBART | [midas/gupshup_e2e_mbart](https://huggingface.co/midas/gupshup_e2e_mbart) |
| PEGASUS | [midas/gupshup_e2e_pegasus](https://huggingface.co/midas/gupshup_e2e_pegasus) |
| T5 MTL | [midas/gupshup_e2e_t5_mtl](https://huggingface.co/midas/gupshup_e2e_t5_mtl) |
| T5 | [midas/gupshup_e2e_t5](https://huggingface.co/midas/gupshup_e2e_t5) |
| BART | [midas/gupshup_e2e_bart](https://huggingface.co/midas/gupshup_e2e_bart) |
| GPT-2 | [midas/gupshup_e2e_gpt](https://huggingface.co/midas/gupshup_e2e_gpt) |
## Inference
### Using command line
1. Clone this repo and create a python virtual environment (https://docs.python.org/3/library/venv.html). Install the required packages using
```
git clone https://github.com/midas-research/gupshup.git
pip install -r requirements.txt
```
2. run_eval script has the following arguments.
* **model_name** : Path or alias to one of our models available on Huggingface as listed above.
* **input_path** : Source file or path to file containing conversations, which will be summarized.
* **save_path** : File path where to save summaries generated by the model.
* **reference_path** : Target file or path to file containing summaries, used to calculate matrices.
* **score_path** : File path where to save scores.
* **bs** : Batch size
* **device**: Cuda devices to use.
Please make sure you have downloaded the Gupshup dataset using the above Google form and provide the correct paths to these files in the `input_path` and `reference_path` arguments. Or you can simply put `test.source` and `test.target` in the `data/h2e/` (Hinglish to English) or `data/e2e/` (English to English) folder. For example, to generate English summaries from Hinglish dialogues using the mbart model, run the following command
```
python run_eval.py \
--model_name midas/gupshup_h2e_mbart \
--input_path data/h2e/test.source \
--save_path generated_summary.txt \
--reference_path data/h2e/test.target \
--score_path scores.txt \
--bs 8
```
Another example, to generate English summaries from English dialogues using the Pegasus model
```
python run_eval.py \
--model_name midas/gupshup_e2e_pegasus \
--input_path data/e2e/test.source \
--save_path generated_summary.txt \
--reference_path data/e2e/test.target \
--score_path scores.txt \
--bs 8
```
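For a quick, script-free test, the fine-tuned checkpoints can presumably also be loaded directly with the `transformers` library. The sketch below is an unofficial, minimal example for the `gupshup_e2e_pegasus` alias; the input dialogue is made up and the generation settings are arbitrary, not values used in the paper.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "midas/gupshup_e2e_pegasus"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Made-up English dialogue, formatted as a single line like the .source files
dialogue = "Anna: Are we still on for dinner tonight? Ben: Yes, 7 pm at the usual place. Anna: Great, see you there."
inputs = tokenizer(dialogue, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```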
Please create an issue if you are facing any difficulties in replicating the results.
### References
Please cite [[1]](https://arxiv.org/abs/1910.04073) if you found the resources in this repository useful.
[1] Mehnaz, Laiba, Debanjan Mahata, Rakesh Gosangi, Uma Sushmitha Gunturi, Riya Jain, Gauri Gupta, Amardeep Kumar, Isabelle G. Lee, Anish Acharya, and Rajiv Shah. [*GupShup: Summarizing Open-Domain Code-Switched Conversations*](https://aclanthology.org/2021.emnlp-main.499.pdf)
```
@inproceedings{mehnaz2021gupshup,
title={GupShup: Summarizing Open-Domain Code-Switched Conversations},
author={Mehnaz, Laiba and Mahata, Debanjan and Gosangi, Rakesh and Gunturi, Uma Sushmitha and Jain, Riya and Gupta, Gauri and Kumar, Amardeep and Lee, Isabelle G and Acharya, Anish and Shah, Rajiv},
booktitle={Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing},
pages={6177--6192},
year={2021}
}
```
|
mmcquade11/reviews-sentiment-analysis | 312c90a7ffe06f57619b45485e136e2c22c973b1 | 2021-12-01T18:52:49.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | mmcquade11 | null | mmcquade11/reviews-sentiment-analysis | 15 | null | transformers | 9,546 | Entry not found |
mofawzy/bert-ajgt | 2b31eb1e999581c5ccc7cf3c7d47ca3d2f9f50a0 | 2022-02-17T19:56:26.000Z | [
"pytorch",
"bert",
"text-classification",
"ar",
"dataset:AJGT",
"transformers",
"AJGT"
]
| text-classification | false | mofawzy | null | mofawzy/bert-ajgt | 15 | null | transformers | 9,547 | ---
language:
- ar
datasets:
- AJGT
tags:
- AJGT
widget:
- text: "يهدي الله من يشاء"
- text: "الاسلوب قذر وقمامه"
---
# BERT-AJGT
Arabic-language BERT model fine-tuned on the AJGT dataset
## Data
The model was fine-tuned on ~1,800 sentences from Twitter in the Jordanian dialect.
## Results
| class | precision | recall | f1-score | Support |
|----------|-----------|--------|----------|---------|
| 0 | 0.9462 | 0.9778 | 0.9617 | 90 |
| 1 | 0.9399 | 0.9689 | 0.9542 | 90 |
| Accuracy | | | 0.9611 | 180 |
## How to use
You can use this model by installing `torch` or `tensorflow` together with the Huggingface `transformers` library, and initializing it like this:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model_name="mofawzy/bert-ajgt"
model = AutoModelForSequenceClassification.from_pretrained(model_name,num_labels=2)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
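For completeness, here is a hedged sketch of running a single prediction with the loaded model. The example sentence is taken from the widget above; how the 0/1 class indices map to positive/negative sentiment is an assumption and should be checked against your own data.
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "mofawzy/bert-ajgt"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Example sentence from the widget above
text = "يهدي الله من يشاء"
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted class index (0 or 1)
```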
|
monologg/koelectra-small-generator | a7d050043fdb98b63ebcb747df7004b5b94dc3b8 | 2020-12-26T16:23:42.000Z | [
"pytorch",
"electra",
"fill-mask",
"ko",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | monologg | null | monologg/koelectra-small-generator | 15 | null | transformers | 9,548 | ---
language: ko
---
# KoELECTRA (Small Generator)
Pretrained ELECTRA Language Model for Korean (`koelectra-small-generator`)
For more detail, please see [original repository](https://github.com/monologg/KoELECTRA/blob/master/README_EN.md).
## Usage
### Load model and tokenizer
```python
>>> from transformers import ElectraModel, ElectraTokenizer
>>> model = ElectraModel.from_pretrained("monologg/koelectra-small-generator")
>>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-small-generator")
```
### Tokenizer example
```python
>>> from transformers import ElectraTokenizer
>>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-small-generator")
>>> tokenizer.tokenize("[CLS] 한국어 ELECTRA를 공유합니다. [SEP]")
['[CLS]', '한국어', 'E', '##L', '##EC', '##T', '##RA', '##를', '공유', '##합니다', '.', '[SEP]']
>>> tokenizer.convert_tokens_to_ids(['[CLS]', '한국어', 'E', '##L', '##EC', '##T', '##RA', '##를', '공유', '##합니다', '.', '[SEP]'])
[2, 18429, 41, 6240, 15229, 6204, 20894, 5689, 12622, 10690, 18, 3]
```
## Example using ElectraForMaskedLM
```python
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="monologg/koelectra-small-generator",
tokenizer="monologg/koelectra-small-generator"
)
print(fill_mask("나는 {} 밥을 먹었다.".format(fill_mask.tokenizer.mask_token)))
```
|
monsoon-nlp/es-seq2seq-gender-decoder | e271163d2c94e74e2ebba5315c4c0d1e7e598ac2 | 2021-05-20T00:09:13.000Z | [
"pytorch",
"bert",
"text-generation",
"es",
"transformers"
]
| text-generation | false | monsoon-nlp | null | monsoon-nlp/es-seq2seq-gender-decoder | 15 | null | transformers | 9,549 | ---
language: es
---
# es-seq2seq-gender (decoder)
This is a seq2seq model (decoder half) to "flip" gender in Spanish sentences.
The model can augment your existing Spanish data, or generate counterfactuals
to test a model's decisions (would changing the gender of the subject or speaker change output?).
Intended Examples:
- el profesor viejo => la profesora vieja (article, noun, adjective all flip)
- una actriz => un actor (irregular noun)
- el lingüista => la lingüista (irregular noun)
- la biblioteca => la biblioteca (no person, no flip)
People's names are unchanged in this version, but you can use packages
such as https://pypi.org/project/gender-guesser/
## Sample code
https://colab.research.google.com/drive/1Ta_YkXx93FyxqEu_zJ-W23PjPumMNHe5
```
import torch
from transformers import AutoTokenizer, EncoderDecoderModel
model = EncoderDecoderModel.from_encoder_decoder_pretrained("monsoon-nlp/es-seq2seq-gender-encoder", "monsoon-nlp/es-seq2seq-gender-decoder")
tokenizer = AutoTokenizer.from_pretrained('monsoon-nlp/es-seq2seq-gender-decoder') # all are same as BETO uncased original
input_ids = torch.tensor(tokenizer.encode("la profesora vieja")).unsqueeze(0)
generated = model.generate(input_ids, decoder_start_token_id=model.config.decoder.pad_token_id)
tokenizer.decode(generated.tolist()[0])
> '[PAD] el profesor viejo profesor viejo profesor...'
```
## Training
I originally developed
<a href="https://github.com/MonsoonNLP/el-la">a gender flip Python script</a>
with
<a href="https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased">BETO</a>,
the Spanish-language BERT from Universidad de Chile,
and spaCy to parse dependencies in sentences.
More about this project: https://medium.com/ai-in-plain-english/gender-bias-in-spanish-bert-1f4d76780617
The seq2seq model is trained on gender-flipped text from that script run on the
<a href="https://huggingface.co/datasets/muchocine">muchocine dataset</a>,
and the first 6,853 lines from the
<a href="https://oscar-corpus.com/">OSCAR corpus</a>
(Spanish ded-duped).
The encoder and decoder started with weights and vocabulary from BETO (uncased).
## Non-binary gender
This model is useful to generate male and female text samples, but falls
short of capturing gender diversity in the world and in the Spanish
language. Some communities prefer the plural -@s to represent
-os and -as, or -e and -es for gender-neutral or mixed-gender plural,
or use fewer gendered professional nouns (la juez and not jueza). This is not yet
embraced by the Royal Spanish Academy
and is not represented in the corpora and tokenizers used to build this project.
This seq2seq project and script could, in the future, help generate more text samples
and prepare NLP models to understand us all better.
#### Sources
- https://www.nytimes.com/2020/04/15/world/americas/argentina-gender-language.html
- https://www.washingtonpost.com/dc-md-va/2019/12/05/teens-argentina-are-leading-charge-gender-neutral-language/?arc404=true
- https://www.theguardian.com/world/2020/jan/19/gender-neutral-language-battle-spain
- https://es.wikipedia.org/wiki/Lenguaje_no_sexista
- https://remezcla.com/culture/argentine-company-re-imagines-little-prince-gender-neutral-language/
|
monsoon-nlp/gpt-nyc | b49baf5fe2c2c03d309cbf681fd630ff15c564b9 | 2021-05-23T10:03:21.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | monsoon-nlp | null | monsoon-nlp/gpt-nyc | 15 | 1 | transformers | 9,550 | # GPT-NYC
## About
GPT2-Medium fine-tuned on questions and responses from https://reddit.com/r/asknyc
I filtered comments to ones with scores >= 3 that respond directly
to the original post (i.e. ignoring responses to other commenters).
I added tokens to match NYC neighborhoods, subway stations, foods, and other
common terms in the original batches of questions and comments.
You would be surprised what is missing from GPT tokens!
Try prompting with ```question? %% ``` or ```question? - more info %%```
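As a rough illustration of that prompt format, the following sketch generates an answer with the `transformers` pipeline; the question is made up and the sampling settings are arbitrary, not values tuned for this project.
```python
from transformers import pipeline

generator = pipeline("text-generation", model="monsoon-nlp/gpt-nyc")

# Hypothetical question using the "question? %%" prompt format described above
prompt = "Where can I find good bagels in Queens? %% "
result = generator(prompt, max_length=60, do_sample=True, top_p=0.95)
print(result[0]["generated_text"])
```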
## Status
I would like to continue by:
- fine-tuning GPT2-Large with a larger dataset of questions
- examining bias and toxicity
- examining memorization vs. original responses
- releasing a reusable benchmark
## Blog
https://mapmeld.medium.com/gpt-nyc-part-1-9cb698b2e3d
## Notebooks
### Data processing / new tokens
https://colab.research.google.com/drive/13BOw0uekoAYB4jjQtaXTn6J_VHatiRLu
### Fine-tuning GPT2 (small)
https://colab.research.google.com/drive/1FnXcAh4H-k8dAzixkV5ieygV96ePh3lR
### Fine-tuning GPT2-Medium
Same code as small, but on Google Cloud to use an A100 GPU
### Predictive text and probabilities
Scroll to end of
https://colab.research.google.com/drive/1FnXcAh4H-k8dAzixkV5ieygV96ePh3lR
to see how to install git-lfs and trick ecco into loading this.
|
mrm8488/GPT-2-finetuned-CRD3 | ce6e1457142a7aa61c564e5e32364f40f8cd3201 | 2021-05-23T10:10:58.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | mrm8488 | null | mrm8488/GPT-2-finetuned-CRD3 | 15 | null | transformers | 9,551 | Entry not found |
mrm8488/spanbert-base-finetuned-squadv1 | 54aff17ce703bd24116984d9abbd019c06253159 | 2021-05-20T00:49:33.000Z | [
"pytorch",
"jax",
"bert",
"en",
"arxiv:1907.10529",
"transformers"
]
| null | false | mrm8488 | null | mrm8488/spanbert-base-finetuned-squadv1 | 15 | null | transformers | 9,552 | ---
language: en
thumbnail:
---
# SpanBERT base fine-tuned on SQuAD v1
[SpanBERT](https://github.com/facebookresearch/SpanBERT) created by [Facebook Research](https://github.com/facebookresearch) and fine-tuned on [SQuAD 1.1](https://rajpurkar.github.io/SQuAD-explorer/explore/1.1/dev/) for **Q&A** downstream task ([by them](https://github.com/facebookresearch/SpanBERT#finetuned-models-squad-1120-relation-extraction-coreference-resolution)).
## Details of SpanBERT
[SpanBERT: Improving Pre-training by Representing and Predicting Spans](https://arxiv.org/abs/1907.10529)
## Details of the downstream task (Q&A) - Dataset 📚 🧐 ❓
[SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/)
## Model fine-tuning 🏋️
You can get the fine-tuning script [here](https://github.com/facebookresearch/SpanBERT)
```bash
python code/run_squad.py \
--do_train \
--do_eval \
--model spanbert-base-cased \
--train_file train-v1.1.json \
--dev_file dev-v1.1.json \
--train_batch_size 32 \
--eval_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 4 \
--max_seq_length 512 \
--doc_stride 128 \
--eval_metric f1 \
--output_dir squad_output \
--fp16
```
## Results Comparison 📝
| | SQuAD 1.1 | SQuAD 2.0 | Coref | TACRED |
| ---------------------- | ------------- | --------- | ------- | ------ |
| | F1 | F1 | avg. F1 | F1 |
| BERT (base) | 88.5 | 76.5 | 73.1 | 67.7 |
| SpanBERT (base) | **92.4** (this one) | [83.6](https://huggingface.co/mrm8488/spanbert-base-finetuned-squadv2) | 77.4 | [68.2](https://huggingface.co/mrm8488/spanbert-base-finetuned-tacred) |
| BERT (large) | 91.3 | 83.3 | 77.1 | 66.4 |
| SpanBERT (large) | [94.6](https://huggingface.co/mrm8488/spanbert-large-finetuned-squadv1) | [88.7](https://huggingface.co/mrm8488/spanbert-large-finetuned-squadv2) | 79.6 | [70.8](https://huggingface.co/mrm8488/spanbert-large-finetuned-tacred) |
Note: The numbers marked as * are evaluated on the development sets because those models were not submitted to the official SQuAD leaderboard. All the other numbers are test numbers.
## Model in action
Fast usage with **pipelines**:
```python
from transformers import pipeline
qa_pipeline = pipeline(
"question-answering",
model="mrm8488/spanbert-base-finetuned-squadv1",
tokenizer="SpanBERT/spanbert-base-cased"
)
qa_pipeline({
'context': "Manuel Romero has been working very hard in the repository hugginface/transformers lately",
'question': "How has been working Manuel Romero lately?"
})
# Output: {'answer': 'very hard in the repository hugginface/transformers',
'end': 82,
'score': 0.327230326857725,
'start': 31}
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
ncoop57/cm_codeparrot | e21e05b4fa4c25b96a6f18f8ff8097628257550d | 2022-03-02T12:59:23.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
]
| text-generation | false | ncoop57 | null | ncoop57/cm_codeparrot | 15 | null | transformers | 9,553 | Entry not found |
nielsr/dino_deits8 | 161957bc7e0712f3c5bd5490e62ae9c70678ae4c | 2021-05-03T08:17:02.000Z | [
"pytorch",
"vit",
"feature-extraction",
"transformers"
]
| feature-extraction | false | nielsr | null | nielsr/dino_deits8 | 15 | null | transformers | 9,554 | Entry not found |
orzhan/t5-long-extract | 9b20f3c309b8cbea80d2609bbd5a638d1c7a7385 | 2022-06-11T07:20:59.000Z | [
"pytorch",
"t5",
"feature-extraction",
"transformers"
]
| feature-extraction | false | orzhan | null | orzhan/t5-long-extract | 15 | null | transformers | 9,555 | T5-small model fine-tuned for extractive summarization on long documents.
Repository: [GitHub](https://github.com/orzhan/t5-long-extract) |
patrickvonplaten/hf-reformer-crime-and-punish | 498eddd2421bd1902a147e550ffae000d7a60d55 | 2020-05-11T11:10:52.000Z | [
"pytorch",
"reformer",
"text-generation",
"transformers"
]
| text-generation | false | patrickvonplaten | null | patrickvonplaten/hf-reformer-crime-and-punish | 15 | null | transformers | 9,556 | Entry not found |
patrickvonplaten/wav2vec2-common_voice-tr-demo | 59074c9bbe282a9c02a6422082d16fcc4aa46307 | 2021-12-20T12:54:39.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:common_voice",
"transformers",
"speech-recognition",
"common_voice",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/wav2vec2-common_voice-tr-demo | 15 | null | transformers | 9,557 | ---
language:
- tr
license: apache-2.0
tags:
- speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-common_voice-tr-demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-common_voice-tr-demo
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3856
- Wer: 0.3556
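The auto-generated card does not include an inference example. The following is a minimal sketch, assuming 16 kHz mono audio and that the processor files in this repository are used as-is (the audio path is a placeholder).
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_name = "patrickvonplaten/wav2vec2-common_voice-tr-demo"
processor = Wav2Vec2Processor.from_pretrained(model_name)
model = Wav2Vec2ForCTC.from_pretrained(model_name)

# Placeholder path; wav2vec2 expects 16 kHz input
speech, _ = librosa.load("sample_tr.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```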
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.7391 | 0.92 | 100 | 3.5760 | 1.0 |
| 2.927 | 1.83 | 200 | 3.0796 | 0.9999 |
| 0.9009 | 2.75 | 300 | 0.9278 | 0.8226 |
| 0.6529 | 3.67 | 400 | 0.5926 | 0.6367 |
| 0.3623 | 4.59 | 500 | 0.5372 | 0.5692 |
| 0.2888 | 5.5 | 600 | 0.4407 | 0.4838 |
| 0.285 | 6.42 | 700 | 0.4341 | 0.4694 |
| 0.0842 | 7.34 | 800 | 0.4153 | 0.4302 |
| 0.1415 | 8.26 | 900 | 0.4317 | 0.4136 |
| 0.1552 | 9.17 | 1000 | 0.4145 | 0.4013 |
| 0.1184 | 10.09 | 1100 | 0.4115 | 0.3844 |
| 0.0556 | 11.01 | 1200 | 0.4182 | 0.3862 |
| 0.0851 | 11.93 | 1300 | 0.3985 | 0.3688 |
| 0.0961 | 12.84 | 1400 | 0.4030 | 0.3665 |
| 0.0596 | 13.76 | 1500 | 0.3880 | 0.3631 |
| 0.0917 | 14.68 | 1600 | 0.3878 | 0.3582 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
peggyhuang/bert-base-uncased-coqa | ae54d05fa6b4c7b6c04a9cd28c1fd26bfd23d4fa | 2021-11-19T09:05:00.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | peggyhuang | null | peggyhuang/bert-base-uncased-coqa | 15 | null | transformers | 9,558 | Entry not found |
pinecone/bert-retriever-squad2 | edb4465f3fc105473bf54cb7d4674d0d043d9185 | 2022-01-03T02:38:02.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
]
| sentence-similarity | false | pinecone | null | pinecone/bert-retriever-squad2 | 15 | null | sentence-transformers | 9,559 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# pinecone/bert-retriever-squad2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('pinecone/bert-retriever-squad2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('pinecone/bert-retriever-squad2')
model = AutoModel.from_pretrained('pinecone/bert-retriever-squad2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=pinecone/bert-retriever-squad2)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 5429 with parameters:
```
{'batch_size': 24}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 542,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
pkushiqiang/bert-title-org | 113cf90e11c9920cc55c2a885813d30166991300 | 2022-02-28T06:16:37.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | pkushiqiang | null | pkushiqiang/bert-title-org | 15 | null | transformers | 9,560 | Entry not found |
pmthangk09/bert-base-uncased-superglue-multirc | 5b503587b2e7cee15046475079d14a13aff1e616 | 2021-05-20T02:50:34.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | pmthangk09 | null | pmthangk09/bert-base-uncased-superglue-multirc | 15 | null | transformers | 9,561 | Entry not found |
pszemraj/t5-base-askscience | a7582c22c55096ed39fe3261852d94d767e1da98 | 2022-02-19T22:50:18.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:eli5",
"transformers",
"qa",
"askscience",
"lfqa",
"information retrieval",
"autotrain_compatible"
]
| text2text-generation | false | pszemraj | null | pszemraj/t5-base-askscience | 15 | null | transformers | 9,562 | ---
language:
- en
tags:
- t5
- qa
- askscience
- lfqa
- information retrieval
datasets:
- eli5
metrics:
- rouge
widget:
- text: "why aren't there more planets in our solar system?"
example_title: "solar system"
- text: "question: what is a probability distribution? context: I am just learning about statistics."
example_title: "probability distribution"
- text: "question: What are the underlying physical processes by which exercise helps us lose weight? context: I started working out two weeks ago and already feel a lot better, and started to think about it and became deeply confused."
example_title: "pumpen"
- text: "what is a neural network?"
example_title: "deep learning"
- text: "What are the primary mechanisms that computers use to understand human language?"
example_title: "NLP"
inference:
parameters:
max_length: 96
no_repeat_ngram_size: 2
encoder_no_repeat_ngram_size: 4
repetition_penalty: 3.51
length_penalty: 0.8
num_beams: 4
early_stopping: True
---
# t5-base-askscience
- [t5-v1_1](https://huggingface.co/google/t5-v1_1-base) trained on the entirety of the _askscience_ sub-section of the eli5 dataset for one epoch.
- compare to bart on eli5 [here](https://huggingface.co/yjernite/bart_eli5)
- note that for the inference API, the model is restricted to outputting 96 tokens - by using the model in python with the transformers library, you can get longer outputs (see the sketch below).
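A minimal generation sketch (not part of the original card; the generation settings below loosely mirror the inference parameters listed above, with a larger `max_length` to get past the 96-token limit):
```python
# Minimal sketch; assumes the standard transformers seq2seq classes load this checkpoint.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("pszemraj/t5-base-askscience")
model = AutoModelForSeq2SeqLM.from_pretrained("pszemraj/t5-base-askscience")

# query posed in the "question: ... context: ..." style described in the training section
prompt = "question: what is a probability distribution? context: I am just learning about statistics."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=256,           # longer than the 96-token inference-API cap
    num_beams=4,
    no_repeat_ngram_size=2,
    repetition_penalty=3.51,
    length_penalty=0.8,
    early_stopping=True,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```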
## training
- for inputs, the model was presented with the post title and the post selftext encoded as: `question: <post title> context: <post selftext>`. You may see better results if queries are posed in this fashion.
- The top two replies were aggregated and presented to the model as the output text.
- Training for longer will be explored, but given that the dataset has 127k examples and the loss flatlines at 0.5 epochs, this model should be fairly viable. |
racai/distilbert-multi-base-romanian-cased | 099fba005eedaeda5a4aabcf6e502c2df50f58db | 2021-12-24T17:32:28.000Z | [
"pytorch",
"tf",
"jax",
"distilbert",
"ro",
"dataset:oscar",
"dataset:wikipedia",
"arxiv:2112.12650",
"transformers",
"license:mit"
]
| null | false | racai | null | racai/distilbert-multi-base-romanian-cased | 15 | null | transformers | 9,563 | ---
language: ro
license: mit
datasets:
- oscar
- wikipedia
---
# Romanian DistilBERT
This repository contains a Romanian cased version of DistilBERT (named DistilMulti-BERT-base-ro in the paper) that was obtained by distilling an ensemble of two teacher models: [dumitrescustefan/bert-base-romanian-cased-v1](https://huggingface.co/dumitrescustefan/bert-base-romanian-cased-v1) and [readerbench/RoBERT-base](https://huggingface.co/readerbench/RoBERT-base).
The model was introduced in [this paper](https://arxiv.org/abs/2112.12650). The adjacent code can be found
[here](https://github.com/racai-ai/Romanian-DistilBERT).
## Usage
```python
from transformers import AutoTokenizer, AutoModel
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained("racai/distilbert-multi-base-romanian-cased")
model = AutoModel.from_pretrained("racai/distilbert-multi-base-romanian-cased")
# tokenize a test sentence
input_ids = tokenizer.encode("Aceasta este o propoziție de test.", add_special_tokens=True, return_tensors="pt")
# run the tokens through the model
outputs = model(input_ids)
print(outputs)
```
## Model Size
The model is 35% smaller than `bert-base-romanian-cased-v1` and 30% smaller than `RoBERT-base`.
| Model | Size (MB) | Params (Millions) |
|--------------------------------|:---------:|:----------------:|
| RoBERT-base | 441 | 114 |
| bert-base-romanian-cased-v1 | 477 | 124 |
| distilbert-multi-base-romanian-cased | 312 | 81 |
## Evaluation
We evaluated the model in comparison with its two teachers on 5 Romanian tasks:
- **UPOS**: Universal Part of Speech (F1-macro)
- **XPOS**: Extended Part of Speech (F1-macro)
- **NER**: Named Entity Recognition (F1-macro)
- **SAPN**: Sentiment Analysis - Positive vs Negative (Accuracy)
- **SAR**: Sentiment Analysis - Rating (F1-macro)
- **DI**: Dialect identification (F1-macro)
- **STS**: Semantic Textual Similarity (Pearson)
| Model | UPOS | XPOS | NER | SAPN | SAR | DI | STS |
|--------------------------------|:----:|:----:|:---:|:----:|:---:|:--:|:---:|
| RoBERT-base | 98.02 | 97.15 | 85.14 | 98.30 | 79.40 | 96.07 | 81.18 |
| bert-base-romanian-cased-v1 | 98.00 | 96.46 | 85.88 | 98.07 | 79.61 | 95.58 | 80.30 |
| distilbert-multi-base-romanian-cased | 98.07 | 96.83 | 83.22 | 98.11 | 79.77 | 96.18 | 80.66 |
### BibTeX entry and citation info
```bibtex
@article{avram2021distilling,
title={Distilling the Knowledge of Romanian BERTs Using Multiple Teachers},
author={Andrei-Marius Avram and Darius Catrina and Dumitru-Clementin Cercel and Mihai Dascălu and Traian Rebedea and Vasile Păiş and Dan Tufiş},
journal={ArXiv},
year={2021},
volume={abs/2112.12650}
}
``` |
ramybaly/ner_conll2003 | 8705927627b7f05803a0a150c8c369c376ad1383 | 2021-08-21T03:21:14.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | false | ramybaly | null | ramybaly/ner_conll2003 | 15 | null | transformers | 9,564 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model_index:
- name: ner_conll2003
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metric:
name: Accuracy
type: accuracy
value: 0.9772880710440217
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ner_conll2003
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1495
- Precision: 0.8985
- Recall: 0.9130
- F1: 0.9057
- Accuracy: 0.9773
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
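For reference, these values map onto a standard `TrainingArguments` configuration roughly as follows (a reconstruction, not the original training script; the output directory is a placeholder):
```python
from transformers import TrainingArguments

# Approximate reconstruction of the reported hyperparameters.
training_args = TrainingArguments(
    output_dir="ner_conll2003",
    learning_rate=3e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10,
)
```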
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.423 | 1.0 | 877 | 0.0656 | 0.9158 | 0.9268 | 0.9213 | 0.9818 |
| 0.0575 | 2.0 | 1754 | 0.0574 | 0.9285 | 0.9445 | 0.9364 | 0.9847 |
| 0.0295 | 3.0 | 2631 | 0.0631 | 0.9414 | 0.9456 | 0.9435 | 0.9859 |
| 0.0155 | 4.0 | 3508 | 0.0680 | 0.9395 | 0.9467 | 0.9431 | 0.9860 |
| 0.0097 | 5.0 | 4385 | 0.0694 | 0.9385 | 0.9513 | 0.9449 | 0.9863 |
| 0.0059 | 6.0 | 5262 | 0.0743 | 0.9363 | 0.9471 | 0.9416 | 0.9860 |
| 0.0041 | 7.0 | 6139 | 0.0803 | 0.9371 | 0.9518 | 0.9444 | 0.9862 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.2
|
rebeccakoganlee/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-ner | badcc7e3abb95def4e8dac5fc7bb5610b6c8e865 | 2021-11-23T20:42:01.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | rebeccakoganlee | null | rebeccakoganlee/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-ner | 15 | null | transformers | 9,565 | Entry not found |
sarahmiller137/distilbert-base-uncased-ft-conll2003 | 97f129c62514a7e0aa79f6c8de00c17c792065a4 | 2022-07-14T11:52:53.000Z | [
"pytorch",
"distilbert",
"token-classification",
"en",
"dataset:conll2003",
"transformers",
"token classification",
"license:cc",
"model-index",
"autotrain_compatible"
]
| token-classification | false | sarahmiller137 | null | sarahmiller137/distilbert-base-uncased-ft-conll2003 | 15 | null | transformers | 9,566 | ---
language:
- en
thumbnail: url to a thumbnail used in social sharing
tags:
- token classification
license: cc
datasets:
- conll2003
model-index:
- name: sarahmiller137/distilbert-base-uncased-ft-conll2003
results:
- task:
type: token-classification
name: Token Classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.9750189904012154
verified: true
- name: Precision
type: precision
value: 0.9802152215150602
verified: true
- name: Recall
type: recall
value: 0.9803021169462076
verified: true
- name: F1
type: f1
value: 0.9802586673049137
verified: true
- name: loss
type: loss
value: 0.10723897069692612
verified: true
---
## Model information:
distilbert-base-uncased model finetuned using the conll2003 dataset from the datasets library.
## Intended uses & limitations
This model is intended to be used for named entity recognition tasks. The model will identify entities of persons, locations, organisations, and miscellaneous. The model will predict labels based upon the CoNLL-2003 dataset.
Note that the dataset and model may not be fully representative or suitable for all needs, so it is recommended that the dataset paper and the base model card be reviewed before using the model -
- [CoNLL-2003](https://aclanthology.org/W03-0419)
- [distilbert](https://huggingface.co/distilbert-base-uncased)
## How to use
Load the model from the library using the following checkpoints:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("sarahmiller137/distilbert-base-uncased-ft-conll2003")
model = AutoModel.from_pretrained("sarahmiller137/distilbert-base-uncased-ft-conll2003")
```
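For NER inference, the checkpoint can also be loaded into a token-classification pipeline; this is a usage sketch rather than part of the original card, and the example sentence is illustrative:
```python
from transformers import pipeline

# Sketch: run the fine-tuned checkpoint as a NER tagger.
ner = pipeline(
    "token-classification",
    model="sarahmiller137/distilbert-base-uncased-ft-conll2003",
    aggregation_strategy="simple",  # group word pieces into whole entity spans
)
print(ner("Ada Lovelace worked with Charles Babbage in London."))
```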
|
sentence-transformers/distilroberta-base-msmarco-v1 | b625c1f869c7869b660b456ecf8eff290d1333e3 | 2022-06-16T01:04:36.000Z | [
"pytorch",
"tf",
"roberta",
"feature-extraction",
"arxiv:1908.10084",
"sentence-transformers",
"sentence-similarity",
"transformers",
"license:apache-2.0"
]
| sentence-similarity | false | sentence-transformers | null | sentence-transformers/distilroberta-base-msmarco-v1 | 15 | null | sentence-transformers | 9,567 | ---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
**⚠️ This model is deprecated. Please don't use it as it produces sentence embeddings of low quality. You can find recommended sentence embedding models here: [SBERT.net - Pretrained Models](https://www.sbert.net/docs/pretrained_models.html)**
# sentence-transformers/distilroberta-base-msmarco-v1
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/distilroberta-base-msmarco-v1')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/distilroberta-base-msmarco-v1')
model = AutoModel.from_pretrained('sentence-transformers/distilroberta-base-msmarco-v1')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/distilroberta-base-msmarco-v1)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
sismetanin/sbert-ru-sentiment-rutweetcorp | b7f3c2cea37655f012af16fc852c454f3f998e64 | 2021-05-20T06:41:48.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | sismetanin | null | sismetanin/sbert-ru-sentiment-rutweetcorp | 15 | null | transformers | 9,568 | Entry not found |
skplanet/dialog-koelectra-small-discriminator | d7a060551ecc231b35236a8d7d1647876c193986 | 2021-04-13T01:15:27.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
]
| null | false | skplanet | null | skplanet/dialog-koelectra-small-discriminator | 15 | null | transformers | 9,569 | # Dialog-KoELECTRA
Github : [https://github.com/skplanet/Dialog-KoELECTRA](https://github.com/skplanet/Dialog-KoELECTRA)
## Introduction
**Dialog-KoELECTRA** is a language model specialized for dialogue. It was trained on 22 GB of colloquial and written-style Korean text data. The Dialog-KoELECTRA model is based on the [ELECTRA](https://openreview.net/pdf?id=r1xMH1BtvB) model. ELECTRA is a method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU.
<br>
## Released Models
We are initially releasing small version pre-trained model.
The model was trained on Korean text. We hope to release other models, such as base/large models, in the future.
| Model | Layers | Hidden Size | Params | Max<br/>Seq Len | Learning<br/>Rate | Batch Size | Train Steps |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Dialog-KoELECTRA-Small | 12 | 256 | 14M | 128 | 1e-4 | 512 | 700K |
<br>
## Model Performance
Dialog-KoELECTRA shows strong performance in conversational downstream tasks.
| | **NSMC**<br/>(acc) | **Question Pair**<br/>(acc) | **Korean-Hate-Speech**<br/>(F1) | **Naver NER**<br/>(F1) | **KorNLI**<br/>(acc) | **KorSTS**<br/>(spearman) |
| :--------------------- | :----------------: | :--------------------: | :----------------: | :------------------: | :-----------------------: | :-------------------------: |
| DistilKoBERT | 88.60 | 92.48 | 60.72 | 84.65 | 72.00 | 72.59 |
| **Dialog-KoELECTRA-Small** | **90.01** | **94.99** | **68.26** | **85.51** | **78.54** | **78.96** |
<br>
## Train Data
<table class="tg">
<thead>
<tr>
<th class="tg-c3ow"></th>
<th class="tg-c3ow">corpus name</th>
<th class="tg-c3ow">size</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tg-c3ow" rowspan="4">dialog</td>
<td class="tg-0pky"><a href="https://aihub.or.kr/aidata/85" target="_blank" rel="noopener noreferrer">Aihub Korean dialog corpus</a></td>
<td class="tg-c3ow" rowspan="4">7GB</td>
</tr>
<tr>
<td class="tg-0pky"><a href="https://corpus.korean.go.kr/" target="_blank" rel="noopener noreferrer">NIKL Spoken corpus</a></td>
</tr>
<tr>
<td class="tg-0pky"><a href="https://github.com/songys/Chatbot_data" target="_blank" rel="noopener noreferrer">Korean chatbot data</a></td>
</tr>
<tr>
<td class="tg-0pky"><a href="https://github.com/Beomi/KcBERT" target="_blank" rel="noopener noreferrer">KcBERT</a></td>
</tr>
<tr>
<td class="tg-c3ow" rowspan="2">written</td>
<td class="tg-0pky"><a href="https://corpus.korean.go.kr/" target="_blank" rel="noopener noreferrer">NIKL Newspaper corpus</a></td>
<td class="tg-c3ow" rowspan="2">15GB</td>
</tr>
<tr>
<td class="tg-0pky"><a href="https://github.com/lovit/namuwikitext" target="_blank" rel="noopener noreferrer">namuwikitext</a></td>
</tr>
</tbody>
</table>
<br>
## Vocabulary
We applied morpheme analysis using [huggingface_konlpy](https://github.com/lovit/huggingface_konlpy) when creating a vocabulary dictionary.
As a result of the experiment, it showed better performance than a vocabulary dictionary created without applying morpheme analysis.
<table>
<thead>
<tr>
<th>vocabulary size</th>
<th>unused token size</th>
<th>limit alphabet</th>
<th>min frequency</th>
</tr>
</thead>
<tbody>
<tr>
<td>40,000</td>
<td>500</td>
<td>6,000</td>
<td>3</td>
</tr>
</tbody>
</table>
<br>
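## Usage
A minimal loading sketch (an assumption that the hub repository ships a compatible tokenizer alongside the discriminator weights; the example sentence is illustrative):
```python
from transformers import AutoTokenizer, AutoModel

# Sketch: load the small discriminator as a feature extractor.
tokenizer = AutoTokenizer.from_pretrained("skplanet/dialog-koelectra-small-discriminator")
model = AutoModel.from_pretrained("skplanet/dialog-koelectra-small-discriminator")

inputs = tokenizer("안녕하세요, 오늘 날씨 어때요?", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```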
|
spencerh/leftpartisan | 9cb2baaa91cdf3c9982ddced7c076550d9c32739 | 2021-04-23T19:27:15.000Z | [
"pytorch",
"tf",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | spencerh | null | spencerh/leftpartisan | 15 | null | transformers | 9,570 | # Text classifier using DistilBERT to determine Partisanship
## This is one of many single-class partisanship models
label_0 refers to "left" while label_1 refers to "other".
This model was trained on 40,000 articles.
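A classification sketch using the label mapping above (illustrative only, not from the original card):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Sketch: score a document; index 0 = "left", index 1 = "other" per the mapping above.
tokenizer = AutoTokenizer.from_pretrained("spencerh/leftpartisan")
model = AutoModelForSequenceClassification.from_pretrained("spencerh/leftpartisan")

text = "..."  # substitute a full-length article (see Best Practices below)
inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]
print({"left": probs[0].item(), "other": probs[1].item()})
```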
### Best Practices
This model was optimized for 512 token-length text. Any text below 150 tokens may produce inaccurate results. |
superb/wav2vec2-large-superb-er | bd13d8ed2b396e23676111aacff32283c9dece5d | 2021-11-04T16:03:41.000Z | [
"pytorch",
"wav2vec2",
"audio-classification",
"en",
"dataset:superb",
"arxiv:2105.01051",
"transformers",
"speech",
"audio",
"license:apache-2.0"
]
| audio-classification | false | superb | null | superb/wav2vec2-large-superb-er | 15 | null | transformers | 9,571 | ---
language: en
datasets:
- superb
tags:
- speech
- audio
- wav2vec2
- audio-classification
license: apache-2.0
widget:
- example_title: IEMOCAP clip "happy"
src: https://cdn-media.huggingface.co/speech_samples/IEMOCAP_Ses01F_impro03_F013.wav
- example_title: IEMOCAP clip "neutral"
src: https://cdn-media.huggingface.co/speech_samples/IEMOCAP_Ses01F_impro04_F000.wav
---
# Wav2Vec2-Large for Emotion Recognition
## Model description
This is a ported version of
[S3PRL's Wav2Vec2 for the SUPERB Emotion Recognition task](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream/emotion).
The base model is [wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60), which is pretrained on 16kHz
sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz.
For more information refer to [SUPERB: Speech processing Universal PERformance Benchmark](https://arxiv.org/abs/2105.01051)
## Task and dataset description
Emotion Recognition (ER) predicts an emotion class for each utterance. The most widely used ER dataset
[IEMOCAP](https://sail.usc.edu/iemocap/) is adopted, and we follow the conventional evaluation protocol:
we drop the unbalanced emotion classes to leave the final four classes with a similar amount of data points and
cross-validate on five folds of the standard splits.
For the original model's training and evaluation instructions refer to the
[S3PRL downstream task README](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#er-emotion-recognition).
## Usage examples
You can use the model via the Audio Classification pipeline:
```python
from datasets import load_dataset
from transformers import pipeline
dataset = load_dataset("anton-l/superb_demo", "er", split="session1")
classifier = pipeline("audio-classification", model="superb/wav2vec2-large-superb-er")
labels = classifier(dataset[0]["file"], top_k=5)
```
Or use the model directly:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForSequenceClassification, Wav2Vec2FeatureExtractor
def map_to_array(example):
speech, _ = librosa.load(example["file"], sr=16000, mono=True)
example["speech"] = speech
return example
# load a demo dataset and read audio files
dataset = load_dataset("anton-l/superb_demo", "er", split="session1")
dataset = dataset.map(map_to_array)
model = Wav2Vec2ForSequenceClassification.from_pretrained("superb/wav2vec2-large-superb-er")
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("superb/wav2vec2-large-superb-er")
# compute attention masks and normalize the waveform if needed
inputs = feature_extractor(dataset[:4]["speech"], sampling_rate=16000, padding=True, return_tensors="pt")
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
labels = [model.config.id2label[_id] for _id in predicted_ids.tolist()]
```
## Eval results
The evaluation metric is accuracy.
| | **s3prl** | **transformers** |
|--------|-----------|------------------|
|**session1**| `0.6564` | `N/A` |
### BibTeX entry and citation info
```bibtex
@article{yang2021superb,
title={SUPERB: Speech processing Universal PERformance Benchmark},
author={Yang, Shu-wen and Chi, Po-Han and Chuang, Yung-Sung and Lai, Cheng-I Jeff and Lakhotia, Kushal and Lin, Yist Y and Liu, Andy T and Shi, Jiatong and Chang, Xuankai and Lin, Guan-Ting and others},
journal={arXiv preprint arXiv:2105.01051},
year={2021}
}
``` |
tbrasil/classificador_de_atendimento_2_classes_v1.1 | 147ae7455fb7891fbcef6e27de67badb01055d22 | 2021-08-02T17:51:56.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | tbrasil | null | tbrasil/classificador_de_atendimento_2_classes_v1.1 | 15 | null | transformers | 9,572 | Entry not found |
textattack/albert-base-v2-snli | 706e88d23c4ec5a68fbff2c1517d0da6ef7287d1 | 2020-07-06T16:36:47.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | false | textattack | null | textattack/albert-base-v2-snli | 15 | null | transformers | 9,573 | ## TextAttack Model Card
This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack
and the snli dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 64, a learning
rate of 2e-05, and a maximum sequence length of 64.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.9060150375939849, as measured by the
eval set accuracy, found after 2 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
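As a rough usage sketch (not from the original card; the integer-to-label mapping is not documented here, so check the model config):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Sketch: classify a premise/hypothesis pair from SNLI-style input.
tokenizer = AutoTokenizer.from_pretrained("textattack/albert-base-v2-snli")
model = AutoModelForSequenceClassification.from_pretrained("textattack/albert-base-v2-snli")

inputs = tokenizer(
    "A soccer game with multiple males playing.",
    "Some men are playing a sport.",
    truncation=True,
    max_length=64,  # matches the maximum sequence length used in fine-tuning
    return_tensors="pt",
)
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print(pred, model.config.id2label.get(pred, "unknown"))
```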
|
tmills/roberta_sfda_sharpseed | 10d59130d9a12a683c1049a7573848ccc020ea1e | 2021-05-20T22:41:21.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | tmills | null | tmills/roberta_sfda_sharpseed | 15 | null | transformers | 9,574 | Entry not found |
ttop324/kogpt2jnovel | 70c6af4eba91fb32a746588fc52c33c82437c58a | 2021-11-11T07:38:14.000Z | [
"pytorch",
"gpt2",
"text-generation",
"ko",
"transformers",
"license:cc-by-nc-sa-4.0"
]
| text-generation | false | ttop324 | null | ttop324/kogpt2jnovel | 15 | null | transformers | 9,575 | ---
language: ko
tags:
- gpt2
license: cc-by-nc-sa-4.0
---
A KoGPT2 model fine-tuned from skt/kogpt2-base-v2 on Korean translations of Japanese web novels. |
uclanlp/plbart-single_task-all-summarization | 486692f974ac352601f3952a154d9aa9fa4bb7de | 2022-03-02T07:28:07.000Z | [
"pytorch",
"plbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | uclanlp | null | uclanlp/plbart-single_task-all-summarization | 15 | null | transformers | 9,576 | Entry not found |
ufal/byt5-small-multilexnorm2021-hr | c98026c96146a7d3d920dc64bf082f97f1900027 | 2021-10-20T12:27:34.000Z | [
"pytorch",
"t5",
"text2text-generation",
"hr",
"dataset:mc4",
"dataset:wikipedia",
"dataset:multilexnorm",
"arxiv:2105.13626",
"arxiv:1907.06292",
"transformers",
"lexical normalization",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | ufal | null | ufal/byt5-small-multilexnorm2021-hr | 15 | null | transformers | 9,577 | ---
language: hr
datasets:
- mc4
- wikipedia
- multilexnorm
tags:
- lexical normalization
license: apache-2.0
---
# Fine-tuned ByT5-small for MultiLexNorm (Croatian version)

This is the official release of the fine-tuned models for **the winning entry** to the [*W-NUT 2021: Multilingual Lexical Normalization (MultiLexNorm)* shared task](https://noisy-text.github.io/2021/multi-lexnorm.html), which evaluates lexical-normalization systems on 12 social media datasets in 11 languages.
Our system is based on [ByT5](https://arxiv.org/abs/2105.13626), which we first pre-train on synthetic data and then fine-tune on authentic normalization data. It achieves the best performance by a wide margin in intrinsic evaluation, and also the best performance in extrinsic evaluation through dependency parsing. In addition to these fine-tuned models, we also release the source files on [GitHub](https://github.com/ufal/multilexnorm2021) and an interactive demo on [Google Colab](https://colab.research.google.com/drive/1rxpI8IlKk-D2crFqi2hdzbTBIezqgsCg?usp=sharing).
## How to use
The model was *not* fine-tuned in a standard sentence-to-sentence setting – instead, it was tailored to the token-to-token definition of MultiLexNorm data. Please refer to [**the interactive demo on Colab notebook**](https://colab.research.google.com/drive/1rxpI8IlKk-D2crFqi2hdzbTBIezqgsCg?usp=sharing) to learn how to use these models.
## How to cite
```bibtex
@inproceedings{wnut-ufal,
title= "{ÚFAL} at {MultiLexNorm} 2021: Improving Multilingual Lexical Normalization by Fine-tuning {ByT5}",
author = "Samuel, David and Straka, Milan",
booktitle = "Proceedings of the 7th Workshop on Noisy User-generated Text (W-NUT 2021)",
year = "2021",
publisher = "Association for Computational Linguistics",
address = "Punta Cana, Dominican Republic"
}
```
## ByT5 - Small
ByT5 is a tokenizer-free version of [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and generally follows the architecture of [MT5](https://huggingface.co/google/mt5-small).
ByT5 was only pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) excluding any supervised training with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is useable on a downstream task.
ByT5 works especially well on noisy text data, *e.g.*, `google/byt5-small` significantly outperforms [mt5-small](https://huggingface.co/google/mt5-small) on [TweetQA](https://arxiv.org/abs/1907.06292).
Paper: [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626)
Authors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel*
|
unicamp-dl/ptt5-large-t5-vocab | c213b7615a0ecd776dfe6f2d95fcaca06fd03647 | 2021-06-23T14:32:15.000Z | [
"pytorch",
"tf",
"t5",
"text2text-generation",
"pt",
"dataset:brWaC",
"transformers",
"tensorflow",
"pt-br",
"license:mit",
"autotrain_compatible"
]
| text2text-generation | false | unicamp-dl | null | unicamp-dl/ptt5-large-t5-vocab | 15 | null | transformers | 9,578 | ---
language: pt
license: mit
tags:
- t5
- pytorch
- tensorflow
- pt
- pt-br
datasets:
- brWaC
widget:
- text: "Texto de exemplo em português"
inference: false
---
# Portuguese T5 (aka "PTT5")
## Introduction
PTT5 is a T5 model pretrained on the BrWaC corpus, a large collection of web pages in Portuguese, improving T5's performance on Portuguese sentence similarity and entailment tasks. It's available in three sizes (small, base and large) and two vocabularies (Google's T5 original and ours, trained on Portuguese Wikipedia).
For further information or requests, please go to [PTT5 repository](https://github.com/unicamp-dl/PTT5).
## Available models
| Model | Size | #Params | Vocabulary |
| :-: | :-: | :-: | :-: |
| [unicamp-dl/ptt5-small-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-small-t5-vocab) | small | 60M | Google's T5 |
| [unicamp-dl/ptt5-base-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-base-t5-vocab) | base | 220M | Google's T5 |
| [unicamp-dl/ptt5-large-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-large-t5-vocab) | large | 740M | Google's T5 |
| [unicamp-dl/ptt5-small-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-small-portuguese-vocab) | small | 60M | Portuguese |
| **[unicamp-dl/ptt5-base-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-base-portuguese-vocab)** **(Recommended)** | **base** | **220M** | **Portuguese** |
| [unicamp-dl/ptt5-large-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-large-portuguese-vocab) | large | 740M | Portuguese |
## Usage
```python
# Tokenizer
from transformers import T5Tokenizer
# PyTorch (bare model, baremodel + language modeling head)
from transformers import T5Model, T5ForConditionalGeneration
# Tensorflow (bare model, baremodel + language modeling head)
from transformers import TFT5Model, TFT5ForConditionalGeneration
model_name = 'unicamp-dl/ptt5-base-portuguese-vocab'
tokenizer = T5Tokenizer.from_pretrained(model_name)
# PyTorch
model_pt = T5ForConditionalGeneration.from_pretrained(model_name)
# TensorFlow
model_tf = TFT5ForConditionalGeneration.from_pretrained(model_name)
```
# Citation
If you use PTT5, please cite:
@article{ptt5_2020,
title={PTT5: Pretraining and validating the T5 model on Brazilian Portuguese data},
author={Carmo, Diedre and Piau, Marcos and Campiotti, Israel and Nogueira, Rodrigo and Lotufo, Roberto},
journal={arXiv preprint arXiv:2008.09144},
year={2020}
}
|
vasudevgupta/bigbird-pegasus-large-pubmed | 5cb34a36cccf14bb7bed607bade700d74e923fc2 | 2021-05-04T11:12:55.000Z | [
"pytorch",
"bigbird_pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | vasudevgupta | null | vasudevgupta/bigbird-pegasus-large-pubmed | 15 | null | transformers | 9,579 | Moved here: https://huggingface.co/google/bigbird-pegasus-large-pubmed |
vukpetar/trocr-small-photomath | daa6f7cd6b80a9040ddb2ca4f15061652d2068cc | 2021-12-27T19:41:43.000Z | [
"pytorch",
"vision-encoder-decoder",
"arxiv:2109.10282",
"transformers"
]
| null | false | vukpetar | null | vukpetar/trocr-small-photomath | 15 | null | transformers | 9,580 | ## TrOCR (small-sized model, fine-tuned on Synthetic Math Expression Dataset)
TrOCR model fine-tuned on the Synthetic Math Expression Dataset. It was introduced in the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Li et al. and first released in [this repository](https://github.com/microsoft/unilm/tree/master/trocr).
Disclaimer: The team releasing TrOCR did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The TrOCR model is an encoder-decoder model, consisting of an image Transformer as encoder, and a text Transformer as decoder. The image encoder was initialized from the weights of BEiT, while the text decoder was initialized from the weights of RoBERTa.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. Next, the Transformer text decoder autoregressively generates tokens.
## Intended uses & limitations
You can use the raw model for optical character recognition (OCR) on single text-line images. See the model hub to look for fine-tuned versions on a task that interests you.
## How to use
Here is how to use this model in PyTorch:
```python
from transformers import VisionEncoderDecoderModel, AutoFeatureExtractor, AutoTokenizer
from PIL import Image
import requests
# load image from the IAM database
url = 'https://drive.google.com/uc?export=view&id=15dUjO44YDe1Agw_Qi8MyODRHpUFaCFw-'
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
feature_extractor = AutoFeatureExtractor.from_pretrained('vukpetar/trocr-small-photomath')
tokenizer = AutoTokenizer.from_pretrained("vukpetar/trocr-small-photomath")
model = VisionEncoderDecoderModel.from_pretrained('vukpetar/trocr-small-photomath')
pixel_values = feature_extractor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## BibTeX entry and citation info
@misc{li2021trocr,
title={TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models},
author={Minghao Li and Tengchao Lv and Lei Cui and Yijuan Lu and Dinei Florencio and Cha Zhang and Zhoujun Li and Furu Wei},
year={2021},
eprint={2109.10282},
archivePrefix={arXiv},
primaryClass={cs.CL}
} |
yazdipour/text-to-sparql-t5-small | f485542939bc21807227d766ecbb8e47007c989d | 2021-10-19T11:17:46.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | yazdipour | null | yazdipour/text-to-sparql-t5-small | 15 | null | transformers | 9,581 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
metrics:
- f1
model-index:
- name: text-to-sparql-t5-small-2021-10-19_10-17_lastDS
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
metrics:
- name: F1
type: f1
value: 0.3129461705684662
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text-to-sparql-t5-small-2021-10-19_10-17_lastDS
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2335
- Gen Len: 19.0
- P: 0.5580
- R: 0.0884
- F1: 0.3129
- Score: 5.9585
- Bleu-precisions: [90.11303396628615, 80.34125695971072, 73.81487011728768, 69.48796722990271]
- Bleu-bp: 0.0763
## Model description
More information needed
## Intended uses & limitations
More information needed
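One plausible inference sketch (illustrative; the expected input formatting is not documented on this card, so the plain question below is an assumption):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Sketch: generate a SPARQL query from a natural-language question.
tokenizer = AutoTokenizer.from_pretrained("yazdipour/text-to-sparql-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("yazdipour/text-to-sparql-t5-small")

question = "Who is the author of Le Petit Prince?"
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```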
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | P | R | F1 | Score | Bleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:------:|:------:|:----------------------------------------------------------------------------:|:-------:|
| 0.3166 | 1.0 | 4807 | 0.2335 | 19.0 | 0.5580 | 0.0884 | 0.3129 | 5.9585 | [90.11303396628615, 80.34125695971072, 73.81487011728768, 69.48796722990271] | 0.0763 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
yuvraj/xSumm | 6403af6f8f4eaf246bc94eef9d4ec1df88e2eca9 | 2020-12-11T22:05:01.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"transformers",
"summarization",
"extreme summarization",
"autotrain_compatible"
]
| summarization | false | yuvraj | null | yuvraj/xSumm | 15 | null | transformers | 9,582 | ---
language: "en"
tags:
- summarization
- extreme summarization
---
## Model description
BartForConditionalGeneration model for extreme summarization: creates a one-line abstractive summary of a given article
## How to use
PyTorch model available
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline
tokenizer = AutoTokenizer.from_pretrained("yuvraj/xSumm")
model = AutoModelWithLMHead.from_pretrained("yuvraj/xSumm")
xsumm = pipeline('summarization', model=model, tokenizer=tokenizer)
xsumm("<text to be summarized>")
```
## Limitations and bias
Trained on a small fraction of the xsumm training dataset
|
zhiheng-huang/bert-large-uncased-whole-word-masking-embedding-relative-key-query | 8e5156c80b48db5fbe0868ca18d4e4e462a896b0 | 2021-05-20T09:48:50.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | zhiheng-huang | null | zhiheng-huang/bert-large-uncased-whole-word-masking-embedding-relative-key-query | 15 | null | transformers | 9,583 | Entry not found |
Davlan/xlm-roberta-base-masakhaner | 643ee144abafa9c5fbe5f71f25d8a0118b6344a3 | 2022-02-25T15:23:22.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"am",
"ha",
"ig",
"rw",
"lg",
"luo",
"pcm",
"sw",
"wo",
"yo",
"multilingual",
"dataset:masakhaner",
"arxiv:2103.11811",
"transformers",
"autotrain_compatible"
]
| token-classification | false | Davlan | null | Davlan/xlm-roberta-base-masakhaner | 15 | null | transformers | 9,584 |
---
language:
- am
- ha
- ig
- rw
- lg
- luo
- pcm
- sw
- wo
- yo
- multilingual
datasets:
- masakhaner
---
# xlm-roberta-base-masakhaner
## Model description
**xlm-roberta-base-masakhaner** is the first **Named Entity Recognition** model for 10 African languages (Amharic, Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) based on a fine-tuned XLM-RoBERTa base model. It achieves **state-of-the-art performance** for the NER task. It has been trained to recognize four types of entities: dates & times (DATE), location (LOC), organizations (ORG), and person (PER).
Specifically, this model is an *xlm-roberta-base* model that was fine-tuned on an aggregation of African language datasets obtained from the Masakhane [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for NER.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("Davlan/xlm-roberta-base-masakhaner")
model = AutoModelForTokenClassification.from_pretrained("Davlan/xlm-roberta-base-masakhaner")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Emir of Kano turban Zhang wey don spend 18 years for Nigeria"
ner_results = nlp(example)
print(ner_results)
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on 10 African NER datasets (Amharic, Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) from the Masakhane [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset
The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
## Training procedure
This model was trained on a single NVIDIA V100 GPU with recommended hyperparameters from the [original MasakhaNER paper](https://arxiv.org/abs/2103.11811) which trained & evaluated the model on MasakhaNER corpus.
### BibTeX entry and citation info
```
@article{adelani21tacl,
title = {Masakha{NER}: Named Entity Recognition for African Languages},
author = {David Ifeoluwa Adelani and Jade Abbott and Graham Neubig and Daniel D'souza and Julia Kreutzer and Constantine Lignos and Chester Palen-Michel and Happy Buzaaba and Shruti Rijhwani and Sebastian Ruder and Stephen Mayhew and Israel Abebe Azime and Shamsuddeen Muhammad and Chris Chinenye Emezue and Joyce Nakatumba-Nabende and Perez Ogayo and Anuoluwapo Aremu and Catherine Gitau and Derguene Mbaye and Jesujoba Alabi and Seid Muhie Yimam and Tajuddeen Gwadabe and Ignatius Ezeani and Rubungo Andre Niyongabo and Jonathan Mukiibi and Verrah Otiende and Iroro Orife and Davis David and Samba Ngom and Tosin Adewumi and Paul Rayson and Mofetoluwa Adeyemi and Gerald Muriuki and Emmanuel Anebi and Chiamaka Chukwuneke and Nkiruka Odu and Eric Peter Wairagala and Samuel Oyerinde and Clemencia Siro and Tobius Saul Bateesa and Temilola Oloyede and Yvonne Wambui and Victor Akinode and Deborah Nabagereka and Maurice Katusiime and Ayodele Awokoya and Mouhamadane MBOUP and Dibora Gebreyohannes and Henok Tilaye and Kelechi Nwaike and Degaga Wolde and Abdoulaye Faye and Blessing Sibanda and Orevaoghene Ahia and Bonaventure F. P. Dossou and Kelechi Ogueji and Thierno Ibrahima DIOP and Abdoulaye Diallo and Adewale Akinfaderin and Tendai Marengereke and Salomey Osei},
journal = {Transactions of the Association for Computational Linguistics (TACL)},
month = {},
url = {https://arxiv.org/abs/2103.11811},
year = {2021}
}
```
|
ghadeermobasher/BC5CDR-Chem2-imbalanced-BiomedNLP-PubMedBERT-base-uncased-abstract | a95c14a635a2938902dc6d864e6b1fb147e1faa9 | 2022-03-01T06:00:08.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC5CDR-Chem2-imbalanced-BiomedNLP-PubMedBERT-base-uncased-abstract | 15 | null | transformers | 9,585 | Entry not found |
ghadeermobasher/Model_org | ec0a976a03a68831924a915e003e3cbe8eee4ee6 | 2022-03-01T21:25:54.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/Model_org | 15 | null | transformers | 9,586 | Entry not found |
ghadeermobasher/Model_imb | e01715bfd053cf8e19121f5b09131f2e2394a1ee | 2022-03-01T21:26:51.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/Model_imb | 15 | null | transformers | 9,587 | Entry not found |
ghadeermobasher/Model_imb_1 | 1a90a396146a91d18e13ceae7a50bb42962c0250 | 2022-03-02T04:10:59.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/Model_imb_1 | 15 | null | transformers | 9,588 | Entry not found |
ghadeermobasher/Model_org_1 | 9925b3108d651f35452c969d7289aca4d75f6ab1 | 2022-03-02T04:16:51.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/Model_org_1 | 15 | null | transformers | 9,589 | Entry not found |
ghadeermobasher/Model_imb_2 | 4fdc981bc5a81f8d794ab339886b2630f0d7b089 | 2022-03-02T11:34:17.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/Model_imb_2 | 15 | null | transformers | 9,590 | Entry not found |
ghadeermobasher/Model_co_imb | d4dcc81191114c756cac56ba59b7babe3d471a85 | 2022-03-01T23:08:18.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/Model_co_imb | 15 | null | transformers | 9,591 | Entry not found |
ActivationAI/distilbert-base-uncased-finetuned-emotion | dbf4470880ff3b73f22975241cd309bdf8e2195f | 2022-03-02T03:40:08.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | ActivationAI | null | ActivationAI/distilbert-base-uncased-finetuned-emotion | 15 | null | transformers | 9,592 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.928
- name: F1
type: f1
value: 0.9280065074208208
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2128
- Accuracy: 0.928
- F1: 0.9280
## Model description
More information needed
## Intended uses & limitations
More information needed
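A possible inference sketch for the fine-tuned checkpoint (not part of the auto-generated card):
```python
from transformers import pipeline

# Sketch: predict one of the six `emotion` dataset labels for a sentence.
classifier = pipeline(
    "text-classification",
    model="ActivationAI/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I'm thrilled that my paper got accepted!"))
```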
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8151 | 1.0 | 250 | 0.3043 | 0.907 | 0.9035 |
| 0.24 | 2.0 | 500 | 0.2128 | 0.928 | 0.9280 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ghadeermobasher/BC4-original-PubmedBert | 96d1e5c0b00169c4644306ddfd91da3dbe509f69 | 2022-03-03T02:53:18.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC4-original-PubmedBert | 15 | null | transformers | 9,593 | Entry not found |
ghadeermobasher/BC4-original-PubmedBert_small | f72da94ac4f4208739edfaca46482bd43873f66f | 2022-03-02T11:07:21.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC4-original-PubmedBert_small | 15 | null | transformers | 9,594 | Entry not found |
ghadeermobasher/BC4-modified-PubmedBert_small | c7ffad75406d8918820a2a813bca1e9e6a013c60 | 2022-03-02T11:07:25.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC4-modified-PubmedBert_small | 15 | null | transformers | 9,595 | Entry not found |
ivanlau/distil-bert-uncased-finetuned-github-issues | 0d35383caff319649c8996504a0f8b5b0a33dea4 | 2022-03-04T10:16:47.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:ticket tagger",
"transformers",
"model-index"
]
| text-classification | false | ivanlau | null | ivanlau/distil-bert-uncased-finetuned-github-issues | 15 | null | transformers | 9,596 | ---
datasets:
- ticket tagger
metrics:
- accuracy
model-index:
- name: distil-bert-uncased-finetuned-github-issues
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: ticket tagger
type: ticket tagger
args: full
metrics:
- name: Accuracy
type: accuracy
value: 0.7862
---
# Model Description
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased), fine-tuned on the
[github ticket tagger dataset](https://tickettagger.blob.core.windows.net/datasets/dataset-labels-top3-30k-real.txt). It classifies issues into 3 common categories: Bug, Enhancement, Questions.
It achieves the following results on the evaluation set:
- Accuracy: 0.7862
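A usage sketch (illustrative; not part of the original card):
```python
from transformers import pipeline

# Sketch: score an issue against the three categories (Bug, Enhancement, Questions).
classifier = pipeline(
    "text-classification",
    model="ivanlau/distil-bert-uncased-finetuned-github-issues",
    return_all_scores=True,
)
print(classifier("App crashes with a NullPointerException when opening the settings page."))
```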
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-5
- train_batch_size: 16
- optimizer: AdamW with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0
- num_epochs: 5
### Codes
https://github.com/IvanLauLinTiong/IntelliLabel |
l3cube-pune/marathi-albert-v2 | 76ef3b6421baf9bb747e102310594550f4627587 | 2022-06-26T15:13:43.000Z | [
"pytorch",
"albert",
"fill-mask",
"mr",
"dataset:L3Cube-MahaCorpus",
"arxiv:2202.01159",
"transformers",
"license:cc-by-4.0",
"autotrain_compatible"
]
| fill-mask | false | l3cube-pune | null | l3cube-pune/marathi-albert-v2 | 15 | 1 | transformers | 9,597 | ---
license: cc-by-4.0
language: mr
datasets:
- L3Cube-MahaCorpus
---
## MahaAlBERT
MahaAlBERT is a Marathi AlBERT model trained on L3Cube-MahaCorpus and other publicly available Marathi monolingual datasets.
[dataset link](https://github.com/l3cube-pune/MarathiNLP)
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2202.01159)
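A fill-mask sketch (illustrative; the example sentence is our own, not from the card):
```python
from transformers import pipeline

# Sketch: masked-token prediction with the Marathi AlBERT checkpoint.
fill_mask = pipeline("fill-mask", model="l3cube-pune/marathi-albert-v2")
# "मला मराठी [MASK] आवडते." -- "I like the Marathi [MASK]."
print(fill_mask(f"मला मराठी {fill_mask.tokenizer.mask_token} आवडते."))
```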
```
@InProceedings{joshi:2022:WILDRE6,
author = {Joshi, Raviraj},
title = {L3Cube-MahaCorpus and MahaBERT: Marathi Monolingual Corpus, Marathi BERT Language Models, and Resources},
booktitle = {Proceedings of The WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference},
month = {June},
year = {2022},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {97--101}
}
``` |
timothyshi/bart-large-cnn-finetuned-booksum-chapter | 24ef62670565e5ca800a0c4365d7db48bea3f494 | 2022-03-07T05:13:01.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | timothyshi | null | timothyshi/bart-large-cnn-finetuned-booksum-chapter | 15 | 1 | transformers | 9,598 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-finetuned-booksum-chapter
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-booksum-chapter
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1373
- Rouge1: 18.1222
- Rouge2: 3.5783
- Rougel: 13.4084
- Rougelsum: 13.5832
- Gen Len: 63.5121
## Model description
More information needed
## Intended uses & limitations
More information needed
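One plausible usage sketch (illustrative; the generation settings are assumptions, not the training configuration):
```python
from transformers import pipeline

# Sketch: summarize a (long) chapter; inputs beyond BART's 1024-token limit are truncated.
summarizer = pipeline("summarization", model="timothyshi/bart-large-cnn-finetuned-booksum-chapter")
chapter_text = "..."  # replace with the chapter to summarize
summary = summarizer(chapter_text, max_length=128, min_length=32, truncation=True, do_sample=False)
print(summary[0]["summary_text"])
```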
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.5297 | 1.0 | 23094 | 3.1373 | 18.1222 | 3.5783 | 13.4084 | 13.5832 | 63.5121 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
datarpit/toy | 9ffd6ea55f01c2da71bfd7f7a3c6c5a3f5472cdb | 2022-03-10T00:06:22.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| question-answering | false | datarpit | null | datarpit/toy | 15 | null | transformers | 9,599 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: toy
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# toy
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2124
## Model description
More information needed
## Intended uses & limitations
More information needed
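A question-answering sketch (illustrative; the training data behind this checkpoint is not documented here):
```python
from transformers import pipeline

# Sketch: extractive question answering with the fine-tuned checkpoint.
qa = pipeline("question-answering", model="datarpit/toy")
result = qa(
    question="What does the pipeline return?",
    context="The pipeline returns the answer span it extracts from the provided context passage.",
)
print(result)
```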
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.4798 | 1.0 | 231 | 0.2252 |
| 0.3378 | 2.0 | 462 | 0.1777 |
| 0.1024 | 3.0 | 693 | 0.1586 |
| 0.0736 | 4.0 | 924 | 0.1664 |
| 0.1237 | 5.0 | 1155 | 0.1692 |
| 0.1049 | 6.0 | 1386 | 0.1818 |
| 0.0239 | 7.0 | 1617 | 0.2127 |
| 0.0036 | 8.0 | 1848 | 0.1888 |
| 0.0051 | 9.0 | 2079 | 0.2061 |
| 0.0003 | 10.0 | 2310 | 0.1905 |
| 0.0005 | 11.0 | 2541 | 0.2011 |
| 0.0003 | 12.0 | 2772 | 0.1928 |
| 0.0029 | 13.0 | 3003 | 0.2563 |
| 0.0002 | 14.0 | 3234 | 0.2076 |
| 0.0002 | 15.0 | 3465 | 0.1980 |
| 0.0001 | 16.0 | 3696 | 0.2013 |
| 0.0001 | 17.0 | 3927 | 0.2089 |
| 0.0001 | 18.0 | 4158 | 0.1984 |
| 0.0001 | 19.0 | 4389 | 0.2017 |
| 0.0001 | 20.0 | 4620 | 0.2013 |
| 0.0001 | 21.0 | 4851 | 0.2142 |
| 0.0001 | 22.0 | 5082 | 0.1943 |
| 0.0001 | 23.0 | 5313 | 0.2003 |
| 0.0 | 24.0 | 5544 | 0.2015 |
| 0.0001 | 25.0 | 5775 | 0.2031 |
| 0.0002 | 26.0 | 6006 | 0.2600 |
| 0.0022 | 27.0 | 6237 | 0.2269 |
| 0.0 | 28.0 | 6468 | 0.2125 |
| 0.0 | 29.0 | 6699 | 0.2172 |
| 0.0 | 30.0 | 6930 | 0.2185 |
| 0.0 | 31.0 | 7161 | 0.2004 |
| 0.0 | 32.0 | 7392 | 0.2077 |
| 0.0 | 33.0 | 7623 | 0.2333 |
| 0.0003 | 34.0 | 7854 | 0.2102 |
| 0.0 | 35.0 | 8085 | 0.2095 |
| 0.0 | 36.0 | 8316 | 0.2030 |
| 0.0 | 37.0 | 8547 | 0.2038 |
| 0.0 | 38.0 | 8778 | 0.2062 |
| 0.0 | 39.0 | 9009 | 0.2080 |
| 0.0 | 40.0 | 9240 | 0.2083 |
| 0.0 | 41.0 | 9471 | 0.2063 |
| 0.0 | 42.0 | 9702 | 0.2146 |
| 0.0 | 43.0 | 9933 | 0.2168 |
| 0.0 | 44.0 | 10164 | 0.2112 |
| 0.0 | 45.0 | 10395 | 0.2109 |
| 0.0 | 46.0 | 10626 | 0.2116 |
| 0.0 | 47.0 | 10857 | 0.2122 |
| 0.0 | 48.0 | 11088 | 0.2122 |
| 0.0 | 49.0 | 11319 | 0.2124 |
| 0.0 | 50.0 | 11550 | 0.2124 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0
- Datasets 1.18.3
- Tokenizers 0.11.6
|