Dataset schema. Each record below lists the following fields in order, separated by `|`, with `readme` holding the full model card text:

- modelId: string (4-112 chars)
- sha: string (40 chars)
- lastModified: string (24 chars)
- tags: list
- pipeline_tag: string (29 classes)
- private: bool
- author: string (2-38 chars, nullable)
- config: null
- id: string (4-112 chars)
- downloads: float64 (0-36.8M, nullable)
- likes: float64 (0-712, nullable)
- library_name: string (17 classes)
- `__index_level_0__`: int64 (0-38.5k)
- readme: string (0-186k chars)
clisi2000/distilbert-base-uncased-finetuned-clinc | 51bb574b837b4c9abe9995c92f6b7267ba6f2f33 | 2022-03-25T06:23:40.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | clisi2000 | null | clisi2000/distilbert-base-uncased-finetuned-clinc | 7 | null | transformers | 14,300 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9158064516129032
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7796
- Accuracy: 0.9158
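A minimal inference sketch (the utterance below is made up; the intent labels come from the model's config):

```python
from transformers import pipeline

# Hypothetical example utterance; the CLINC OOS intent labels are read from the model config.
classifier = pipeline("text-classification", model="clisi2000/distilbert-base-uncased-finetuned-clinc")
print(classifier("Please set an alarm for 7 am tomorrow."))
```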
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2883 | 1.0 | 318 | 3.2778 | 0.7390 |
| 2.6185 | 2.0 | 636 | 1.8740 | 0.8232 |
| 1.5423 | 3.0 | 954 | 1.1579 | 0.8890 |
| 1.0131 | 4.0 | 1272 | 0.8629 | 0.9077 |
| 0.7964 | 5.0 | 1590 | 0.7796 | 0.9158 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.2+cpu
- Datasets 1.18.4
- Tokenizers 0.10.3
|
sanchit-gandhi/wav2vec2-2-rnd-regularisation | 12b2319967b12a4540b5bd05a40af34b85f2134d | 2022-03-26T06:45:45.000Z | [
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"dataset:librispeech_asr",
"transformers",
"generated_from_trainer",
"model-index"
]
| automatic-speech-recognition | false | sanchit-gandhi | null | sanchit-gandhi/wav2vec2-2-rnd-regularisation | 7 | null | transformers | 14,301 | ---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6977
- Wer: 0.1231
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 25.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 6.1467 | 1.68 | 1500 | 6.0558 | 1.3243 |
| 5.4388 | 3.36 | 3000 | 5.4711 | 1.5604 |
| 3.3434 | 5.04 | 4500 | 3.4808 | 0.7461 |
| 1.5259 | 6.73 | 6000 | 2.1931 | 0.3430 |
| 1.4285 | 8.41 | 7500 | 1.5883 | 0.2784 |
| 1.0687 | 10.09 | 9000 | 1.2481 | 0.2069 |
| 0.6425 | 11.77 | 10500 | 1.0507 | 0.1758 |
| 0.7147 | 13.45 | 12000 | 0.9397 | 0.1584 |
| 0.5083 | 15.13 | 13500 | 0.8452 | 0.1453 |
| 0.4287 | 16.82 | 15000 | 0.7915 | 0.1388 |
| 0.3499 | 18.5 | 16500 | 0.7477 | 0.1315 |
| 0.3733 | 20.18 | 18000 | 0.7307 | 0.1287 |
| 0.2609 | 21.86 | 19500 | 0.7061 | 0.1263 |
| 0.2602 | 23.54 | 21000 | 0.6977 | 0.1231 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
ScandinavianMrT/gpt2_ONION_prefinetune_3.0 | a2168168102ccbb4ac61ec2252476808bc4b64ae | 2022-03-23T15:54:46.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | ScandinavianMrT | null | ScandinavianMrT/gpt2_ONION_prefinetune_3.0 | 7 | null | transformers | 14,302 | Entry not found |
shahrukhx01/gbert-hasoc-german-2019 | f183c4cd25c5e6a0d89f0550b2fe7c15b03d4975 | 2022-03-23T18:18:56.000Z | [
"pytorch",
"bert",
"text-classification",
"de",
"transformers",
"hate-speech-classification"
]
| text-classification | false | shahrukhx01 | null | shahrukhx01/gbert-hasoc-german-2019 | 7 | null | transformers | 14,303 | ---
language: "de"
tags:
- hate-speech-classification
widget:
- text: "Das ist der absolute Gipfel! Lächerliche 2,5 Jahre Haft für einen extremst sadistischen Mord. Ich fasse es nicht. Das sitzt der Killer auf der linken Arschbacke ab und lacht sich dabei kaputt. Unsere Justiz ist nur noch zum Kotzen."
- text: "Das ist der absolute Gipfel! Lächerliche 2,5 Jahre Haft für einen extremst sadistischen Mord. Ich fasse es nicht. Das sitzt der Killer auf der linken Arschbacke ab und lacht sich dabei kaputt. Unsere Justiz ist nur noch zum Kotzen."
---
# Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("shahrukhx01/gbert-hasoc-german-2019")
model = AutoModelForSequenceClassification.from_pretrained("shahrukhx01/gbert-hasoc-german-2019")
```
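A minimal end-to-end classification sketch (the input reuses part of the widget example from this card; the label names come from the model's config and are not documented here):

```python
from transformers import pipeline

# Label names (e.g. hate vs. non-hate) are read from the model config and printed as returned.
classifier = pipeline("text-classification", model="shahrukhx01/gbert-hasoc-german-2019")
print(classifier("Unsere Justiz ist nur noch zum Kotzen."))
```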
# Dataset
```bibtext
@inproceedings{10.1145/3368567.3368584,
author = {Mandl, Thomas and Modha, Sandip and Majumder, Prasenjit and Patel, Daksh and Dave, Mohana and Mandlia, Chintak and Patel, Aditya},
title = {Overview of the HASOC Track at FIRE 2019: Hate Speech and Offensive Content Identification in Indo-European Languages},
year = {2019},
isbn = {9781450377508},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3368567.3368584},
doi = {10.1145/3368567.3368584},
abstract = {The identification of Hate Speech in Social Media is of great importance and receives much attention in the text classification community. There is a huge demand for research for languages other than English. The HASOC track intends to stimulate development in Hate Speech for Hindi, German and English. Three datasets were developed from Twitter and Facebook and made available. Binary classification and more fine-grained subclasses were offered in 3 subtasks. For all subtasks, 321 experiments were submitted. The approaches used most often were LSTM networks processing word embedding input. The performance of the best system for identification of Hate Speech for English, Hindi, and German was a Marco-F1 score of 0.78, 0.81 and 0.61, respectively.},
booktitle = {Proceedings of the 11th Forum for Information Retrieval Evaluation},
pages = {14–17},
numpages = {4},
keywords = {Text Classification, Hate Speech, Evaluation, Deep Learning},
location = {Kolkata, India},
series = {FIRE '19}
}
```
---
license: mit
---
|
tiennvcs/distilbert-base-uncased-finetuned-ner | d3b6e52068d367d457a2040e8a992e9ab9a142cc | 2022-03-24T07:29:26.000Z | [
"pytorch",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | tiennvcs | null | tiennvcs/distilbert-base-uncased-finetuned-ner | 7 | null | transformers | 14,304 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9264836138175376
- name: Recall
type: recall
value: 0.9361226087929299
- name: F1
type: f1
value: 0.9312781703856213
- name: Accuracy
type: accuracy
value: 0.9836529143565221
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0616
- Precision: 0.9265
- Recall: 0.9361
- F1: 0.9313
- Accuracy: 0.9837
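A minimal token-classification sketch (the sentence is made up; entity labels follow the CoNLL-2003 tag set the model was fine-tuned on):

```python
from transformers import pipeline

# aggregation_strategy="simple" merges word-piece predictions into whole entity spans.
ner = pipeline(
    "token-classification",
    model="tiennvcs/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```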
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2437 | 1.0 | 878 | 0.0745 | 0.9144 | 0.9173 | 0.9158 | 0.9799 |
| 0.0518 | 2.0 | 1756 | 0.0621 | 0.9177 | 0.9353 | 0.9264 | 0.9826 |
| 0.03 | 3.0 | 2634 | 0.0616 | 0.9265 | 0.9361 | 0.9313 | 0.9837 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
ScandinavianMrT/gpt2_prefinetune_SARC_2.0 | 348938280764c18065b8b55b1e4d54defdf6417a | 2022-03-28T08:36:28.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | ScandinavianMrT | null | ScandinavianMrT/gpt2_prefinetune_SARC_2.0 | 7 | null | transformers | 14,305 | Entry not found |
Flag/joebiden | 713e0686015661fdaa02a16631f0cd75375e63a9 | 2022-03-25T22:10:02.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | Flag | null | Flag/joebiden | 7 | null | transformers | 14,306 | Entry not found |
SergeyKamenshchikov/nsi_tuned | d47d412b02ef7dd44fb70776a62ab9614a219393 | 2022-03-27T13:58:13.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | SergeyKamenshchikov | null | SergeyKamenshchikov/nsi_tuned | 7 | null | transformers | 14,307 | Entry not found |
mrm8488/t5-base-iterater | 309fe4132155899ee2a228bddfbcd1e645555114 | 2022-03-28T11:00:41.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"en",
"dataset:wanyu/IteraTeR_full_sent",
"transformers",
"generated_from_trainer",
"IteraTeR",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | mrm8488 | null | mrm8488/t5-base-iterater | 7 | 1 | transformers | 14,308 | ---
license: apache-2.0
language:
- en
datasets:
- wanyu/IteraTeR_full_sent
tags:
- generated_from_trainer
- IteraTeR
widget:
- text: "<clarity> Delay-based schemes have the potential to resolve this last packet problem by scheduling the link based on the delay for the packet has encountered."
model-index:
- name: t5-base-iterater
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# T5 (base) fine-tuned on IteraTeR
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an [IteraTeR](https://huggingface.co/datasets/wanyu/IteraTeR_full_sent) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2580
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.3286 | 0.09 | 2000 | 0.3010 |
| 0.3194 | 0.18 | 4000 | 0.2872 |
| 0.3208 | 0.27 | 6000 | 0.2792 |
| 0.3091 | 0.36 | 8000 | 0.2731 |
| 0.3164 | 0.45 | 10000 | 0.2678 |
| 0.2941 | 0.54 | 12000 | 0.2682 |
| 0.2981 | 0.63 | 14000 | 0.2696 |
| 0.2975 | 0.72 | 16000 | 0.2643 |
| 0.3109 | 0.81 | 18000 | 0.2624 |
| 0.2965 | 0.9 | 20000 | 0.2648 |
| 0.3053 | 0.99 | 22000 | 0.2627 |
| 0.2779 | 1.08 | 24000 | 0.2632 |
| 0.2692 | 1.17 | 26000 | 0.2608 |
| 0.2755 | 1.26 | 28000 | 0.2600 |
| 0.2771 | 1.35 | 30000 | 0.2584 |
| 0.2774 | 1.44 | 32000 | 0.2609 |
| 0.2976 | 1.53 | 34000 | 0.2593 |
| 0.2646 | 1.62 | 36000 | 0.2616 |
| 0.2705 | 1.71 | 38000 | 0.2574 |
| 0.2714 | 1.8 | 40000 | 0.2577 |
| 0.2857 | 1.9 | 42000 | 0.2576 |
| 0.2832 | 1.99 | 44000 | 0.2580 |
### How to use
```py
from transformers import T5ForConditionalGeneration, T5TokenizerFast
MODEL_CKPT = 'mrm8488/t5-base-iterater'
tokenizer = T5TokenizerFast.from_pretrained(MODEL_CKPT)
model = T5ForConditionalGeneration.from_pretrained(MODEL_CKPT)
def predict(intent, text):
input_text = f"<{intent}> {text}"
features = tokenizer([input_text], return_tensors='pt')
output = model.generate(input_ids=features['input_ids'],
attention_mask=features['attention_mask'], max_length=128, num_beams=8)
return tokenizer.decode(output[0], skip_special_tokens=True)
text = "Delay-based schemes have the potential to resolve this last packet problem by scheduling the link based on the delay for the packet has encountered."
intent = "clarity"
predict(intent, text)
# Delay-based schemes have the potential to resolve this last packet problem by scheduling the link based on the delay the packet has encountered.
```
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
hackathon-pln-es/es_text_neutralizer | 6b9055aad684a42b14248badd06ad9d2ec7603aa | 2022-04-01T12:38:43.000Z | [
"pytorch",
"t5",
"text2text-generation",
"es",
"dataset:hackathon-pln-es/neutral-es",
"transformers",
"Text2Text Generation",
"Inclusive Language",
"Text Neutralization",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | hackathon-pln-es | null | hackathon-pln-es/es_text_neutralizer | 7 | 5 | transformers | 14,309 | ---
language:
- es
license: apache-2.0
tags:
- Text2Text Generation
- Inclusive Language
- Text Neutralization
- pytorch
datasets:
- hackathon-pln-es/neutral-es
metrics:
- sacrebleu
model-index:
- name: es_text_neutralizer
results:
- task:
type: Text2Text Generation
name: Neutralization of texts in Spanish
dataset:
type: hackathon-pln-es/neutral-es
name: neutral-es
metrics:
- type: sacrebleu
value: 0.96
name: sacrebleu # Optional. Example: Test WER
- type: bertscore # Required. Example: wer
value: 0.98
name: BertScoreF1 # Optional. Example: Test WER
- type: DiffBleu # Required. Example: wer
value: 0.35
name: DiffBleu # Optional. Example: Test WER
---
## Model objective
Spanish is a beautiful language with many ways of referring to people while neutralizing gender, using resources that already exist within the language. For example, one would say *Todas las personas asistentes* instead of *Todos los asistentes*, which is a more inclusive way of talking about people. The purpose of this collaboratively trained model is to create a solution that reinforces the UN objective of gender equality.
Given any input, our model will generate a gender-neutral sentence, correcting any non-inclusive expressions or words.
It is a straightforward and fast solution that creates a positive impact on the contemporary social panorama.
<p align="center">
<img src="https://upload.wikimedia.org/wikipedia/commons/2/29/Gender_equality_symbol_%28clipart%29.png" width="250"/>
</p>
By using gender-inclusive models we can help reduce gender bias in a language corpus, for instance by adding data augmentation and creating additional examples.
## Training and evaluation data
The data used for training the model was created from a compilation of sources: a series of guidelines and manuals on the use of non-sexist language issued by the Spanish Ministry of Health, Social Services and Equality, as stipulated in this [document](https://www.inmujeres.gob.es/servRecursos/formacion/GuiasLengNoSexista/docs/Guiaslenguajenosexista_.pdf).
### Compiled sources
[Guía para un discurso igualitario en la universidad de alicante](https://ieg.ua.es/es/documentos/normativasobreigualdad/guia-para-un-discurso-igualitario-en-la-ua.pdf)
[Guía UC de Comunicación en Igualdad](<https://web.unican.es/unidades/igualdad/SiteAssets/igualdad/comunicacion-en-igualdad/guia%20comunicacion%20igualdad%20(web).pdf>)
[Buenas prácticas para el tratamiento del lenguaje en igualdad](https://e-archivo.uc3m.es/handle/10016/22811)
[Guía del lenguaje no sexista de la Universidad de Castilla-La Mancha](https://unidadigualdad.ugr.es/page/guiialenguajeuniversitarionosexista_universidaddecastillalamancha/!)
[Guía de Lenguaje Para el Ámbito Educativo](https://www.educacionyfp.gob.es/va/dam/jcr:8ce318fd-c8ff-4ad2-97b4-7318c27d1682/guialenguajeambitoeducativo.pdf)
[Guía para un uso igualitario y no sexista del lenguaje y dela imagen en la Universidad de Jaén](https://www.ujaen.es/servicios/uigualdad/sites/servicio_uigualdad/files/uploads/Guia_lenguaje_no_sexista.pdf)
[Guía de uso no sexista del vocabulario español](https://www.um.es/documents/2187255/2187763/guia-leng-no-sexista.pdf/d5b22eb9-b2e4-4f4b-82aa-8a129cdc83e3)
[Guía para el uso no sexista de la lengua castellana y de imágnes en la UPV/EHV](https://www.ehu.eus/documents/1734204/1884196/Guia_uso_no_sexista_EHU.pdf)
[Guía de lenguaje no sexista UNED](http://portal.uned.es/pls/portal/docs/PAGE/UNED_MAIN/LAUNIVERSIDAD/VICERRECTORADOS/GERENCIA/OFICINA_IGUALDAD/CONCEPTOS%20BASICOS/GUIA_LENGUAJE.PDF)
[COMUNICACIÓN AMBIENTAL CON PERSPECTIVA DE GÉNERO](https://cima.cantabria.es/documents/5710649/5729124/COMUNICACI%C3%93N+AMBIENTAL+CON+PERSPECTIVA+DE+G%C3%89NERO.pdf/ccc18730-53e3-35b9-731e-b4c43339254b)
[Recomendaciones para la utilización de lenguaje no sexista](https://www.csic.es/sites/default/files/guia_para_un_uso_no_sexista_de_la_lengua_adoptada_por_csic2.pdf)
[Estudio sobre lenguaje y contenido sexista en la Web](https://www.mujeresenred.net/IMG/pdf/Estudio_paginas_web_T-incluye_ok.pdf)
[Nombra.en.red. En femenino y en masculino](https://www.inmujeres.gob.es/areasTematicas/educacion/publicaciones/serieLenguaje/docs/Nombra_en_red.pdf)
## Model specs
This model is a fine-tuned version of [spanish-t5-small](https://huggingface.co/flax-community/spanish-t5-small) on the data described below.
It achieves the following results on the evaluation set:
- eval_bleu: 93.8347
- eval_f1: 0.9904
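Since the card does not include a usage snippet, here is a minimal inference sketch assuming the standard seq2seq API (the example sentence and the generation settings are our own choices, not the authors'):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("hackathon-pln-es/es_text_neutralizer")
model = AutoModelForSeq2SeqLM.from_pretrained("hackathon-pln-es/es_text_neutralizer")

# Hypothetical input sentence; beam size and max_length are assumptions, not documented values.
text = "Los alumnos deben entregar sus trabajos a los profesores."
inputs = tokenizer([text], return_tensors="pt")
output = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```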
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-04
- train_batch_size: 32
- seed: 42
- num_epochs: 10
- weight_decay: 0.01
## Metrics
For training, we used both BLEU (the sacrebleu implementation in HF) and BertScore. The first, a standard in machine translation, was added to ensure the robustness of the newly generated data, while the second is kept to preserve the expected semantic similarity.
However, given the actual use case, we expect generated segments to be very close to both the input segments and the label segments used in training. As an example, consider the following:
inputSegment = 'De acuerdo con las informaciones anteriores , las alumnas se han quejado de la actitud de los profesores en los exámenes finales. Los representantes estudiantiles son los alumnos Juanju y Javi.'
expectedOutput (label) = 'De acuerdo con las informaciones anteriores, el alumnado se ha quejado de la actitud del profesorado en los exámenes finales. Los representantes estudiantiles son los alumnos Juanju y Javi.'
actualOutput = 'De acuerdo con las informaciones anteriores, el alumnado se ha quejado de la actitud del profesorado en los exámenes finales. Los representantes estudiantiles son el alumnado Juanju y Javi.'
As you can see, the segments are quite similar. So, instead of measuring plain BLEU or BertScore here, we propose an alternative metric, DiffBleu:
$$DiffBleu = BLEU(actualOutput - inputSegment, labels - inputSegment)$$
where the minuses denote set difference. This way, we also evaluate DiffBleu after the model has been trained.
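A rough sketch of how DiffBleu could be computed (this is our reading of the formula above, with whitespace tokenization as a simplifying assumption; it is not the authors' exact implementation):

```python
from sacrebleu import corpus_bleu

def diff_bleu(input_segment: str, actual_output: str, label: str) -> float:
    # Drop tokens shared with the input so BLEU only scores the edited parts.
    input_tokens = set(input_segment.split())
    pred_diff = " ".join(t for t in actual_output.split() if t not in input_tokens)
    label_diff = " ".join(t for t in label.split() if t not in input_tokens)
    return corpus_bleu([pred_diff], [[label_diff]]).score
```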
## Team Members
- Fernando Velasco [(fermaat)](https://huggingface.co/fermaat)
- Cibeles Redondo [(CibelesR)](https://huggingface.co/CibelesR)
- Juan Julian Cea [(Juanju)](https://huggingface.co/Juanju)
- Magdalena Kujalowicz [(MacadellaCosta)](https://huggingface.co/MacadellaCosta)
- Javier Blasco [(javiblasco)](https://huggingface.co/javiblasco)
Enjoy! |
DrishtiSharma/wav2vec2-base-finetuned-sentiment-mesd-v9 | 4c98d2b04b8fe342151f614800e7abc4fb63304c | 2022-03-29T00:52:52.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| audio-classification | false | DrishtiSharma | null | DrishtiSharma/wav2vec2-base-finetuned-sentiment-mesd-v9 | 7 | null | transformers | 14,310 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-sentiment-mesd-v9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-sentiment-mesd-v9
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3500
- Accuracy: 0.9154
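A minimal audio-classification sketch (the file path is a placeholder; wav2vec2-base models expect 16kHz mono audio):

```python
from transformers import pipeline

# "speech_sample.wav" is a placeholder path to a 16kHz mono recording.
classifier = pipeline(
    "audio-classification",
    model="DrishtiSharma/wav2vec2-base-finetuned-sentiment-mesd-v9",
)
print(classifier("speech_sample.wav"))
```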
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 40
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.86 | 3 | 1.7825 | 0.1846 |
| 1.9553 | 1.86 | 6 | 1.7212 | 0.4308 |
| 1.9553 | 2.86 | 9 | 1.6164 | 0.3769 |
| 2.002 | 3.86 | 12 | 1.4904 | 0.3769 |
| 1.6191 | 4.86 | 15 | 1.4426 | 0.4385 |
| 1.6191 | 5.86 | 18 | 1.3516 | 0.5231 |
| 1.6209 | 6.86 | 21 | 1.2176 | 0.5538 |
| 1.6209 | 7.86 | 24 | 1.1683 | 0.5692 |
| 1.371 | 8.86 | 27 | 1.0885 | 0.5923 |
| 1.1568 | 9.86 | 30 | 1.0152 | 0.6385 |
| 1.1568 | 10.86 | 33 | 0.9289 | 0.6385 |
| 1.1023 | 11.86 | 36 | 0.9141 | 0.6308 |
| 1.1023 | 12.86 | 39 | 0.8526 | 0.6462 |
| 0.9448 | 13.86 | 42 | 0.8420 | 0.6769 |
| 0.7972 | 14.86 | 45 | 0.7976 | 0.6692 |
| 0.7972 | 15.86 | 48 | 0.8192 | 0.7308 |
| 0.7793 | 16.86 | 51 | 0.7108 | 0.7615 |
| 0.7793 | 17.86 | 54 | 0.6712 | 0.7769 |
| 0.6468 | 18.86 | 57 | 0.6684 | 0.7923 |
| 0.5083 | 19.86 | 60 | 0.6922 | 0.7385 |
| 0.5083 | 20.86 | 63 | 0.6148 | 0.7923 |
| 0.4988 | 21.86 | 66 | 0.5846 | 0.7923 |
| 0.4988 | 22.86 | 69 | 0.6050 | 0.8154 |
| 0.4123 | 23.86 | 72 | 0.5506 | 0.7846 |
| 0.3511 | 24.86 | 75 | 0.6095 | 0.7846 |
| 0.3511 | 25.86 | 78 | 0.5916 | 0.8154 |
| 0.3268 | 26.86 | 81 | 0.5912 | 0.8077 |
| 0.3268 | 27.86 | 84 | 0.5142 | 0.8538 |
| 0.3036 | 28.86 | 87 | 0.5492 | 0.8077 |
| 0.3066 | 29.86 | 90 | 0.6007 | 0.8231 |
| 0.3066 | 30.86 | 93 | 0.5748 | 0.8231 |
| 0.2538 | 31.86 | 96 | 0.6027 | 0.7692 |
| 0.2538 | 32.86 | 99 | 0.6979 | 0.7462 |
| 0.2281 | 33.86 | 102 | 0.7002 | 0.7615 |
| 0.2183 | 34.86 | 105 | 0.6650 | 0.7769 |
| 0.2183 | 35.86 | 108 | 0.5192 | 0.8462 |
| 0.2202 | 36.86 | 111 | 0.5389 | 0.8308 |
| 0.2202 | 37.86 | 114 | 0.5050 | 0.8385 |
| 0.1906 | 38.86 | 117 | 0.5722 | 0.7769 |
| 0.154 | 39.86 | 120 | 0.5239 | 0.8308 |
| 0.154 | 40.86 | 123 | 0.4448 | 0.8615 |
| 0.1474 | 41.86 | 126 | 0.4623 | 0.8615 |
| 0.1474 | 42.86 | 129 | 0.4282 | 0.8615 |
| 0.1345 | 43.86 | 132 | 0.5087 | 0.8615 |
| 0.1567 | 44.86 | 135 | 0.4859 | 0.8385 |
| 0.1567 | 45.86 | 138 | 0.6603 | 0.8077 |
| 0.1731 | 46.86 | 141 | 0.5379 | 0.8385 |
| 0.1731 | 47.86 | 144 | 0.8666 | 0.7538 |
| 0.1606 | 48.86 | 147 | 0.7518 | 0.8 |
| 0.1484 | 49.86 | 150 | 0.5986 | 0.8385 |
| 0.1484 | 50.86 | 153 | 0.6368 | 0.8231 |
| 0.2256 | 51.86 | 156 | 0.4639 | 0.8692 |
| 0.2256 | 52.86 | 159 | 0.5533 | 0.8462 |
| 0.1178 | 53.86 | 162 | 0.5038 | 0.8615 |
| 0.0815 | 54.86 | 165 | 0.5052 | 0.8692 |
| 0.0815 | 55.86 | 168 | 0.4337 | 0.8846 |
| 0.0998 | 56.86 | 171 | 0.4422 | 0.8769 |
| 0.0998 | 57.86 | 174 | 0.4317 | 0.8692 |
| 0.0855 | 58.86 | 177 | 0.4025 | 0.8923 |
| 0.0962 | 59.86 | 180 | 0.4605 | 0.8769 |
| 0.0962 | 60.86 | 183 | 0.4356 | 0.8769 |
| 0.0763 | 61.86 | 186 | 0.4614 | 0.8769 |
| 0.0763 | 62.86 | 189 | 0.4382 | 0.8846 |
| 0.0902 | 63.86 | 192 | 0.4701 | 0.8692 |
| 0.0654 | 64.86 | 195 | 0.4922 | 0.8692 |
| 0.0654 | 65.86 | 198 | 0.5413 | 0.8538 |
| 0.0651 | 66.86 | 201 | 0.5759 | 0.8615 |
| 0.0651 | 67.86 | 204 | 0.4238 | 0.9 |
| 0.0822 | 68.86 | 207 | 0.3500 | 0.9154 |
| 0.0625 | 69.86 | 210 | 0.3878 | 0.8923 |
| 0.0625 | 70.86 | 213 | 0.4952 | 0.8615 |
| 0.0548 | 71.86 | 216 | 0.4544 | 0.8615 |
| 0.0548 | 72.86 | 219 | 0.5497 | 0.8769 |
| 0.054 | 73.86 | 222 | 0.4434 | 0.8846 |
| 0.0543 | 74.86 | 225 | 0.4732 | 0.8769 |
| 0.0543 | 75.86 | 228 | 0.4425 | 0.8923 |
| 0.0881 | 76.86 | 231 | 0.4788 | 0.8769 |
| 0.0881 | 77.86 | 234 | 0.5448 | 0.8769 |
| 0.061 | 78.86 | 237 | 0.4221 | 0.9077 |
| 0.0567 | 79.86 | 240 | 0.4404 | 0.8769 |
| 0.0567 | 80.86 | 243 | 0.4099 | 0.9 |
| 0.052 | 81.86 | 246 | 0.5259 | 0.8769 |
| 0.052 | 82.86 | 249 | 0.5874 | 0.8692 |
| 0.0444 | 83.86 | 252 | 0.5555 | 0.8846 |
| 0.0332 | 84.86 | 255 | 0.5156 | 0.8615 |
| 0.0332 | 85.86 | 258 | 0.4564 | 0.8615 |
| 0.0449 | 86.86 | 261 | 0.4826 | 0.8692 |
| 0.0449 | 87.86 | 264 | 0.4726 | 0.8615 |
| 0.0385 | 88.86 | 267 | 0.4206 | 0.8846 |
| 0.0356 | 89.86 | 270 | 0.4050 | 0.8769 |
| 0.0356 | 90.86 | 273 | 0.4161 | 0.8923 |
| 0.0391 | 91.86 | 276 | 0.4100 | 0.9077 |
| 0.0391 | 92.86 | 279 | 0.4047 | 0.9 |
| 0.0249 | 93.86 | 282 | 0.4044 | 0.9 |
| 0.0399 | 94.86 | 285 | 0.3968 | 0.8846 |
| 0.0399 | 95.86 | 288 | 0.3802 | 0.9 |
| 0.031 | 96.86 | 291 | 0.3689 | 0.9 |
| 0.031 | 97.86 | 294 | 0.3616 | 0.9077 |
| 0.036 | 98.86 | 297 | 0.3584 | 0.9077 |
| 0.0386 | 99.86 | 300 | 0.3574 | 0.9077 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Wende/bert-finetuned-ner-accelerate | dc42358d1015ece64958fdf3af450fffbad0022d | 2022-03-29T12:25:52.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | Wende | null | Wende/bert-finetuned-ner-accelerate | 7 | null | transformers | 14,311 | Entry not found |
anjandash/JavaBERT-small | af38e1dbef6b22553e0898e0114cc0548f183f33 | 2022-03-30T11:52:00.000Z | [
"pytorch",
"tf",
"bert",
"text-classification",
"java",
"dataset:anjandash/java-8m-methods-v1",
"transformers",
"license:mit"
]
| text-classification | false | anjandash | null | anjandash/JavaBERT-small | 7 | null | transformers | 14,312 | ---
language:
- java
license: mit
datasets:
- anjandash/java-8m-methods-v1
--- |
Finnish-NLP/t5-mini-nl8-finnish | 858db87c24cfec96771b3a3beeea9348c07deee4 | 2022-07-12T13:14:12.000Z | [
"pytorch",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"fi",
"dataset:Finnish-NLP/mc4_fi_cleaned",
"dataset:wikipedia",
"arxiv:1910.10683",
"arxiv:2002.05202",
"arxiv:2109.10686",
"transformers",
"finnish",
"t5x",
"seq2seq",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | Finnish-NLP | null | Finnish-NLP/t5-mini-nl8-finnish | 7 | null | transformers | 14,313 | ---
language:
- fi
license: apache-2.0
tags:
- finnish
- t5
- t5x
- seq2seq
datasets:
- Finnish-NLP/mc4_fi_cleaned
- wikipedia
inference: false
---
# T5-mini-nl8 for Finnish
Pretrained T5 model on Finnish language using a span-based masked language modeling (MLM) objective. T5 was introduced in
[this paper](https://arxiv.org/abs/1910.10683)
and first released at [this page](https://github.com/google-research/text-to-text-transfer-transformer).
**Note:** The Hugging Face inference widget is deactivated because this model needs a text-to-text fine-tuning on a specific downstream task to be useful in practice. As an example of a fine-tuned Finnish T5 model, you can check [Finnish-NLP/t5-small-nl24-casing-punctuation-correction](https://huggingface.co/Finnish-NLP/t5-small-nl24-casing-punctuation-correction) which has been fine-tuned to correct missing casing and punctuation for Finnish text.
## Model description
T5 is an encoder-decoder model and treats all NLP problems in a text-to-text format.
Finnish T5 is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and outputs from those texts.
More precisely, it was pretrained with the span-based masked language modeling (MLM) objective. Spans of the input sequence are masked by so-called sentinel tokens (a.k.a unique mask tokens) and the output sequence is formed as a concatenation of the same sentinel tokens and the real masked tokens. This way, the model learns an inner representation of the Finnish language.
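To make the objective concrete, a rough sketch of one span-corruption training pair is shown below (an English stand-in sentence is used purely for illustration; the model itself was pretrained on Finnish, and the sentinel names follow the standard T5 convention):

```python
# Masked spans in the input are replaced by sentinel tokens; the target lists each
# sentinel followed by the tokens it hides. English stand-in example for illustration.
input_text = "Thank you <extra_id_0> me to your party <extra_id_1> week."
target_text = "<extra_id_0> for inviting <extra_id_1> last <extra_id_2>"
```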
This model used the [T5 v1.1](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) improvements compared to the original T5 model during the pretraining:
- GEGLU activation in feed-forward hidden layer, rather than ReLU - see [here](https://arxiv.org/abs/2002.05202)
- Dropout was turned off in pretraining (quality win). Dropout should be re-enabled during fine-tuning
- Pretrained on span-based masked language modeling (MLM) objective only without mixing in the downstream tasks
- No parameter sharing between embedding and classifier layer
This model also used the "efficient" T5 architecture findings presented in [this paper](https://arxiv.org/abs/2109.10686). In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures of similar parameter count. To be more precise, model depth is defined as the number of transformer blocks that are stacked sequentially.
This model uses the [t5-efficient-mini-nl8](https://huggingface.co/google/t5-efficient-mini-nl8) architecture's layer depth which means both the encoder and the decoder have 8 transformer layers compared to the original T5 "mini" model's architecture of 4 transformer layers.
In total, this model has 72 million parameters.
## Intended uses & limitations
This model was only pretrained in a self-supervised way excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task, like text classification, unlike the Google's original T5 model. **Note:** You most likely need to fine-tune these T5 models without mixed precision so fine-tune them with full fp32 precision. You can also find more fine-tuning tips from [here](https://discuss.huggingface.co/t/t5-finetuning-tips), for example.
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("Finnish-NLP/t5-mini-nl8-finnish")
model = T5ForConditionalGeneration.from_pretrained("Finnish-NLP/t5-mini-nl8-finnish")
```
and in TensorFlow:
```python
from transformers import T5Tokenizer, TFT5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("Finnish-NLP/t5-mini-nl8-finnish")
model = T5ForConditionalGeneration.from_pretrained("Finnish-NLP/t5-mini-nl8-finnish", from_pt=True)
```
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
## Training data
This Finnish T5 model was pretrained on the combination of six datasets:
- [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).
- [wikipedia](https://huggingface.co/datasets/wikipedia) We used the Finnish subset of the wikipedia (August 2021) dataset
- [Yle Finnish News Archive 2011-2018](http://urn.fi/urn:nbn:fi:lb-2017070501)
- [Yle Finnish News Archive 2019-2020](http://urn.fi/urn:nbn:fi:lb-2021050401)
- [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001)
- [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803)
Raw datasets were automatically cleaned to filter out bad quality and non-Finnish examples. Also, a [perplexity](https://huggingface.co/course/chapter7/3#perplexity-for-language-models) score was calculated for all texts with a KenLM model which was trained with very clean Finnish texts only. This perplexity score can then be used to determine how "clean" Finnish language the text contains. Lastly, all datasets were concatenated and the top 90% perplexity score was used as a filtering threshold to filter out the worst quality 10% of texts. Together these cleaned datasets were around 76GB of text.
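The perplexity filtering step could look roughly like the sketch below (the model path and threshold are hypothetical; the actual cleaning code lives in the dataset repository):

```python
import kenlm

# Hypothetical path to a KenLM n-gram model trained only on very clean Finnish text.
lm = kenlm.Model("clean_finnish.arpa")

def keep_text(text: str, threshold: float) -> bool:
    # Lower perplexity means the text looks more like clean Finnish;
    # texts above the chosen (90th percentile) threshold are dropped.
    return lm.perplexity(text) <= threshold
```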
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece and a vocabulary size of 32000. The inputs and the outputs are sequences of 512 consecutive tokens. Texts are not lower cased so this model is case-sensitive: it makes a difference between finnish and Finnish.
### Pretraining
The model was trained on a TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 500K steps with a batch size of 256 (66B tokens in total). The optimizer used was AdaFactor, with a learning-rate warmup for 10K steps at a constant learning rate of 1e-2, followed by an inverse square root (exponential) decay of the learning rate.
Training code was from the Google's Jax/Flax based [t5x framework](https://github.com/google-research/t5x) and also some t5x task definitions were adapted from [Per's t5x work](https://huggingface.co/pere).
## Evaluation results
Evaluation was done by fine-tuning the model on a downstream text classification task with two different labeled Finnish datasets: [Yle News](https://github.com/spyysalo/yle-corpus) and [Eduskunta](https://github.com/aajanki/eduskunta-vkk). Classification fine-tuning was done with a sequence length of 128 tokens.
When fine-tuned on those datasets, this model (the second row of the table) achieves the following accuracy results compared to our other T5 models and their parameter counts:
| | Model parameters | Yle News accuracy | Eduskunta accuracy |
|-------------------------------------------------------|------------------|---------------------|----------------------|
|Finnish-NLP/t5-tiny-nl6-finnish | 31 million |92.80 |69.07 |
|Finnish-NLP/t5-mini-nl8-finnish | 72 million |93.89 |71.43 |
|Finnish-NLP/t5-small-nl24-finnish | 260 million |**94.68** |74.90 |
|Finnish-NLP/byt5-base-finnish | 582 million |92.33 |73.13 |
|Finnish-NLP/t5-base-nl36-finnish | 814 million |94.40 |**75.97** |
|Finnish-NLP/t5-large-nl36-finnish | 1425 million |TBA |TBA |
Fine-tuning Google's multilingual mT5 models on the same datasets we can clearly see that our monolingual Finnish T5 models achieve much better results on Finnish text classification:
| | Model parameters | Yle News accuracy | Eduskunta accuracy |
|-------------------------------------------------------|------------------|---------------------|----------------------|
|google/mt5-small | 301 million |91.51 |64.10 |
|google/mt5-base | 583 million |92.71 |68.40 |
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗 |
huggingtweets/youtube | 5c0ce7c76c62a6fbcc59278d2d4d714bc0fc1570 | 2022-03-31T14:06:33.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/youtube | 7 | null | transformers | 14,314 | ---
language: en
thumbnail: http://www.huggingtweets.com/youtube/1648735587597/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1427292844612595720/RC1YSvuT_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">YouTube</div>
<div style="text-align: center; font-size: 14px;">@youtube</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from YouTube.
| Data | YouTube |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 23 |
| Short tweets | 104 |
| Tweets kept | 3123 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2dx34obn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @youtube's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/p527w5q3) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/p527w5q3/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/youtube')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
blacktree/distilbert-base-uncased-finetuned-cola | e7f06da23210b070466e7d2b9cfca769b903c9fe | 2022-04-01T09:00:33.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | blacktree | null | blacktree/distilbert-base-uncased-finetuned-cola | 7 | null | transformers | 14,315 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5285676961321106
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4883
- Matthews Correlation: 0.5286
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5269 | 1.0 | 535 | 0.5197 | 0.4187 |
| 0.3477 | 2.0 | 1070 | 0.4883 | 0.5286 |
| 0.2333 | 3.0 | 1605 | 0.6530 | 0.5079 |
| 0.17 | 4.0 | 2140 | 0.7567 | 0.5272 |
| 0.1271 | 5.0 | 2675 | 0.8887 | 0.5259 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.12.0
|
plasticfruits/gpt2-finetuned-how-to-qa | e01e38294f2234582eb52a4b00c6c6598bf99121 | 2022-05-03T15:32:40.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"license:mit"
]
| text-generation | false | plasticfruits | null | plasticfruits/gpt2-finetuned-how-to-qa | 7 | null | transformers | 14,316 | ---
language: en
license: mit
---
# HowTo QA with GPT-2 base
GPT-2 English language model fine-tuned with approximately 2,000 entries from WikiHow.
You can try it here: https://how-to-generator.herokuapp.com/
Input prompts should follow this format:
`\n<|startoftext|>[WP] How to {text} \n[RESPONSE]`
Example:
`\n<|startoftext|>[WP] How to create a universe \n[RESPONSE]`
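A minimal generation sketch using that prompt format (the sampling settings are our own choices, not documented defaults):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="plasticfruits/gpt2-finetuned-how-to-qa")

# Prompt follows the format documented above; max_length and sampling settings are assumptions.
prompt = "\n<|startoftext|>[WP] How to create a universe \n[RESPONSE]"
result = generator(prompt, max_length=128, do_sample=True, top_p=0.95)
print(result[0]["generated_text"])
```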
|
vicl/canine-s-finetuned-stsb | 41640b64739856165ea13e65c4a2aed13fdd6109 | 2022-04-01T23:25:04.000Z | [
"pytorch",
"tensorboard",
"canine",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | vicl | null | vicl/canine-s-finetuned-stsb | 7 | null | transformers | 14,317 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: canine-s-finetuned-stsb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: 0.8397182061195433
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# canine-s-finetuned-stsb
This model is a fine-tuned version of [google/canine-s](https://huggingface.co/google/canine-s) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7223
- Pearson: 0.8397
- Spearmanr: 0.8397
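A minimal scoring sketch (STS-B is a regression task, so the model returns a single similarity logit on the dataset's roughly 0-5 scale; the sentence pair is made up):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("vicl/canine-s-finetuned-stsb")
model = AutoModelForSequenceClassification.from_pretrained("vicl/canine-s-finetuned-stsb")

# Hypothetical sentence pair; the single logit is the predicted similarity score.
inputs = tokenizer("A man is playing a guitar.", "A person plays an instrument.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```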
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|
| No log | 1.0 | 360 | 0.7938 | 0.8083 | 0.8077 |
| 1.278 | 2.0 | 720 | 0.7349 | 0.8322 | 0.8305 |
| 0.6765 | 3.0 | 1080 | 0.7075 | 0.8374 | 0.8366 |
| 0.6765 | 4.0 | 1440 | 0.7586 | 0.8360 | 0.8376 |
| 0.4629 | 5.0 | 1800 | 0.7223 | 0.8397 | 0.8397 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
facebook/data2vec-audio-large-10m | 2c971412b1e8382f2b0b213b984626b5ae398f45 | 2022-04-18T16:23:58.000Z | [
"pytorch",
"data2vec-audio",
"automatic-speech-recognition",
"en",
"dataset:librispeech_asr",
"arxiv:2202.03555",
"transformers",
"speech",
"license:apache-2.0"
]
| automatic-speech-recognition | false | facebook | null | facebook/data2vec-audio-large-10m | 7 | null | transformers | 14,318 | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# Data2Vec-Audio-Large-10m
[Facebook's Data2Vec](https://ai.facebook.com/research/data2vec-a-general-framework-for-self-supervised-learning-in-speech-vision-and-language/)
The large model pretrained and fine-tuned on 10 minutes of Librispeech, using 16kHz sampled speech audio. When using the model,
make sure that your speech input is also sampled at 16kHz.
[Paper](https://arxiv.org/abs/2202.03555)
Authors: Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli
**Abstract**
While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind. To get us closer to general self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech, NLP or computer vision. The core idea is to predict latent representations of the full input data based on a masked view of the input in a self-distillation setup using a standard Transformer architecture. Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which are local in nature, data2vec predicts contextualized latent representations that contain information from the entire input. Experiments on the major benchmarks of speech recognition, image classification, and natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches.
The original model can be found under https://github.com/pytorch/fairseq/tree/main/examples/data2vec .
# Pre-Training method

For more information, please take a look at the [official paper](https://arxiv.org/abs/2202.03555).
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Data2VecAudioForCTC
from datasets import load_dataset
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("facebook/data2vec-audio-large-10m")
model = Data2VecAudioForCTC.from_pretrained("facebook/data2vec-audio-large-10m")
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# tokenize
input_values = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt", padding="longest").input_values  # Batch size 1
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
|
hackathon-pln-es/roberta-base-biomedical-es-squad2-es | 3c77e2941e7773dc5f2cbee39ffeb6503e9d598e | 2022-04-03T14:51:38.000Z | [
"pytorch",
"roberta",
"question-answering",
"es",
"dataset:squad_es",
"dataset:hackathon-pln-es/biomed_squad_es_v2",
"transformers",
"autotrain_compatible"
]
| question-answering | false | hackathon-pln-es | null | hackathon-pln-es/roberta-base-biomedical-es-squad2-es | 7 | null | transformers | 14,319 | ---
language: es
datasets:
- squad_es
- hackathon-pln-es/biomed_squad_es_v2
metrics:
- "f1"
---
# roberta-base-biomedical-es for QA
This model was trained as part of the "Extractive QA Biomedicine" project developed during the 2022 [Hackathon](https://somosnlp.org/hackathon) organized by SOMOS NLP.
## Motivation
Recent research has made available Spanish language models trained on biomedical corpora. This project explores the use of these new models to generate extractive Question Answering models for biomedicine, and compares their effectiveness with general masked language models.
The models trained during the [Hackathon](https://somosnlp.org/hackathon) were:
[hackathon-pln-es/roberta-base-bne-squad2-es](https://huggingface.co/hackathon-pln-es/roberta-base-bne-squad2-es)
[hackathon-pln-es/roberta-base-biomedical-clinical-es-squad2-es](https://huggingface.co/hackathon-pln-es/roberta-base-biomedical-clinical-es-squad2-es)
[hackathon-pln-es/roberta-base-biomedical-es-squad2-es](https://huggingface.co/hackathon-pln-es/roberta-base-biomedical-es-squad2-es)
[hackathon-pln-es/biomedtra-small-es-squad2-es](https://huggingface.co/hackathon-pln-es/biomedtra-small-es-squad2-es)
## Description
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-biomedical-es](https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-es) on the [squad_es (v2)](https://huggingface.co/datasets/squad_es) training dataset.
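A minimal extractive-QA sketch (the question/context pair is made up; since the model was trained on SQuAD v2-style data, `handle_impossible_answer=True` lets it return an empty answer when appropriate):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="hackathon-pln-es/roberta-base-biomedical-es-squad2-es")

# Hypothetical question/context pair for illustration.
result = qa(
    question="¿Qué inhibe la aspirina?",
    context="La aspirina inhibe la agregación plaquetaria y reduce la inflamación.",
    handle_impossible_answer=True,
)
print(result)
```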
## Hyperparameters
The hyperparameters were chosen based on those used in [PlanTL-GOB-ES/roberta-base-bne-sqac](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne-sqac), a Spanish QA model trained on a dataset in SQuAD v1 format.
```
--num_train_epochs 2
--learning_rate 3e-5
--weight_decay 0.01
--max_seq_length 386
--doc_stride 128
```
## Performance
Evaluated on the [hackathon-pln-es/biomed_squad_es_v2](https://huggingface.co/datasets/hackathon-pln-es/biomed_squad_es_v2) dev set.
|Model |Base Model Domain|exact |f1 |HasAns_exact|HasAns_f1|NoAns_exact|NoAns_f1|
|--------------------------------------------------------------|-----------------|-------|-------|------------|---------|-----------|--------|
|hackathon-pln-es/roberta-base-bne-squad2-es |General |67.6341|75.6988|53.7367 |70.0526 |81.2174 |81.2174 |
|hackathon-pln-es/roberta-base-biomedical-clinical-es-squad2-es|Biomedical |66.8426|75.2346|53.0249 |70.0031 |80.3478 |80.3478 |
|hackathon-pln-es/roberta-base-biomedical-es-squad2-es |Biomedical |67.6341|74.5612|47.6868 |61.7012 |87.1304 | 87.1304|
|hackathon-pln-es/biomedtra-small-es-squad2-es |Biomedical |34.4767|44.3294|45.3737 |65.307 |23.8261 |23.8261 |
## Team
Santiago Maximo: [smaximo](https://huggingface.co/smaximo) |
benjamin/roberta-large-wechsel-ukrainian | a40f97ad5dd4b638a51e0a3c124211eb4581f78d | 2022-07-13T23:43:31.000Z | [
"pytorch",
"roberta",
"fill-mask",
"uk",
"transformers",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | benjamin | null | benjamin/roberta-large-wechsel-ukrainian | 7 | null | transformers | 14,320 | ---
license: mit
language: uk
---
# roberta-large-wechsel-ukrainian
[`roberta-large`](https://huggingface.co/roberta-large) transferred to Ukrainian using the method from the NAACL2022 paper [WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models](https://aclanthology.org/2022.naacl-main.293/).
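A minimal fill-mask sketch (the example sentence is our own; RoBERTa-style models use `<mask>` as the mask token):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="benjamin/roberta-large-wechsel-ukrainian")

# Example sentence means "I live in the city of <mask>."; chosen by us for illustration.
print(fill_mask("Я живу у місті <mask>."))
```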
# Evaluation
Evaluation was done on [lang-uk's ner-uk project](https://github.com/lang-uk/ner-uk), the Ukrainian portion of [WikiANN](https://huggingface.co/datasets/wikiann) and the [Ukrainian IU corpus from the Universal Dependencies project](https://github.com/UniversalDependencies/UD_Ukrainian-IU). Evaluation results are the mean of 5 runs with different seeds.
__Validation Results__
| | lang-uk NER (Micro F1) | WikiANN (Micro F1) | UD Ukrainian IU POS (Accuracy) |
|:-------------------------------------------------|:-------------------------|:-------------|:-------------------------|
| roberta-base-wechsel-ukrainian | 88.06 (0.50) | 92.96 (0.08) | 98.70 (0.05) |
| roberta-large-wechsel-ukrainian | __89.27 (0.53)__ | __93.22 (0.15)__ | __98.86 (0.03)__ |
|
| roberta-base-scratch-ukrainian* | 85.49 (0.88) | 91.91 (0.08) | 98.49 (0.04) |
| roberta-large-scratch-ukrainian* | 86.54 (0.70) | 92.39 (0.16) | 98.65 (0.09) |
|
| dbmdz/electra-base-ukrainian-cased-discriminator | 87.49 (0.52) | 93.20 (0.16) | 98.60 (0.03) |
| xlm-roberta-base | 86.68 (0.44) | 92.41 (0.13) | 98.53 (0.02) |
| xlm-roberta-large | 86.64 (1.61) | 93.01 (0.13) | 98.71 (0.04) |
__Test Results__
| | lang-uk NER (Micro F1) | WikiANN (Micro F1) | UD Ukrainian IU POS (Accuracy) |
|:-------------------------------------------------|:-------------------------|:-------------|:-------------------------|
| roberta-base-wechsel-ukrainian | 90.81 (1.51) | 92.98 (0.12) | 98.57 (0.03) |
| roberta-large-wechsel-ukrainian | __91.24 (1.16)__ | __93.22 (0.17)__ | __98.74 (0.06)__ |
|
| roberta-base-scratch-ukrainian* | 89.57 (1.01) | 92.05 (0.09) | 98.31 (0.08) |
| roberta-large-scratch-ukrainian* | 89.96 (0.89) | 92.49 (0.15) | 98.52 (0.04) |
|
| dbmdz/electra-base-ukrainian-cased-discriminator | 90.43 (1.29) | 92.99 (0.11) | 98.59 (0.06) |
| xlm-roberta-base | 90.86 (0.81) | 92.27 (0.09) | 98.45 (0.07) |
| xlm-roberta-large | 90.16 (2.98) | 92.92 (0.19) | 98.71 (0.04) |
\*trained using the same exact training setup as the wechsel-\* models, but without parameter transfer from WECHSEL.
# License
MIT |
anton-l/xtreme_s_xlsr_300m_fleurs_langid_test | 7f8e824dae623a78b32228013193850694adf810 | 2022-04-04T10:59:40.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"transformers"
]
| audio-classification | false | anton-l | null | anton-l/xtreme_s_xlsr_300m_fleurs_langid_test | 7 | null | transformers | 14,321 | Entry not found |
efederici/cross-encoder-bert-base-stsb | 083a417f617f9eb04389d91953ce1d404879e65e | 2022-04-04T17:09:02.000Z | [
"pytorch",
"bert",
"text-classification",
"it",
"dataset:stsb_multi_mt",
"transformers",
"cross-encoder",
"sentence-similarity"
]
| text-classification | false | efederici | null | efederici/cross-encoder-bert-base-stsb | 7 | null | transformers | 14,322 | ---
pipeline_tag: text-classification
language:
- it
datasets:
- stsb_multi_mt
tags:
- cross-encoder
- sentence-similarity
- transformers
---
# Cross-Encoder
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.
<p align="center">
<img src="https://upload.wikimedia.org/wikipedia/commons/f/f6/Edouard_Vuillard%2C_1920c_-_Sunlit_Interior.jpg" width="400"> </br>
Edouard Vuillard, Sunlit Interior
</p>
## Training Data
This model was trained on [stsb](https://huggingface.co/datasets/stsb_multi_mt/viewer/it/train). It predicts a score between 0 and 1 for the semantic similarity of two sentences.
## Usage and Performance
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('efederici/cross-encoder-bert-base-stsb')
scores = model.predict([('Sentence 1', 'Sentence 2'), ('Sentence 3', 'Sentence 4')])
```
The model will predict scores for the pairs `('Sentence 1', 'Sentence 2')` and `('Sentence 3', 'Sentence 4')`.
|
dapang/distilbert-base-uncased-finetuned-truthful | 3991bb8e5539e6b77e7d0990d8d4da760a273e1b | 2022-04-05T07:23:56.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | dapang | null | dapang/distilbert-base-uncased-finetuned-truthful | 7 | null | transformers | 14,323 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-truthful
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-truthful
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4660
- Accuracy: 0.87
- F1: 0.8697
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9.910294163459086e-05
- train_batch_size: 400
- eval_batch_size: 400
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 5 | 0.6509 | 0.59 | 0.5780 |
| No log | 2.0 | 10 | 0.4950 | 0.77 | 0.7701 |
| No log | 3.0 | 15 | 0.4787 | 0.81 | 0.8099 |
| No log | 4.0 | 20 | 0.4936 | 0.81 | 0.8096 |
| No log | 5.0 | 25 | 0.4443 | 0.82 | 0.82 |
| No log | 6.0 | 30 | 0.4547 | 0.85 | 0.8497 |
| No log | 7.0 | 35 | 0.4268 | 0.85 | 0.8500 |
| No log | 8.0 | 40 | 0.4790 | 0.87 | 0.8697 |
| No log | 9.0 | 45 | 0.4660 | 0.87 | 0.8697 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.1
- Datasets 2.0.0
- Tokenizers 0.11.0
|
btjiong/robbert-twitter-sentiment | 7597fd9000648604dd95084acd2e730c18834e92 | 2022-04-06T17:18:23.000Z | [
"pytorch",
"roberta",
"text-classification",
"dataset:dutch_social",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | btjiong | null | btjiong/robbert-twitter-sentiment | 7 | null | transformers | 14,324 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- dutch_social
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: robbert-twitter-sentiment
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: dutch_social
type: dutch_social
args: dutch_social
metrics:
- name: Accuracy
type: accuracy
value: 0.749
- name: F1
type: f1
value: 0.7491844724992662
- name: Precision
type: precision
value: 0.7493911755249737
- name: Recall
type: recall
value: 0.749
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robbert-twitter-sentiment
This model is a fine-tuned version of [pdelobelle/robbert-v2-dutch-base](https://huggingface.co/pdelobelle/robbert-v2-dutch-base) on the dutch_social dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6818
- Accuracy: 0.749
- F1: 0.7492
- Precision: 0.7494
- Recall: 0.749
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.7485 | 1.0 | 188 | 0.7670 | 0.692 | 0.6915 | 0.6920 | 0.692 |
| 0.5202 | 2.0 | 376 | 0.6818 | 0.749 | 0.7492 | 0.7494 | 0.749 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cpu
- Datasets 2.0.0
- Tokenizers 0.12.0
|
afbudiman/distilled-indobert-classification | 1ef8177a1003700f67e937987b1cc16e5c44337f | 2022-04-08T09:32:57.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:indonlu",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | afbudiman | null | afbudiman/distilled-indobert-classification | 7 | null | transformers | 14,325 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- indonlu
metrics:
- accuracy
- f1
model-index:
- name: distilled-indobert-classification
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: indonlu
type: indonlu
args: smsa
metrics:
- name: Accuracy
type: accuracy
value: 0.9015873015873016
- name: F1
type: f1
value: 0.9014926755197933
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-indobert-classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the indonlu dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6015
- Accuracy: 0.9016
- F1: 0.9015
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0427 | 1.0 | 688 | 0.6306 | 0.8683 | 0.8684 |
| 0.5332 | 2.0 | 1376 | 0.5621 | 0.8794 | 0.8779 |
| 0.3021 | 3.0 | 2064 | 0.6785 | 0.8905 | 0.8896 |
| 0.1851 | 4.0 | 2752 | 0.6085 | 0.8968 | 0.8959 |
| 0.1152 | 5.0 | 3440 | 0.6015 | 0.9016 | 0.9015 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Damith/AraELECTRA-discriminator-SOQAL | 7f792fca12659b8b040d7ad82650d83fadc486fd | 2022-04-08T10:40:38.000Z | [
"pytorch",
"electra",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | Damith | null | Damith/AraELECTRA-discriminator-SOQAL | 7 | null | transformers | 14,326 | Entry not found |
nikhedward/bart-large-cnn-finetuned-multi-news1 | f0dc0138e6249547ea8b52f07e26cbd689ff4567 | 2022-04-09T04:51:07.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"dataset:multi_news",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | nikhedward | null | nikhedward/bart-large-cnn-finetuned-multi-news1 | 7 | null | transformers | 14,327 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- multi_news
metrics:
- rouge
model-index:
- name: bart-large-cnn-finetuned-multi-news1
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: multi_news
type: multi_news
args: default
metrics:
- name: Rouge1
type: rouge
value: 42.1215
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-multi-news1
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the multi_news dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0858
- Rouge1: 42.1215
- Rouge2: 14.9986
- Rougel: 23.4737
- Rougelsum: 36.4212
- Gen Len: 133.703
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.1984 | 1.0 | 750 | 2.0858 | 42.1215 | 14.9986 | 23.4737 | 36.4212 | 133.703 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
aleksavega/t5-efficient-base-finetuned-1.2 | e9e8adcdd412e00bdc3cf824b14c4dc711086594 | 2022-04-11T12:04:08.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| summarization | false | aleksavega | null | aleksavega/t5-efficient-base-finetuned-1.2 | 7 | null | transformers | 14,328 | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-efficient-base-finetuned-1.2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-efficient-base-finetuned-1.2
This model is a fine-tuned version of [google/t5-efficient-base](https://huggingface.co/google/t5-efficient-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5294
- Rouge1: 62.691
- Rouge2: 55.9731
- Rougel: 60.9097
- Rougelsum: 61.4393
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 4662
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 2.2424 | 1.0 | 1217 | 1.7042 | 34.2215 | 24.2754 | 31.7289 | 32.4237 |
| 1.7716 | 2.0 | 2434 | 1.6184 | 43.4774 | 34.0476 | 41.3691 | 41.9132 |
| 1.6324 | 3.0 | 3651 | 1.5811 | 49.1441 | 40.7935 | 47.0077 | 47.6388 |
| 1.5226 | 4.0 | 4868 | 1.5243 | 54.4769 | 46.3387 | 52.3289 | 52.9555 |
| 1.4121 | 5.0 | 6085 | 1.5040 | 56.8792 | 49.1963 | 54.7327 | 55.2805 |
| 1.331 | 6.0 | 7302 | 1.4930 | 58.6896 | 51.1683 | 56.7096 | 57.3605 |
| 1.2677 | 7.0 | 8519 | 1.4785 | 59.9285 | 52.4631 | 57.8575 | 58.4203 |
| 1.2175 | 8.0 | 9736 | 1.4839 | 60.0299 | 52.8806 | 58.0099 | 58.6348 |
| 1.1782 | 9.0 | 10953 | 1.4908 | 61.247 | 54.0887 | 59.2175 | 59.7658 |
| 1.1442 | 10.0 | 12170 | 1.4882 | 61.9895 | 54.9455 | 60.0728 | 60.5786 |
| 1.1118 | 11.0 | 13387 | 1.5061 | 62.1077 | 55.1276 | 60.2218 | 60.7475 |
| 1.081 | 12.0 | 14604 | 1.5078 | 61.6083 | 54.6805 | 59.7912 | 60.2489 |
| 1.0668 | 13.0 | 15821 | 1.5200 | 62.3075 | 55.5201 | 60.5192 | 60.9557 |
| 1.0488 | 14.0 | 17038 | 1.5344 | 62.5144 | 55.6332 | 60.6845 | 61.1715 |
| 1.0324 | 15.0 | 18255 | 1.5313 | 62.7697 | 56.0313 | 60.9298 | 61.4739 |
| 1.0302 | 16.0 | 19472 | 1.5294 | 62.691 | 55.9731 | 60.9097 | 61.4393 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.6
|
optimum/MiniLMv2-L12-H384-distilled-finetuned-clinc | cf662b985fc43f786b78909f506b09d8c723be15 | 2022-04-11T11:21:21.000Z | [
"pytorch",
"roberta",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | optimum | null | optimum/MiniLMv2-L12-H384-distilled-finetuned-clinc | 7 | null | transformers | 14,329 | ---
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: MiniLMv2-L12-H384-distilled-from-RoBERTa-Large-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.94
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MiniLMv2-L12-H384-distilled-from-RoBERTa-Large-distilled-clinc
This model is a fine-tuned version of [nreimers/MiniLMv2-L12-H384-distilled-from-RoBERTa-Large](https://huggingface.co/nreimers/MiniLMv2-L12-H384-distilled-from-RoBERTa-Large) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3479
- Accuracy: 0.94
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 60 | 0.8171 | 0.2490 |
| No log | 2.0 | 120 | 0.7039 | 0.6568 |
| No log | 3.0 | 180 | 0.6067 | 0.7932 |
| 0.7269 | 4.0 | 240 | 0.5270 | 0.8674 |
| 0.7269 | 5.0 | 300 | 0.4659 | 0.9010 |
| 0.7269 | 6.0 | 360 | 0.4201 | 0.9194 |
| 0.7269 | 7.0 | 420 | 0.3867 | 0.9352 |
| 0.4426 | 8.0 | 480 | 0.3649 | 0.9352 |
| 0.4426 | 9.0 | 540 | 0.3520 | 0.9403 |
| 0.4426 | 10.0 | 600 | 0.3479 | 0.94 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
NlpHUST/Condenser-phobert-base | acdb842569b37097f91335db0f2fdfd491982a5b | 2022-04-12T14:30:53.000Z | [
"pytorch",
"tf",
"roberta",
"fill-mask",
"arxiv:2104.08253",
"arxiv:2108.05540",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | NlpHUST | null | NlpHUST/Condenser-phobert-base | 7 | null | transformers | 14,330 | # Condenser for Vietnamese
Transformer architectures for dense-retrieval pre-training on a Vietnamese dataset. Details can be found in our papers, [Condenser: a Pre-training Architecture for Dense Retrieval](https://arxiv.org/abs/2104.08253) and [Unsupervised Corpus Aware Language Model Pre-training for Dense Passage Retrieval](https://arxiv.org/abs/2108.05540).
For example, to load Condenser weights,
```python
from transformers import AutoModel
model = AutoModel.from_pretrained('NlpHUST/Condenser-phobert-base')
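# Illustrative extension (not in the original card): embed a sentence for dense retrieval,
# assuming the matching tokenizer is published in the same repository.
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('NlpHUST/Condenser-phobert-base')
inputs = tokenizer("Xin chào", return_tensors="pt")
embedding = model(**inputs).last_hidden_state[:, 0]  # CLS vector used as the dense embedding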
``` |
Auruncus/gpt-j-6b-8bit-ml | 3a5c3a146436446547bf6a56b9581e9305b8fffd | 2022-04-18T14:47:20.000Z | [
"pytorch",
"gptj",
"text-generation",
"transformers"
]
| text-generation | false | Auruncus | null | Auruncus/gpt-j-6b-8bit-ml | 7 | null | transformers | 14,331 | Entry not found |
lewtun/sagemaker-distilbert-emotion-1 | 2d79e5aa0394bd73597d62333792f46508e9ab31 | 2022-04-12T19:23:45.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | lewtun | null | lewtun/sagemaker-distilbert-emotion-1 | 7 | null | transformers | 14,332 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: sagemaker-distilbert-emotion-1
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9325
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sagemaker-distilbert-emotion-1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1651
- Accuracy: 0.9325
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.966 | 1.0 | 500 | 0.2497 | 0.921 |
| 0.1913 | 2.0 | 1000 | 0.1651 | 0.9325 |
| 0.1037 | 3.0 | 1500 | 0.1501 | 0.9285 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
Helsinki-NLP/opus-mt-tc-big-cel-en | 54c6c217cbc72642cea7911a55f73efc14f650a8 | 2022-06-01T12:59:34.000Z | [
"pytorch",
"marian",
"text2text-generation",
"br",
"cel",
"cy",
"en",
"ga",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-big-cel-en | 7 | 1 | transformers | 14,333 | ---
language:
- br
- cel
- cy
- en
- ga
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-cel-en
results:
- task:
name: Translation cym-eng
type: translation
args: cym-eng
dataset:
name: flores101-devtest
type: flores_101
args: cym eng devtest
metrics:
- name: BLEU
type: bleu
value: 50.2
- task:
name: Translation gle-eng
type: translation
args: gle-eng
dataset:
name: flores101-devtest
type: flores_101
args: gle eng devtest
metrics:
- name: BLEU
type: bleu
value: 37.4
- task:
name: Translation bre-eng
type: translation
args: bre-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: bre-eng
metrics:
- name: BLEU
type: bleu
value: 36.1
- task:
name: Translation cym-eng
type: translation
args: cym-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: cym-eng
metrics:
- name: BLEU
type: bleu
value: 53.6
- task:
name: Translation gle-eng
type: translation
args: gle-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: gle-eng
metrics:
- name: BLEU
type: bleu
value: 57.7
---
# opus-mt-tc-big-cel-en
Neural machine translation model for translating from Celtic languages (cel) to English (en).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-13
* source language(s): bre cym gle
* target language(s): eng
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-03-13.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cel-eng/opusTCv20210807+bt_transformer-big_2022-03-13.zip)
* more information released models: [OPUS-MT cel-eng README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cel-eng/README.md)
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"A-du emaoc’h?",
"Ta'n ushtey glen."
]
model_name = "pytorch-models/opus-mt-tc-big-cel-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Is that you?
# Ta'n ushtey glen.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-cel-en")
print(pipe("A-du emaoc’h?"))
# expected output: Is that you?
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-03-13.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cel-eng/opusTCv20210807+bt_transformer-big_2022-03-13.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-03-13.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cel-eng/opusTCv20210807+bt_transformer-big_2022-03-13.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| bre-eng | tatoeba-test-v2021-08-07 | 0.53712 | 36.1 | 383 | 2065 |
| cym-eng | tatoeba-test-v2021-08-07 | 0.69239 | 53.6 | 818 | 5563 |
| gle-eng | tatoeba-test-v2021-08-07 | 0.72087 | 57.7 | 1913 | 11190 |
| cym-eng | flores101-devtest | 0.71379 | 50.2 | 1012 | 24721 |
| gle-eng | flores101-devtest | 0.63946 | 37.4 | 1012 | 24721 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 3405783
* port time: Wed Apr 13 18:36:25 EEST 2022
* port machine: LM0-400-22516.local
|
cj-mills/distilbert-base-uncased-finetuned-clinc | 028b8f56cb944e1c7e1b8f4f6265c5beeddef127 | 2022-04-14T07:21:55.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | cj-mills | null | cj-mills/distilbert-base-uncased-finetuned-clinc | 7 | null | transformers | 14,334 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9161290322580645
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7796
- Accuracy: 0.9161
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2938 | 1.0 | 318 | 3.2905 | 0.7410 |
| 2.6346 | 2.0 | 636 | 1.8833 | 0.8326 |
| 1.5554 | 3.0 | 954 | 1.1650 | 0.8926 |
| 1.0189 | 4.0 | 1272 | 0.8636 | 0.9110 |
| 0.8028 | 5.0 | 1590 | 0.7796 | 0.9161 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.12.1
|
cj-mills/distilbert-base-uncased-distilled-clinc | 418e51c3027813c933d35683c0fd88bac69e7b44 | 2022-04-14T07:56:35.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | cj-mills | null | cj-mills/distilbert-base-uncased-distilled-clinc | 7 | null | transformers | 14,335 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9467741935483871
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2525
- Accuracy: 0.9468
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2246 | 1.0 | 318 | 3.1584 | 0.7545 |
| 2.4033 | 2.0 | 636 | 1.5656 | 0.8652 |
| 1.1684 | 3.0 | 954 | 0.7795 | 0.9161 |
| 0.5693 | 4.0 | 1272 | 0.4653 | 0.9329 |
| 0.3042 | 5.0 | 1590 | 0.3412 | 0.9406 |
| 0.1794 | 6.0 | 1908 | 0.2912 | 0.9403 |
| 0.1184 | 7.0 | 2226 | 0.2654 | 0.9461 |
| 0.0873 | 8.0 | 2544 | 0.2557 | 0.9439 |
| 0.0719 | 9.0 | 2862 | 0.2549 | 0.9465 |
| 0.0646 | 10.0 | 3180 | 0.2525 | 0.9468 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.12.1
|
profoz/toxic-distilbert | 87e01ec6b7f4b42ee83bd4a40a546eb748c51f7f | 2022-04-15T14:17:52.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | profoz | null | profoz/toxic-distilbert | 7 | null | transformers | 14,336 | Entry not found |
xma/gptj-small-train-test | d2afd33621948135ef4e4b35d796166af9a77236 | 2022-04-15T18:42:37.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"license:ecl-2.0"
]
| text-classification | false | xma | null | xma/gptj-small-train-test | 7 | null | transformers | 14,337 | ---
license: ecl-2.0
---
|
haohaoxuexi/distilbert-base-uncased-finetuned-emotion | 29c8abe4d785db5acf90569306ad4f19e8c996a8 | 2022-04-16T06:03:18.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | haohaoxuexi | null | haohaoxuexi/distilbert-base-uncased-finetuned-emotion | 7 | null | transformers | 14,338 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.9233263918743045
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2239
- Accuracy: 0.923
- F1: 0.9233
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8359 | 1.0 | 250 | 0.3198 | 0.9085 | 0.9057 |
| 0.2491 | 2.0 | 500 | 0.2239 | 0.923 | 0.9233 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Raychanan/bert-bert-cased-first512-Conflict | 6e2f1160ba67545e51556f1a9fb19e977cef374a | 2022-04-16T18:39:25.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Raychanan | null | Raychanan/bert-bert-cased-first512-Conflict | 7 | null | transformers | 14,339 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
- precision
- recall
model-index:
- name: bert-bert-cased-first512-Conflict
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-bert-cased-first512-Conflict
`conv_text = '\n'.join([utt.text for utt in conv.get_chronological_utterance_list()])`
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6932
- F1: 0.6667
- Accuracy: 0.5
- Precision: 0.5
- Recall: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|:---------:|:------:|
| 0.7098 | 1.0 | 685 | 0.6945 | 0.0 | 0.5 | 0.0 | 0.0 |
| 0.7046 | 2.0 | 1370 | 0.6997 | 0.6667 | 0.5 | 0.5 | 1.0 |
| 0.7013 | 3.0 | 2055 | 0.6949 | 0.6667 | 0.5 | 0.5 | 1.0 |
| 0.7027 | 4.0 | 2740 | 0.6931 | 0.6667 | 0.5 | 0.5 | 1.0 |
| 0.702 | 5.0 | 3425 | 0.6932 | 0.6667 | 0.5 | 0.5 | 1.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
kalex/bert-finetuned-ner | 4848b10f2633ef330fb3ee756b543a11ead674a3 | 2022-04-17T03:43:25.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:ncbi_disease",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | kalex | null | kalex/bert-finetuned-ner | 7 | null | transformers | 14,340 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- ncbi_disease
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the ncbi_disease dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0591
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1127 | 1.0 | 680 | 0.0593 |
| 0.0442 | 2.0 | 1360 | 0.0557 |
| 0.0181 | 3.0 | 2040 | 0.0591 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
joniponi/communication-classifier | 75a52990c1865945494bd8f56c0b296c2fcd5f0c | 2022-04-18T02:09:32.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | joniponi | null | joniponi/communication-classifier | 7 | null | transformers | 14,341 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: communication-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# communication-classifier
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.1249
- eval_accuracy: 0.9644
- eval_f1: 0.9644
- eval_runtime: 2.6719
- eval_samples_per_second: 126.126
- eval_steps_per_second: 8.234
- epoch: 3.0
- step: 255
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
crcb/imp_hatred | e46fd2d7391e241eaac00583096c500a43540edb | 2022-04-18T14:11:43.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:crcb/autotrain-data-imp_hs",
"transformers",
"autotrain",
"co2_eq_emissions"
]
| text-classification | false | crcb | null | crcb/imp_hatred | 7 | null | transformers | 14,342 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- crcb/autotrain-data-imp_hs
co2_eq_emissions: 15.91710539314839
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 753423062
- CO2 Emissions (in grams): 15.91710539314839
## Validation Metrics
- Loss: 0.5205655694007874
- Accuracy: 0.7746741154562383
- Macro F1: 0.5796696218586866
- Micro F1: 0.7746741154562382
- Weighted F1: 0.7602379277947592
- Macro Precision: 0.6976905233970596
- Micro Precision: 0.7746741154562383
- Weighted Precision: 0.7628815999440115
- Macro Recall: 0.557144871405371
- Micro Recall: 0.7746741154562383
- Weighted Recall: 0.7746741154562383
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/crcb/autotrain-imp_hs-753423062
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("crcb/autotrain-imp_hs-753423062", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("crcb/autotrain-imp_hs-753423062", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
ucabqfe/bigBird_PER_bio | 8c76b8bb81af12511f0b2b83b37a105db83f86fb | 2022-04-18T18:18:00.000Z | [
"pytorch",
"big_bird",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ucabqfe | null | ucabqfe/bigBird_PER_bio | 7 | null | transformers | 14,343 | Entry not found |
ndavid/binary-qa-bert | 8e361a60b738221f155ce67ad0d251879c2a9b81 | 2022-04-18T23:41:36.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | ndavid | null | ndavid/binary-qa-bert | 7 | null | transformers | 14,344 | Entry not found |
afbudiman/distilled-optimized-indobert-classification | d72b74ea900a96c36a3abf752a939a5980fa8c17 | 2022-04-19T16:02:44.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:indonlu",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | afbudiman | null | afbudiman/distilled-optimized-indobert-classification | 7 | null | transformers | 14,345 | ---
tags:
- generated_from_trainer
datasets:
- indonlu
metrics:
- accuracy
- f1
model-index:
- name: distilled-optimized-indobert-classification
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: indonlu
type: indonlu
args: smsa
metrics:
- name: Accuracy
type: accuracy
value: 0.9
- name: F1
type: f1
value: 0.8994069293432798
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-optimized-indobert-classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the indonlu dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7397
- Accuracy: 0.9
- F1: 0.8994
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.315104717136378e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.128 | 1.0 | 688 | 0.8535 | 0.8913 | 0.8917 |
| 0.1475 | 2.0 | 1376 | 0.9171 | 0.8913 | 0.8913 |
| 0.0997 | 3.0 | 2064 | 0.7799 | 0.8960 | 0.8951 |
| 0.0791 | 4.0 | 2752 | 0.7179 | 0.9032 | 0.9023 |
| 0.0577 | 5.0 | 3440 | 0.6908 | 0.9063 | 0.9055 |
| 0.0406 | 6.0 | 4128 | 0.7613 | 0.8992 | 0.8986 |
| 0.0275 | 7.0 | 4816 | 0.7502 | 0.8992 | 0.8989 |
| 0.023 | 8.0 | 5504 | 0.7408 | 0.8976 | 0.8969 |
| 0.0169 | 9.0 | 6192 | 0.7397 | 0.9 | 0.8994 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
demoversion/bert-fa-base-uncased-haddad-wikinli | f218add6ab043db4b762f2744c11c5e7a440ae78 | 2022-04-21T18:17:34.000Z | [
"pytorch",
"bert",
"text-classification",
"fa",
"transformers",
"license:apache-2.0"
]
| text-classification | false | demoversion | null | demoversion/bert-fa-base-uncased-haddad-wikinli | 7 | 1 | transformers | 14,346 | ---
language: fa
license: apache-2.0
---
This repository was created to provide better models for NLI in Persian, with transparent training code. I hope you find it inspiring and build better models in the future. For more details about the task and the methods used for training, check the [medium post](https://haddadhesam.medium.com/) and the notebooks.
# Dataset
The dataset used for training is the Wiki D/Similar dataset (wiki-d-similar.zip), obtained from the [Sentence Transformers](https://github.com/m3hrdadfi/sentence-transformers) repository.
# Model
The proposed model is published on the Hugging Face Hub under the name ``demoversion/bert-fa-base-uncased-haddad-wikinli``. You can download and use the model from the [Hugging Face website](https://huggingface.co/demoversion/bert-fa-base-uncased-haddad-wikinli) or directly with the transformers library like this:
from transformers import pipeline
model = pipeline("zero-shot-classification", model="demoversion/bert-fa-base-uncased-haddad-wikinli")
labels = ["ورزشی",
"سیاسی",
"علمی",
"فرهنگی"]
template_str = "این یک متن {} است."
str_sentence = "مرحله مقدماتی جام جهانی حاشیههای زیادی داشت."
model(str_sentence, labels, hypothesis_template=template_str)
The result of this code snippet is:
Asking to truncate to max_length but no maximum length is provided and the model has no predefined maximum length. Default to no truncation.
{'labels': ['فرهنگی', 'علمی', 'سیاسی', 'ورزشی'],
'scores': [0.25921085476875305,
0.25713297724723816,
0.24884170293807983,
0.23481446504592896],
'sequence': 'مرحله مقدماتی جام جهانی حاشیه\u200cهای زیادی داشت.'}
Yep, the right label (highest score) without training.
# Results
The results compared to the original model published for this dataset are available in the table below.
|Model|dev_accuracy| dev_f1|test_accuracy|test_f1|
|--|--|--|--|--|
|[m3hrdadfi/bert-fa-base-uncased-wikinli](https://huggingface.co/m3hrdadfi/bert-fa-base-uncased-wikinli)|77.88|77.57|76.64|75.99|
|[demoversion/bert-fa-base-uncased-haddad-wikinli](https://huggingface.co/demoversion/bert-fa-base-uncased-haddad-wikinli)|**78.62**|**79.74**|**77.04**|**78.56**|
# Notebooks
Notebooks used for training and evaluation are available below.
[Training ](https://colab.research.google.com/github/DemoVersion/persian-nli-trainer/blob/main/notebooks/training.ipynb)
[Evaluation ](https://colab.research.google.com/github/DemoVersion/persian-nli-trainer/blob/main/notebooks/evaluation.ipynb)
|
tuhailong/cross_encoder_roberta-wwm-ext_v1 | 3ac66951c2ca373cc7081624c721515e8b39f6b4 | 2022-04-20T02:41:23.000Z | [
"pytorch",
"bert",
"text-classification",
"zh",
"dataset:dialogue",
"transformers",
"cross-encoder"
]
| text-classification | false | tuhailong | null | tuhailong/cross_encoder_roberta-wwm-ext_v1 | 7 | null | transformers | 14,347 | ---
language: zh
tags:
- cross-encoder
datasets:
- dialogue
---
# Data
The training data consists of similar sentence pairs from e-commerce dialogue, about 500k pairs in total.
## Model
The model was created with [sentence-transformers](https://www.sbert.net/index.html); the model structure is a cross-encoder and the pretrained model is hfl/chinese-roberta-wwm-ext.
The model structure is the same as [tuhailong/cross_encoder_roberta-wwm-ext_v0](https://huggingface.co/tuhailong/cross_encoder_roberta-wwm-ext_v0); the difference is that the order of the input sentences is changed when building the training set, which gives better performance on my dataset.
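As a rough sketch of what that order change could look like when constructing training pairs with sentence-transformers (the `InputExample` layout and the tuple format of `pairs` are illustrative assumptions; the original training code is linked in the Code section below):
```python
from sentence_transformers import InputExample

def build_examples(pairs):
    # pairs: iterable of (sentence_a, sentence_b, label) tuples -- illustrative layout
    examples = []
    for a, b, label in pairs:
        examples.append(InputExample(texts=[b, a], label=label))  # swapped sentence order used for training
    return examples
```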
### Usage
```python
>>> from sentence_transformers.cross_encoder import CrossEncoder
>>> model = CrossEncoder("tuhailong/cross_encoder_roberta-wwm-ext_v1", device="cuda", max_length=64)
>>> sentences = ["今天天气不错", "今天心情不错"]
>>> score = model.predict([sentences])
>>> print(score[0])
```
#### Code
The training code comes from https://github.com/TTurn/cross-encoder |
tuhailong/cross_encoder_roberta-wwm-ext_v2 | cb0ca1424c3cd02fd1be9a147e959e9b64f0fd98 | 2022-04-20T02:41:07.000Z | [
"pytorch",
"bert",
"text-classification",
"zh",
"dataset:dialogue",
"transformers",
"cross-encoder"
]
| text-classification | false | tuhailong | null | tuhailong/cross_encoder_roberta-wwm-ext_v2 | 7 | null | transformers | 14,348 | ---
language: zh
tags:
- cross-encoder
datasets:
- dialogue
---
# Data
The training data consists of similar sentence pairs from e-commerce dialogue, about 500k pairs in total.
## Model
The model was created with [sentence-transformers](https://www.sbert.net/index.html); the model structure is a cross-encoder and the pretrained model is hfl/chinese-roberta-wwm-ext.
The model structure is the same as [tuhailong/cross_encoder_roberta-wwm-ext_v1](https://huggingface.co/tuhailong/cross_encoder_roberta-wwm-ext_v1); the difference is that the number of training epochs is reduced from 5 to 1, which gives better performance on my dataset.
### Usage
```python
>>> from sentence_transformers.cross_encoder import CrossEncoder
>>> model = CrossEncoder("tuhailong/cross_encoder_roberta-wwm-ext_v2", device="cuda", max_length=64)
>>> sentences = ["今天天气不错", "今天心情不错"]
>>> score = model.predict([sentences])
>>> print(score[0])
```
#### Code
The training code comes from https://github.com/TTurn/cross-encoder |
James-kc-min/AGT_Roberta | 3abd7ddab489817b17c184e714cf6765af1d01eb | 2022-04-20T09:39:04.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | James-kc-min | null | James-kc-min/AGT_Roberta | 7 | null | transformers | 14,349 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: AGT_Roberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AGT_Roberta
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.0
- Tokenizers 0.12.1
|
brad1141/Longformer_v5 | 4b6147fac5cb8316dd03ae9895b5e4fa9b1eff58 | 2022-04-20T19:13:09.000Z | [
"pytorch",
"tensorboard",
"longformer",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| token-classification | false | brad1141 | null | brad1141/Longformer_v5 | 7 | null | transformers | 14,350 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Longformer_v5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Longformer_v5
This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7919
- Precision: 0.8516
- Recall: 0.8678
- F1: 0.6520
- Accuracy: 0.8259
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.7744 | 1.0 | 1012 | 0.5785 | 0.8375 | 0.8501 | 0.5798 | 0.8098 |
| 0.5211 | 2.0 | 2024 | 0.5415 | 0.8434 | 0.8801 | 0.6251 | 0.8282 |
| 0.3996 | 3.0 | 3036 | 0.5565 | 0.8500 | 0.8766 | 0.6303 | 0.8274 |
| 0.2964 | 4.0 | 4048 | 0.6017 | 0.8617 | 0.8546 | 0.6415 | 0.8240 |
| 0.2187 | 5.0 | 5060 | 0.6660 | 0.8485 | 0.8718 | 0.6431 | 0.8271 |
| 0.1603 | 6.0 | 6072 | 0.7235 | 0.8493 | 0.8759 | 0.6544 | 0.8290 |
| 0.1208 | 7.0 | 7084 | 0.7919 | 0.8516 | 0.8678 | 0.6520 | 0.8259 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
AntoineB/roberta-tiny-imdb | 395a16062c8898922e544bcc4c8f8d9bc369ad4a | 2022-04-21T11:45:11.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | AntoineB | null | AntoineB/roberta-tiny-imdb | 7 | null | transformers | 14,351 | Entry not found |
QuickRead/reward_model_wandb_dynamic_bs_1_idx | a74e679c73030b491b52e331088dca9068bf8139 | 2022-04-22T10:24:39.000Z | [
"pytorch",
"pegasus",
"feature-extraction",
"transformers"
]
| feature-extraction | false | QuickRead | null | QuickRead/reward_model_wandb_dynamic_bs_1_idx | 7 | null | transformers | 14,352 | Entry not found |
Saisam/gpt-neo-math-small | b748555bc45c89784c29e36cbd952118f035c375 | 2022-04-22T01:13:57.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers",
"license:apache-2.0"
]
| text-generation | false | Saisam | null | Saisam/gpt-neo-math-small | 7 | null | transformers | 14,353 | ---
license: apache-2.0
---
# GPT-NEO-Model for Lean Tactics
In this project, we used a Hugging Face GPT-Neo small model and fine-tuned it on the tactic dataset. The input should be of the form
```
<GOAL> Goal <PROOFSTEP>
```
The model can easily be accessed using the following code.
```
from transformers import GPT2Tokenizer, GPTNeoForCausalLM
import torch
tokenizer = GPT2Tokenizer.from_pretrained("Saisam/gpt-neo-math-small")
model = GPTNeoForCausalLM.from_pretrained("Saisam/gpt-neo-math-small")
```
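Once loaded, a proof step can be generated by prompting the model in the format shown above. A minimal, illustrative sketch (the goal string and generation settings below are assumptions, not taken from the original project):
```
prompt = "<GOAL> n : ℕ ⊢ n + 0 = n <PROOFSTEP>"  # illustrative Lean goal
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```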
Worked along with Xihao Xhang and Moya Zhu
|
niuca/DeepDebug | e6bc8bb8a64393e4b6c2a363aaf71c684c65106f | 2022-04-22T07:10:27.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | niuca | null | niuca/DeepDebug | 7 | null | transformers | 14,354 | Entry not found |
abdouaziiz/wav2vec2-WOLOF-2.6K-base | dfe30081849ef2b46421488c532fdb577c12586e | 2022-04-22T07:17:17.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | abdouaziiz | null | abdouaziiz/wav2vec2-WOLOF-2.6K-base | 7 | null | transformers | 14,355 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wolof
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wolof
This model is a fine-tuned version of [LeBenchmark/wav2vec2-FR-2.6K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-2.6K-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2816
- Wer: 0.3897
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 20
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.9468 | 1.67 | 1500 | 0.7036 | 0.6418 |
| 0.5506 | 3.33 | 3000 | 0.4129 | 0.5018 |
| 0.3817 | 5.0 | 4500 | 0.3414 | 0.4519 |
| 0.2885 | 6.67 | 6000 | 0.3181 | 0.4305 |
| 0.2275 | 8.33 | 7500 | 0.2920 | 0.4011 |
| 0.1852 | 10.0 | 9000 | 0.2816 | 0.3897 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu102
- Datasets 2.1.0
- Tokenizers 0.12.1
|
buidung2004/wav2vec-vietnamese-number-digits-finetune | 6546a582457cc73c9ecbfeca4554b33ea284fae7 | 2022-05-03T14:16:38.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
]
| automatic-speech-recognition | false | buidung2004 | null | buidung2004/wav2vec-vietnamese-number-digits-finetune | 7 | null | transformers | 14,356 | Entry not found |
dapang/distilroberta-base-mic-sym | 1b5e930b847f04de70a13a5dfc5603c77e476d37 | 2022-04-23T03:53:15.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | dapang | null | dapang/distilroberta-base-mic-sym | 7 | null | transformers | 14,357 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilroberta-base-mic-sym
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-mic-sym
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0023
- Accuracy: 0.9997
- F1: 0.9997
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.740146306575944e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 188 | 0.0049 | 0.9990 | 0.9990 |
| No log | 2.0 | 376 | 0.0023 | 0.9997 | 0.9997 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.12.0.dev20220422+cu116
- Datasets 2.1.0
- Tokenizers 0.12.1
|
allenai/aspire-biencoder-biomed-scib | 76e5d1c6f0af4d30d3b4340d6cd1affebaec44c0 | 2022-04-24T19:38:56.000Z | [
"pytorch",
"bert",
"feature-extraction",
"arxiv:2111.08366",
"transformers",
"license:apache-2.0"
]
| feature-extraction | false | allenai | null | allenai/aspire-biencoder-biomed-scib | 7 | null | transformers | 14,358 | ---
license: apache-2.0
---
## Overview
Model included in a paper for modeling fine grained similarity between documents:
**Title**: "Multi-Vector Models with Textual Guidance for Fine-Grained Scientific Document Similarity"
**Authors**: Sheshera Mysore, Arman Cohan, Tom Hope
**Paper**: https://arxiv.org/abs/2111.08366
**Github**: https://github.com/allenai/aspire
**Note**: In the context of the paper, this model is referred to as `Specter-CoCite_Scib` and represents a baseline bi-encoder for scientific document similarity. This model is similar in architecture to the [`allenai/specter`](https://github.com/allenai/specter) model but is trained on co-citation data instead of citation data.
## Model Card
### Model description
This model is a BERT bi-encoder model trained for similarity of title-abstract pairs in biomedical scientific papers. The model is **initialized with the SciBert model**. This model inputs the title and abstract of a paper and represents it with a single vector obtained by a scalar mix of the CLS token at every layer of the SciBert encoder. These scalar mix parameters can be important for performance in some datasets. Importantly, these scalar mix weights are not included as part of this HF model; if you wish to use these parameters, please download the full model at: [`aspire-biencoder-biomed-scib-full.zip`](https://drive.google.com/file/d/1X6S5qwaKUlI3N3RDQSG-tJCzMBWAnqxP/view?usp=sharing).
### Training data
The model is trained on pairs of co-cited papers in a contrastive learning setup, using 1.2 million biomedical paper pairs. During training, negative examples for the contrastive loss are obtained as random in-batch negatives. Co-citations are obtained from the full text of papers; for example, the papers cited in brackets below are all co-cited, and each pair's title and abstract would be used as a training pair:
> The idea of distant supervision has been proposed and used widely in Relation Extraction (Mintz et al., 2009; Riedel et al., 2010; Hoffmann et al., 2011; Surdeanu et al., 2012) , where the source of labels is an external knowledge base.
### Training procedure
The model was trained with the Adam Optimizer and a learning rate of 2e-5 with 1000 warm-up steps followed by linear decay of the learning rate. The model training convergence is checked with the loss on a held out dev set consisting of co-cited paper pairs.
### Intended uses & limitations
This model is trained for document similarity tasks in **biomedical** scientific text using a single vector per document. Here, the documents are the title and abstract of a paper. With appropriate fine-tuning the model can also be used for other tasks such as classification. Since the training data comes primarily from biomedicine, performance on other domains may be poorer.
### How to use
Follow instructions for use detailed on the model github repo: https://github.com/allenai/aspire#specter-cocite
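If you only need a quick approximation with the Hugging Face checkpoint alone, a minimal sketch is shown below. It assumes SPECTER-style `title [SEP] abstract` inputs and uses only the final-layer CLS embedding, since the scalar-mix weights described above are not part of this checkpoint; follow the repository instructions for the full model.
```python
# Minimal sketch (not the full Aspire pipeline): encodes title [SEP] abstract pairs
# with the final-layer CLS embedding only; the scalar-mix weights are not included here.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('allenai/aspire-biencoder-biomed-scib')
model = AutoModel.from_pretrained('allenai/aspire-biencoder-biomed-scib')

papers = [
    {'title': 'Query paper title', 'abstract': 'Query paper abstract.'},
    {'title': 'Candidate paper title', 'abstract': 'Candidate paper abstract.'},
]
texts = [p['title'] + tokenizer.sep_token + p['abstract'] for p in papers]
inputs = tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors='pt')

with torch.no_grad():
    embeddings = model(**inputs).last_hidden_state[:, 0, :]  # CLS token per paper

# Rank candidates by L2 distance to the query (smaller distance = more similar).
print(torch.cdist(embeddings[0:1], embeddings[1:]).squeeze())
```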
### Variable and metrics
This model is evaluated on information retrieval datasets with document-level queries. Here we report performance on RELISH (biomedical/English) and TRECCOVID (biomedical/English). These are detailed on [github](https://github.com/allenai/aspire) and in our [paper](https://arxiv.org/abs/2111.08366). These datasets represent an abstract-level retrieval task where, given a query scientific abstract, the task requires the retrieval of relevant candidate abstracts.
We rank documents by the L2 distance between the query and candidate documents.
### Evaluation results
The released model `aspire-biencoder-biomed-scib` (and `aspire-biencoder-biomed-scib-full`) is compared against `allenai/specter`. `aspire-biencoder-biomed-scib-full`<sup>*</sup> denotes the performance reported in our paper, averaged over 3 re-runs of the model. The released models `aspire-biencoder-biomed-scib` and `aspire-biencoder-biomed-scib-full` correspond to the single best run among the 3 re-runs.
| | TRECCOVID | TRECCOVID | RELISH | RELISH |
|-------------------------------------------:|:---------:|:-------:|:------:|:-------:|
| | MAP | NDCG%20 | MAP | NDCG%20 |
| `specter` | 28.24 | 59.28 | 60.62| 77.20 |
| `aspire-biencoder-biomed-scib-full`<sup>*</sup> | 30.60 | 62.07 | 61.43| 78.01 |
| `aspire-biencoder-biomed-scib` | 30.74 | 60.16 | 61.52| 78.07 |
| `aspire-biencoder-biomed-scib-full` | 31.45 | 63.15 | 61.34| 77.89 |
**Alternative models:**
Besides the above models consider these alternative models also released in the Aspire paper:
[`aspire-biencoder-compsci-spec`](https://huggingface.co/allenai/aspire-biencoder-compsci-spec): If you wanted to run on computer science papers.
[`aspire-biencoder-biomed-spec`](https://huggingface.co/allenai/aspire-biencoder-biomed-spec): This is an alternative bi-encoder model identical to the above model, except that it is initialized with `allenai/specter` instead of SciBert. This usually under-performs the model released here. |
rdchambers/bert-finetuned-ner | 4b81e5bb92b94b5b9d9c73f5db67fbcf175b6695 | 2022-05-05T20:34:58.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | rdchambers | null | rdchambers/bert-finetuned-ner | 7 | null | transformers | 14,359 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0176
- Precision: 0.8418
- Recall: 0.8095
- F1: 0.8253
- Accuracy: 0.9937
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 48 | 0.0268 | 0.7280 | 0.7829 | 0.7544 | 0.9908 |
| No log | 2.0 | 96 | 0.0194 | 0.8295 | 0.8050 | 0.8171 | 0.9934 |
| No log | 3.0 | 144 | 0.0176 | 0.8418 | 0.8095 | 0.8253 | 0.9937 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Reproducibility/naacl22_causalDistilBERT_instance_1 | 6b6e342cc4145642d683ccb0c92e4cbd9fe7c5be | 2022-04-23T19:50:56.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | Reproducibility | null | Reproducibility/naacl22_causalDistilBERT_instance_1 | 7 | null | transformers | 14,360 | Entry not found |
avacaondata/maria-exist22-task1 | fd5a13a4f89a2f55cbd6fd1ced095886183fb6f0 | 2022-04-23T23:32:32.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | avacaondata | null | avacaondata/maria-exist22-task1 | 7 | null | transformers | 14,361 | Entry not found |
Ghost1/bert-finetuned-ner-accelerate | 3e082f9026d76d2bb1185a8433bd1a44a1396a0d | 2022-04-25T10:34:12.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | Ghost1 | null | Ghost1/bert-finetuned-ner-accelerate | 7 | null | transformers | 14,362 | Entry not found |
xfbai/AMRBART-base | 9c3b31e5c1bfedec71f595bb4f7f1a9ccfca07ed | 2022-04-26T06:12:47.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"arxiv:2203.07836",
"transformers",
"AMRBART",
"license:mit",
"autotrain_compatible"
]
| text2text-generation | false | xfbai | null | xfbai/AMRBART-base | 7 | null | transformers | 14,363 | ---
language: en
tags:
- AMRBART
license: mit
---
## AMRBART (base-sized model)
The AMRBART model is continually pre-trained on English text and AMR graphs on top of the BART model. It was introduced in the paper [Graph Pre-training for AMR Parsing and Generation](https://arxiv.org/pdf/2203.07836.pdf) by Bai et al. at ACL 2022 and first released in [this repository](https://github.com/muyeby/AMRBART).
## Model description
AMRBART follows the BART model, which uses a transformer encoder-decoder architecture. AMRBART is pre-trained with 6 tasks:
+ learning to reconstruct the text based on the corrupted text.
+ learning to reconstruct AMR graphs based on the corrupted AMR graph.
+ learning to reconstruct the text based on the corrupted text and its corresponding AMR graph.
+ learning to reconstruct an AMR graph based on the corrupted AMR graph and its corresponding text.
+ learning to reconstruct the text based on the corrupted text and its corresponding corrupted AMR graph.
+ learning to reconstruct an AMR graph based on the corrupted AMR graph and its corresponding corrupted text.
AMRBART is particularly effective when fine-tuned for AMR parsing and AMR-to-text generation tasks.
## Training data
The AMRBART model is pre-trained on [AMR3.0](https://catalog.ldc.upenn.edu/LDC2020T02), a dataset consisting of 55,635
training instances and [English Gigaword](https://catalog.ldc.upenn.edu/LDC2003T05) (we randomly sampled 200,000 sentences).
## Intended uses & limitations
You can use the raw model for either AMR encoding or AMR parsing, but it's mostly intended to
be fine-tuned on a downstream task.
## How to use
Here is how to initialize this model in PyTorch:
```python
from transformers import BartForConditionalGeneration
model = BartForConditionalGeneration.from_pretrained("xfbai/AMRBART-base")
```
Please refer to [this repository](https://github.com/muyeby/AMRBART) for tokenizer initialization and data preprocessing.
## BibTeX entry and citation info
Please cite this paper if you find this model helpful
```bibtex
@inproceedings{bai-etal-2022-graph,
title = "Graph Pre-training for {AMR} Parsing and Generation",
author = "Bai, Xuefeng and
Chen, Yulong and
Zhang, Yue",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = may,
year = "2022",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "todo",
doi = "todo",
pages = "todo"
}
``` |
Nithiwat/fake-news-debunker | 8f22a53ce662277bb13bf361cacbafc14a0055cb | 2022-04-26T13:53:36.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:Fake and real news datasets by CLÉMENT BISAILLON",
"transformers",
"autotrain",
"co2_eq_emissions"
]
| text-classification | false | Nithiwat | null | Nithiwat/fake-news-debunker | 7 | 1 | transformers | 14,364 | ---
tags: autotrain
language: en
widget:
- text: "Bill Gates wants to use mass Covid-19 vaccination campaign to implant microchips to track people"
datasets:
- Fake and real news datasets by CLÉMENT BISAILLON
co2_eq_emissions: 4.415122243239347
---
# Model Trained Using AutoTrain
- Problem: Fake News Classification
- Problem type: Binary Classification
- Model ID: 785124234
- CO2 Emissions (in grams): 4.415122243239347
## Validation Metrics
- Loss: 0.00012586714001372457
- Accuracy: 0.9998886538247411
- Precision: 1.0
- Recall: 0.9997665732959851
- AUC: 0.9999999999999999
- F1: 0.999883273024396
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Nithiwat/autotrain-fake-news-classifier-785124234
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Nithiwat/autotrain-fake-news-classifier-785124234", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Nithiwat/autotrain-fake-news-classifier-785124234", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
drsis/pegasus-samsum-tb | 6fb60d780e037f6618fcc0b6ff48cff123c306b4 | 2022-04-26T02:18:42.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | drsis | null | drsis/pegasus-samsum-tb | 7 | null | transformers | 14,365 | Entry not found |
anablasi/financial_model | d916472c95596b95d033ec69e11b93b122dcaf45 | 2022-05-10T16:32:38.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
]
| fill-mask | false | anablasi | null | anablasi/financial_model | 7 | null | transformers | 14,366 | ---
license: cc-by-4.0
tags:
- generated_from_trainer
model-index:
- name: model2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model2
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Training results
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
sangjeedondrub/tibetan-roberta-base | 1c2458923edf160701295b9b8bc6195fa7e4c9aa | 2022-05-05T02:18:22.000Z | [
"pytorch",
"roberta",
"fill-mask",
"bo",
"transformers",
"tibetan",
"pretrained language model",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | sangjeedondrub | null | sangjeedondrub/tibetan-roberta-base | 7 | 0 | transformers | 14,367 | ---
language:
- bo
tags:
- tibetan
- pretrained language model
- roberta
widget:
- text: "རྫོགས་པའི་ <mask>"
- text: "ཆོས་ཀྱི་<mask>་བ"
- text: "གངས་རིའི་ <mask>"
- text: "བོད་ཀྱི་སྨན་<mask>"
license: "mit"
---
# Demo in a `fill-mask` task
```
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline
model_name = 'sangjeedondrub/tibetan-roberta-base'
model = AutoModelForMaskedLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
fill_mask_pipe = pipeline(
"fill-mask",
model=model,
tokenizer=tokenizer
)
samples = """རིན་ <mask>
ཆོས་ཀྱི་ <mask>
རྫོགས་པའི་ <mask>
གངས་རིའི་ <mask>
མེ་ལོང་ <mask>
བདེན་པའི་ <mask>
'འབྱུང་ <mask>""".splitlines()
for idx, sample in enumerate(samples, start=1):
outputs = fill_mask_pipe(sample)
print(idx, sample)
for output in outputs:
print(output)
```
# Output
```
1 རིན་ <mask>
{'score': 0.943362832069397, 'token': 459, 'token_str': 'ཐང', 'sequence': 'རིན་ཐང'}
{'score': 0.025716140866279602, 'token': 282, 'token_str': 'པ', 'sequence': 'རིན་པ'}
{'score': 0.004410382825881243, 'token': 596, 'token_str': 'འཕར', 'sequence': 'རིན་འཕར'}
{'score': 0.003161463886499405, 'token': 561, 'token_str': 'ཅང', 'sequence': 'རིན་ཅང'}
{'score': 0.0025683969724923372, 'token': 360, 'token_str': 'གནས', 'sequence': 'རིན་གནས'}
2 ཆོས་ཀྱི་ <mask>
{'score': 0.08558642119169235, 'token': 476, 'token_str': 'དཔལ', 'sequence': 'ཆོས་ཀྱི་དཔལ'}
{'score': 0.0616581067442894, 'token': 323, 'token_str': 'ལས', 'sequence': 'ཆོས་ཀྱི་ལས'}
{'score': 0.04617622494697571, 'token': 568, 'token_str': 'ཉམས', 'sequence': 'ཆོས་ཀྱི་ཉམས'}
{'score': 0.042447883635759354, 'token': 467, 'token_str': 'དབང', 'sequence': 'ཆོས་ཀྱི་དབང'}
{'score': 0.0358237698674202, 'token': 768, 'token_str': 'དད', 'sequence': 'ཆོས་ཀྱི་དད'}
3 རྫོགས་པའི་ <mask>
{'score': 0.06635843217372894, 'token': 323, 'token_str': 'ལས', 'sequence': 'རྫོགས་པའི་ལས'}
{'score': 0.06410858780145645, 'token': 360, 'token_str': 'གནས', 'sequence': 'རྫོགས་པའི་གནས'}
{'score': 0.0570441335439682, 'token': 573, 'token_str': 'གཏམ', 'sequence': 'རྫོགས་པའི་གཏམ'}
{'score': 0.05679900944232941, 'token': 397, 'token_str': 'ལམ', 'sequence': 'རྫོགས་པའི་ལམ'}
{'score': 0.05157950520515442, 'token': 543, 'token_str': 'མཚན', 'sequence': 'རྫོགས་པའི་མཚན'}
4 གངས་རིའི་ <mask>
{'score': 0.21429458260536194, 'token': 971, 'token_str': 'འདབས', 'sequence': 'གངས་རིའི་འདབས'}
{'score': 0.05296638607978821, 'token': 360, 'token_str': 'གནས', 'sequence': 'གངས་རིའི་གནས'}
{'score': 0.04839177057147026, 'token': 712, 'token_str': 'གངས', 'sequence': 'གངས་རིའི་གངས'}
{'score': 0.04389436915516853, 'token': 984, 'token_str': 'འདབ', 'sequence': 'གངས་རིའི་འདབ'}
{'score': 0.04158150777220726, 'token': 274, 'token_str': 'ན', 'sequence': 'གངས་རིའི་ན'}
5 མེ་ལོང་ <mask>
{'score': 0.19395706057548523, 'token': 323, 'token_str': 'ལས', 'sequence': 'མེ་ལོང་ལས'}
{'score': 0.12707622349262238, 'token': 293, 'token_str': 'དང', 'sequence': 'མེ་ལོང་དང'}
{'score': 0.08089829981327057, 'token': 280, 'token_str': 'མ', 'sequence': 'མེ་ལོང་མ'}
{'score': 0.06481984257698059, 'token': 279, 'token_str': 'ལ', 'sequence': 'མེ་ལོང་ལ'}
{'score': 0.0577043853700161, 'token': 362, 'token_str': 'ནང', 'sequence': 'མེ་ལོང་ནང'}
6 བདེན་པའི་ <mask>
{'score': 0.12633271515369415, 'token': 573, 'token_str': 'གཏམ', 'sequence': 'བདེན་པའི་གཏམ'}
{'score': 0.0909079909324646, 'token': 360, 'token_str': 'གནས', 'sequence': 'བདེན་པའི་གནས'}
{'score': 0.08624855428934097, 'token': 397, 'token_str': 'ལམ', 'sequence': 'བདེན་པའི་ལམ'}
{'score': 0.07476165890693665, 'token': 362, 'token_str': 'ནང', 'sequence': 'བདེན་པའི་ནང'}
{'score': 0.06319335103034973, 'token': 323, 'token_str': 'ལས', 'sequence': 'བདེན་པའི་ལས'}
7 'འབྱུང་ <mask>
{'score': 0.8271735906600952, 'token': 360, 'token_str': 'གནས', 'sequence': "'འབྱུང་གནས"}
{'score': 0.10802919417619705, 'token': 270, 'token_str': 'བ', 'sequence': "'འབྱུང་བ"}
{'score': 0.021947095170617104, 'token': 503, 'token_str': 'ཁམས', 'sequence': "'འབྱུང་ཁམས"}
{'score': 0.006081813480705023, 'token': 484, 'token_str': 'རབས', 'sequence': "'འབྱུང་རབས"}
{'score': 0.002384472405537963, 'token': 293, 'token_str': 'དང', 'sequence': "'འབྱུང་དང"}
```
# About
This model is trained and released by Sangjee Dondrub [sangjeedondrub at live dot com]; the sole purpose of these experiments is to improve my familiarity with the Transformers APIs. |
jenspt/bert_classification_27_04 | 3f855f9f59c22d02d91b8f737efe1a8e521b6b29 | 2022-04-27T14:10:02.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | jenspt | null | jenspt/bert_classification_27_04 | 7 | null | transformers | 14,368 | Entry not found |
NeuML/t5-small-bashsql | 4e9bed94c0454354aa4fb2db142ba62d413d0fde | 2022-04-28T13:12:43.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | NeuML | null | NeuML/t5-small-bashsql | 7 | null | transformers | 14,369 | ---
language: en
widget:
- text: "translate Bash to SQL: find -name \"feel good story\" -mtime -1"
example_title: Last day
- text: "translate Bash to SQL: find -name \"show me sports stories\" -mtime -1 -team \"Red Sox\""
example_title: Last day with filter
- text: "translate Bash to SQL: find -name \"breaking news\" -summary"
example_title: Summary
- text: "translate Bash to SQL: find -name \"breaking news\" -translate fr"
example_title: Translate to French
inference:
parameters:
max_length: 512
license: apache-2.0
---
# T5-small fine-tuned to generate txtai SQL
[T5 small](https://huggingface.co/t5-small) fine-tuned to generate [txtai](https://github.com/neuml/txtai) SQL. This model takes [Bash](https://en.wikipedia.org/wiki/Bash_(Unix_shell)) like commands and builds txtai-compatible SQL statements.
```
find -name "feel good story" -mtime -1
find -name "show me sports stories" -mtime -1 -team \"Red Sox\"
find -name "breaking news" -summary
find -name "breaking news" -translate fr
```
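A minimal inference sketch with `transformers` is shown below; the prompt format matches the widget examples, while the generation settings are illustrative assumptions rather than a prescribed configuration.
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Illustrative sketch; generation settings below are assumptions, not a prescribed config.
tokenizer = T5Tokenizer.from_pretrained("NeuML/t5-small-bashsql")
model = T5ForConditionalGeneration.from_pretrained("NeuML/t5-small-bashsql")

prompt = 'translate Bash to SQL: find -name "feel good story" -mtime -1'
inputs = tokenizer(prompt, return_tensors="pt")

# Generate the txtai-compatible SQL statement
outputs = model.generate(**inputs, max_length=512, num_beams=2)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```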
## Custom query syntax
This model is an example of creating a custom query syntax that can be translated into SQL txtai can understand. Any query syntax can be created. This one supports Bash-like commands but a similar strategy can be deployed to support other languages. Natural language can be translated to functions, query clauses, column selection and more.
See [t5-small-txtsql](https://huggingface.co/NeuML/t5-small-txtsql) for a model that translates natural language statements into txtai SQL.
## Model training
This model was trained using scripts that can be [found here](https://github.com/neuml/txtai/tree/master/models/bashsql).
Steps to train:
```bash
python generate.py bashsql.csv
python train.py bashsql.csv t5-small-bashsql
```
|
classla/wav2vec2-large-slavic-parlaspeech-hr | 6bd500f77d2a9f3b49a79102d4db388041be59c7 | 2022-05-18T13:58:26.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hr",
"dataset:parlaspeech-hr",
"transformers",
"audio",
"parlaspeech"
]
| automatic-speech-recognition | false | classla | null | classla/wav2vec2-large-slavic-parlaspeech-hr | 7 | 1 | transformers | 14,370 | ---
language: hr
datasets:
- parlaspeech-hr
tags:
- audio
- automatic-speech-recognition
- parlaspeech
widget:
- example_title: example 1
src: https://huggingface.co/classla/wav2vec2-xls-r-parlaspeech-hr/raw/main/1800.m4a
- example_title: example 2
src: https://huggingface.co/classla/wav2vec2-xls-r-parlaspeech-hr/raw/main/00020578b.flac.wav
- example_title: example 3
src: https://huggingface.co/classla/wav2vec2-xls-r-parlaspeech-hr/raw/main/00020570a.flac.wav
---
# wav2vec2-large-slavic-parlaspeech-hr
This model for Croatian ASR is based on the [facebook/wav2vec2-large-slavic-voxpopuli-v2 model](https://huggingface.co/facebook/wav2vec2-large-slavic-voxpopuli-v2) and was fine-tuned with 300 hours of recordings and transcripts from the ASR Croatian parliament dataset [ParlaSpeech-HR v1.0](http://hdl.handle.net/11356/1494).
If you use this model, please cite the following paper:
Nikola Ljubešić, Danijel Koržinek, Peter Rupnik, Ivo-Pavao Jazbec. ParlaSpeech-HR -- a freely available ASR dataset for Croatian bootstrapped from the ParlaMint corpus. Accepted at ParlaCLARIN@LREC.
## Metrics
Evaluation is performed on the dev and test portions of the [ParlaSpeech-HR v1.0](http://hdl.handle.net/11356/1494) dataset.
|split|CER|WER|
|---|---|---|
|dev|0.0311|0.0921|
|test|0.0222|0.0679|
## Usage in `transformers`
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
import soundfile as sf
import torch
import os
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# load model and tokenizer
processor = Wav2Vec2Processor.from_pretrained(
"classla/wav2vec2-large-slavic-parlaspeech-hr")
model = Wav2Vec2ForCTC.from_pretrained("classla/wav2vec2-large-slavic-parlaspeech-hr")
# download the example wav files:
os.system("wget https://huggingface.co/classla/wav2vec2-large-slavic-parlaspeech-hr/raw/main/00020570a.flac.wav")
# read the wav file
speech, sample_rate = sf.read("00020570a.flac.wav")
input_values = processor(speech, sampling_rate=sample_rate, return_tensors="pt").input_values.to(device)
# remove the raw wav file
os.system("rm 00020570a.flac.wav")
# retrieve logits
logits = model.to(device)(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.decode(predicted_ids[0]).lower()
# transcription: 'veliki broj poslovnih subjekata posluje sa minusom velik dio'
```
## Training hyperparameters
In fine-tuning, the following arguments were used:
| arg | value |
|-------------------------------|-------|
| `per_device_train_batch_size` | 16 |
| `gradient_accumulation_steps` | 4 |
| `num_train_epochs` | 8 |
| `learning_rate` | 3e-4 |
| `warmup_steps` | 500 | |
efederici/cross-encoder-distilbert-it | 30142b713ba540668032bd736435536022468203 | 2022-05-03T13:14:47.000Z | [
"pytorch",
"distilbert",
"text-classification",
"it",
"transformers",
"cross-encoder",
"sentence-similarity",
"license:apache-2.0"
]
| text-classification | false | efederici | null | efederici/cross-encoder-distilbert-it | 7 | null | transformers | 14,371 | ---
pipeline_tag: text-classification
license: apache-2.0
language:
- it
tags:
- cross-encoder
- sentence-similarity
- transformers
---
# Cross-Encoder
The model can be used for Information Retrieval: given a query, encode the query together with all candidate passages, then sort the passages in decreasing order of score.
<p align="center">
<img src="https://www.exibart.com/repository/media/2020/07/bridget-riley-cool-edge.jpg" width="400"> </br>
Bridget Riley, COOL EDGE
</p>
## Training Data
This model was trained on a custom biomedical ranking dataset.
## Usage and Performance
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('efederici/cross-encoder-distilbert-it')
scores = model.predict([('Sentence 1', 'Sentence 2'), ('Sentence 3', 'Sentence 4')])
```
The model will predict scores for the pairs `('Sentence 1', 'Sentence 2')` and `('Sentence 3', 'Sentence 4')`. |
lilykaw/distilbert-base-uncased-finetuned-stsb | 0f0f322c369e9a4edcf95bc841b60b4da5c1d0ca | 2022-04-28T22:46:26.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | lilykaw | null | lilykaw/distilbert-base-uncased-finetuned-stsb | 7 | null | transformers | 14,372 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: distilbert-base-uncased-finetuned-stsb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: 0.8651841336703003
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-stsb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5634
- Pearson: 0.8680
- Spearmanr: 0.8652
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|
| No log | 1.0 | 360 | 0.6646 | 0.8516 | 0.8494 |
| 1.0238 | 2.0 | 720 | 0.5617 | 0.8666 | 0.8637 |
| 0.3952 | 3.0 | 1080 | 0.6533 | 0.8649 | 0.8646 |
| 0.3952 | 4.0 | 1440 | 0.5889 | 0.8651 | 0.8625 |
| 0.2488 | 5.0 | 1800 | 0.5634 | 0.8680 | 0.8652 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
thusken/nb-bert-base-target-group | bec6a71230e50ff6df31a195e1fd78da0af14dde | 2022-05-06T12:24:27.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:cc-by-4.0",
"model-index"
]
| text-classification | false | thusken | null | thusken/nb-bert-base-target-group | 7 | null | transformers | 14,373 | ---
license: cc-by-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: nb-bert-base-target-group
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nb-bert-base-target-group
This model is a fine-tuned version of [NbAiLab/nb-bert-base](https://huggingface.co/NbAiLab/nb-bert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2820
- Accuracy: 0.8822
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2779 | 1.0 | 2032 | 0.2820 | 0.8822 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.12.1
|
chiragasarpota/scotus-bert | d93025e5c0fe0810f4f30bd5a1a9d5725916eee6 | 2022-04-29T16:36:17.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:apache-2.0"
]
| text-classification | false | chiragasarpota | null | chiragasarpota/scotus-bert | 7 | null | transformers | 14,374 | ---
license: apache-2.0
---
|
omar47/wav2vec2-large-xls-r-300m-urdu | 1b18aac1552bac1987a72c719267f2e59c38cbb4 | 2022-05-16T15:20:18.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | omar47 | null | omar47/wav2vec2-large-xls-r-300m-urdu | 7 | null | transformers | 14,375 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xls-r-300m-urdu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-urdu
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m).
It achieves the following results on the evaluation set:
- Loss: 0.5285
- Wer: 0.1702
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 35
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 16.9618 | 0.74 | 32 | 15.0745 | 1.0 |
| 9.1928 | 1.49 | 64 | 5.9361 | 1.0 |
| 4.9307 | 2.23 | 96 | 4.2924 | 1.0 |
| 3.8917 | 2.98 | 128 | 3.5873 | 1.0 |
| 3.3867 | 3.72 | 160 | 3.2594 | 1.0 |
| 3.2107 | 4.47 | 192 | 3.1718 | 1.0 |
| 3.1395 | 5.21 | 224 | 3.1281 | 1.0 |
| 3.115 | 5.95 | 256 | 3.1238 | 1.0 |
| 3.0801 | 6.7 | 288 | 3.0674 | 1.0 |
| 2.9725 | 7.44 | 320 | 2.8277 | 1.0 |
| 2.4159 | 8.19 | 352 | 1.7186 | 0.9036 |
| 1.3377 | 8.93 | 384 | 1.0271 | 0.6433 |
| 0.8591 | 9.67 | 416 | 0.8087 | 0.5441 |
| 0.726 | 10.42 | 448 | 0.7263 | 0.4634 |
| 0.6242 | 11.16 | 480 | 0.6783 | 0.4156 |
| 0.5417 | 11.91 | 512 | 0.6611 | 0.4305 |
| 0.4784 | 12.65 | 544 | 0.6300 | 0.3926 |
| 0.4198 | 13.4 | 576 | 0.5646 | 0.3499 |
| 0.3798 | 14.14 | 608 | 0.5919 | 0.3229 |
| 0.3356 | 14.88 | 640 | 0.5715 | 0.3369 |
| 0.2954 | 15.63 | 672 | 0.5325 | 0.2728 |
| 0.264 | 16.37 | 704 | 0.5535 | 0.2689 |
| 0.2535 | 17.12 | 736 | 0.5467 | 0.2366 |
| 0.2277 | 17.86 | 768 | 0.5219 | 0.2345 |
| 0.2141 | 18.6 | 800 | 0.5314 | 0.2487 |
| 0.2036 | 19.35 | 832 | 0.5382 | 0.2236 |
| 0.2021 | 20.09 | 864 | 0.5038 | 0.1922 |
| 0.1676 | 20.84 | 896 | 0.5238 | 0.2033 |
| 0.1544 | 21.58 | 928 | 0.5069 | 0.1866 |
| 0.1512 | 22.33 | 960 | 0.5045 | 0.1965 |
| 0.1512 | 23.07 | 992 | 0.5167 | 0.1862 |
| 0.1399 | 23.81 | 1024 | 0.5236 | 0.1840 |
| 0.1291 | 24.56 | 1056 | 0.5234 | 0.1957 |
| 0.1274 | 25.3 | 1088 | 0.5348 | 0.1943 |
| 0.127 | 26.05 | 1120 | 0.4978 | 0.1719 |
| 0.1105 | 26.79 | 1152 | 0.5067 | 0.1767 |
| 0.1069 | 27.53 | 1184 | 0.5150 | 0.1758 |
| 0.1058 | 28.28 | 1216 | 0.5218 | 0.1844 |
| 0.0999 | 29.02 | 1248 | 0.5375 | 0.1852 |
| 0.0964 | 29.77 | 1280 | 0.5373 | 0.1843 |
| 0.0971 | 30.51 | 1312 | 0.5190 | 0.1776 |
| 0.0906 | 31.26 | 1344 | 0.5217 | 0.1747 |
| 0.0909 | 32.0 | 1376 | 0.5204 | 0.1778 |
| 0.0784 | 32.74 | 1408 | 0.5336 | 0.1756 |
| 0.0823 | 33.49 | 1440 | 0.5281 | 0.1699 |
| 0.0834 | 34.23 | 1472 | 0.5292 | 0.1700 |
| 0.0827 | 34.98 | 1504 | 0.5285 | 0.1702 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
TehranNLP-org/bert-large-hateXplain | e9344baa4889877e63918285b4f55f1e24f5b3d9 | 2022-05-03T17:01:45.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | TehranNLP-org | null | TehranNLP-org/bert-large-hateXplain | 7 | null | transformers | 14,376 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: SEED0042
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: HATEXPLAIN
type: ''
args: hatexplain
metrics:
- name: Accuracy
type: accuracy
value: 0.40790842872008326
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SEED0042
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the HATEXPLAIN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7731
- Accuracy: 0.4079
- Accuracy 0: 0.8027
- Accuracy 1: 0.1869
- Accuracy 2: 0.2956
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: not_parallel
- gradient_accumulation_steps: 32
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 150
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Accuracy 0 | Accuracy 1 | Accuracy 2 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:----------:|:----------:|
| No log | 1.0 | 480 | 0.8029 | 0.4235 | 0.7589 | 0.0461 | 0.5985 |
| No log | 2.0 | 960 | 0.7574 | 0.4011 | 0.7470 | 0.1831 | 0.3376 |
| No log | 3.0 | 1440 | 0.7731 | 0.4079 | 0.8027 | 0.1869 | 0.2956 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu113
- Datasets 2.1.0
- Tokenizers 0.11.6
|
cuzeverynameistaken/wav2vec2-base-timit-demo-colab0 | fe329a0253f47c7f0e7868d381459f6d3814ea67 | 2022-05-01T08:59:37.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | cuzeverynameistaken | null | cuzeverynameistaken/wav2vec2-base-timit-demo-colab0 | 7 | null | transformers | 14,377 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab0
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6960
- Wer: 0.5694
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.3196 | 13.89 | 500 | 3.1225 | 1.0 |
| 1.2756 | 27.78 | 1000 | 0.6960 | 0.5694 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
pszemraj/mGPT-Peter-2E | 7663e8d936855df0c4b9b26a06a46d7b6d54672b | 2022-05-18T17:49:12.000Z | [
"pytorch",
"gpt2",
"text-generation",
"dataset:mc4",
"dataset:Wikipedia",
"transformers",
"multilingual",
"PyTorch",
"Transformers",
"gpt3",
"Deepspeed",
"Megatron",
"mGPT",
"license:apache-2.0"
]
| text-generation | false | pszemraj | null | pszemraj/mGPT-Peter-2E | 7 | null | transformers | 14,378 | ---
license: apache-2.0
pipeline_tag: text-generation
tags:
- multilingual
- PyTorch
- Transformers
- gpt3
- gpt2
- Deepspeed
- Megatron
- mGPT
datasets:
- mc4
- Wikipedia
widget:
- text: "Ich weiß, dass du müde bist, aber können wir heute Abend noch einen Spaziergang machen? peter szemraj: ich"
example_title: "walk - Deutsch"
- text: "peter szemraj: 我喜欢穿很酷的衣服"
example_title: "fashion - Chinese"
- text: "Wat zei je over mijn moeder? peter szemraj: ik"
example_title: "🚎 - Dutch"
- text: "Zagadka: Człowiekowi, który przebywał na dworze w deszczu bez parasola czy kapelusza, nie zmoczył się ani jeden włos na głowie. Dlaczego? peter szemraj: czy to"
example_title: "brain teaser - Polish"
- text: "Minha amiga diz que conhece todas as línguas, mas não fala nenhuma delas... o que há de errado com ela? peter szemraj: eu"
example_title: "language - Portuguese"
- text: "se potesse vivere ovunque, dove sarebbe? peter szemraj: io"
example_title: "dream living place - Italian"
- text: "Can you take me for dinner somewhere nice this time? peter szemraj:"
example_title: "dinner"
- text: "What really makes you angry? peter szemraj:"
example_title: "pet peeve"
- text: "Jak nazwać aligatora, który właśnie przeszedł operację usunięcia lewego ramienia?peter szemraj: ja"
example_title: "alligator - Polish"
- text: "Warum sind Transformers für die Sprachmodellierung wichtig? peter szemraj: es ist"
example_title: "Transformers - German"
- text: "как написать хорошие подсказки для языковых моделей? peter szemraj: сначала вам нужно"
example_title: "prompt tutorial - Russian"
- text: "Pewien mężczyzna wpycha swój samochód do hotelu i mówi właścicielowi, że jest bankrutem. Dlaczego? peter szemraj: może"
example_title: "brain teaser - Polish 2"
- text: "Zagadka: Mówię bez ust i słyszę bez uszu. Nie mam ciała, ale ożywiam się wraz z wiatrem. Czym jestem? peter szemraj: czy to"
example_title: "brain teaser - Polish 3"
- text: "Què t'agrada fer per divertir-te? peter szemraj: m'agrada"
example_title: "hobbies - Catalan"
- text: "为什么你总是那么累?peter szemraj: 呃,我想"
example_title: "tired - Chinese"
inference:
parameters:
min_length: 2
max_length: 64
do_sample: True
top_k: 10
top_p: 0.9
temperature: 0.65
repetition_penalty: 3.5
no_repeat_ngram_size: 3
length_penalty: 0.4
pad_token: 1
---
# mGPT: fine-tune on message data - 2E
- This model is a fine-tuned version of [sberbank-ai/mGPT](https://huggingface.co/sberbank-ai/mGPT) on 80k messages. This builds on the minimum-working-example checkpoint [here](https://huggingface.co/pszemraj/mGPT-Peter-mwe).
- 2E = 2 epochs
## Model description
- testing whether fine-tuned personality data bleeds over to other languages without the model being explicitly trained on them
**Interesting findings thus far:**
- Passing a generic word in the question's (non-English) language after the `<name-identifier>` helps ensure the model responds in that language (see any of the examples).
- Model generations (in general) remain semantically consistent, even if the generations switch from `<language>` to English in the middle of the generated text. This demonstrates some sort of "universal concept understanding".
### Usage in python
Install the transformers library if you don't have it:
```
pip install -U transformers
```
load the model into a pipeline object:
```
from transformers import pipeline
import torch
device = 'cuda' if torch.cuda.is_available() else 'cpu'
my_chatbot = pipeline('text-generation',
'pszemraj/mGPT-Peter-2E',
device=0 if device == 'cuda' else -1,
)
```
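Then generate a reply. The prompt format follows the widget examples above; the sampling settings below mirror the widget's inference parameters and are a starting point, not a requirement:
```
prompt = "What really makes you angry? peter szemraj:"
response = my_chatbot(prompt,
                      max_length=64,
                      do_sample=True,
                      top_k=10,
                      top_p=0.9,
                      temperature=0.65,
                      repetition_penalty=3.5,
                      no_repeat_ngram_size=3,
                      )
print(response[0]['generated_text'])
```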
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1 (in addition to all training on prior checkpoints)
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
dineshmane/bert-finetuned-mrpc | 2a3780ca3d2b5a6d0c83fca7066214ab3147c0aa | 2022-05-01T17:55:58.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | dineshmane | null | dineshmane/bert-finetuned-mrpc | 7 | null | transformers | 14,379 | Entry not found |
Ghani-25/SummFinFR | 18cf0d7f07f8fffdb0cd5df889da4efd87318a34 | 2022-05-02T13:15:08.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | Ghani-25 | null | Ghani-25/SummFinFR | 7 | null | transformers | 14,380 | Entry not found |
ali2066/DistilBERTFINAL_ctxSentence_TRAIN_essays_TEST_NULL_second_train_set_null_False | 17e7f04b5fad299a01915670cfafba1682d0f3f0 | 2022-05-02T13:33:27.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | ali2066 | null | ali2066/DistilBERTFINAL_ctxSentence_TRAIN_essays_TEST_NULL_second_train_set_null_False | 7 | null | transformers | 14,381 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: DistilBERTFINAL_ctxSentence_TRAIN_essays_TEST_NULL_second_train_set_null_False
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DistilBERTFINAL_ctxSentence_TRAIN_essays_TEST_NULL_second_train_set_null_False
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7680
- Precision: 0.9838
- Recall: 0.6632
- F1: 0.7923
- Accuracy: 0.6624
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 130 | 0.2980 | 0.9315 | 0.9533 | 0.9423 | 0.9081 |
| No log | 2.0 | 260 | 0.2053 | 0.9537 | 0.9626 | 0.9581 | 0.9338 |
| No log | 3.0 | 390 | 0.1873 | 0.9464 | 0.9907 | 0.9680 | 0.9485 |
| 0.3064 | 4.0 | 520 | 0.1811 | 0.9585 | 0.9720 | 0.9652 | 0.9449 |
| 0.3064 | 5.0 | 650 | 0.1887 | 0.9587 | 0.9766 | 0.9676 | 0.9485 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/DistilBERT_FINAL_ctxSentence_TRAIN_webDiscourse_TEST_NULL_second_train_set_null_False | c9dfd66b1bea7eaf4edc170d8deae57807a18d21 | 2022-05-02T18:27:20.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | ali2066 | null | ali2066/DistilBERT_FINAL_ctxSentence_TRAIN_webDiscourse_TEST_NULL_second_train_set_null_False | 7 | null | transformers | 14,382 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: DistilBERT_FINAL_ctxSentence_TRAIN_webDiscourse_TEST_NULL_second_train_set_null_False
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DistilBERT_FINAL_ctxSentence_TRAIN_webDiscourse_TEST_NULL_second_train_set_null_False
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0703
- Precision: 0.9667
- Recall: 0.0505
- F1: 0.0961
- Accuracy: 0.0766
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 95 | 0.5442 | 0.6667 | 0.1132 | 0.1935 | 0.75 |
| No log | 2.0 | 190 | 0.5316 | 0.5385 | 0.1321 | 0.2121 | 0.74 |
| No log | 3.0 | 285 | 0.5384 | 0.4615 | 0.2264 | 0.3038 | 0.725 |
| No log | 4.0 | 380 | 0.5503 | 0.4286 | 0.2264 | 0.2963 | 0.715 |
| No log | 5.0 | 475 | 0.5529 | 0.4286 | 0.2264 | 0.2963 | 0.715 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/DistilBERT_FINAL_ctxSentence_TRAIN_all_TEST_NULL_second_train_set_null_False | c3b8e2538407cc275c431bbaead1ef9a5039c455 | 2022-05-02T18:36:09.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | ali2066 | null | ali2066/DistilBERT_FINAL_ctxSentence_TRAIN_all_TEST_NULL_second_train_set_null_False | 7 | null | transformers | 14,383 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: DistilBERT_FINAL_ctxSentence_TRAIN_all_TEST_NULL_second_train_set_null_False
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DistilBERT_FINAL_ctxSentence_TRAIN_all_TEST_NULL_second_train_set_null_False
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0699
- Precision: 0.9942
- Recall: 0.9773
- F1: 0.9857
- Accuracy: 0.9725
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 479 | 0.4036 | 0.8333 | 0.9326 | 0.8802 | 0.8054 |
| 0.5047 | 2.0 | 958 | 0.3749 | 0.8635 | 0.9339 | 0.8973 | 0.8361 |
| 0.3336 | 3.0 | 1437 | 0.3789 | 0.8862 | 0.9184 | 0.9020 | 0.8471 |
| 0.2644 | 4.0 | 1916 | 0.4024 | 0.8762 | 0.9171 | 0.8962 | 0.8371 |
| 0.2233 | 5.0 | 2395 | 0.4195 | 0.8784 | 0.9171 | 0.8973 | 0.8391 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
laituan245/molt5-base-caption2smiles | c7689836acd99876a7255256505e378110140714 | 2022-05-03T18:08:45.000Z | [
"pytorch",
"t5",
"text2text-generation",
"arxiv:2204.11817",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | laituan245 | null | laituan245/molt5-base-caption2smiles | 7 | null | transformers | 14,384 | ---
license: apache-2.0
---
This model can be used to generate a SMILES string from an input caption.
## Example Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("laituan245/molt5-base-caption2smiles", model_max_length=512)
model = T5ForConditionalGeneration.from_pretrained('laituan245/molt5-base-caption2smiles')
input_text = 'The molecule is a monomethoxybenzene that is 2-methoxyphenol substituted by a hydroxymethyl group at position 4. It has a role as a plant metabolite. It is a member of guaiacols and a member of benzyl alcohols.'
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids, num_beams=5, max_length=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# The model will generate "COC1=C(C=CC(=C1)CCCO)O". The ground-truth is "COC1=C(C=CC(=C1)CO)O".
```
## Paper
For more information, please take a look at our paper.
Paper: [Translation between Molecules and Natural Language](https://arxiv.org/abs/2204.11817)
Authors: *Carl Edwards\*, Tuan Lai\*, Kevin Ros, Garrett Honke, Heng Ji*
|
tartuNLP/EstBERT_NER_v2 | fc9726c7b6ef3b876b3a826fa471e9120c19c4c3 | 2022-05-06T06:27:43.000Z | [
"pytorch",
"bert",
"token-classification",
"et",
"transformers",
"license:cc-by-4.0",
"autotrain_compatible"
]
| token-classification | false | tartuNLP | null | tartuNLP/EstBERT_NER_v2 | 7 | null | transformers | 14,385 |
---
language: et
license: cc-by-4.0
widget:
- text: "Eesti President on Alar Karis."
---
# Estonian NER model based on EstBERT
This model is a fine-tuned version of [tartuNLP/EstBERT](https://huggingface.co/tartuNLP/EstBERT) on the Estonian NER dataset. The model was trained by tartuNLP, the NLP research group at the Institute of Computer Science at the University of Tartu.
It achieves the following results on the test set:
- Loss: 0.3565
- Precision: 0.7612
- Recall: 0.7744
- F1: 0.7678
- Accuracy: 0.9672
The entity-level results are as follows:
| | Precision | Recall | F1 | Number |
|---------| --------- | ------- | ------- | ------- |
| DATE | 0.7278 | 0.7258 | 0.7268 | 372 |
| EVENT | 0.3721 | 0.5714 | 0.4507 | 28 |
| GPE | 0.8679 | 0.8369 | 0.8521 | 840 |
| LOC | 0.6545 | 0.4832 | 0.5560 | 149 |
| MONEY | 0.6625 | 0.6023 | 0.6310 | 88 |
| ORG | 0.6761 | 0.7267 | 0.7005 | 589 |
| PER | 0.8255 | 0.9068 | 0.8642 | 751 |
| PERCENT | 1.0 | 0.9589 | 0.9790 | 73 |
| PROD | 0.6030 | 0.5430 | 0.5714 | 221 |
| TIME | 0.5682 | 0.5556 | 0.5618 | 45 |
| TITLE | 0.7 | 0.8063 | 0.7494 | 191 |
## How to use
You can use this model with Transformers pipeline for NER. Post-processing of results may be necessary as the model occasionally tags subword tokens as entities.
```
from transformers import BertTokenizer, BertForTokenClassification
from transformers import pipeline
tokenizer = BertTokenizer.from_pretrained('tartuNLP/EstBERT_NER_v2')
bertner = BertForTokenClassification.from_pretrained('tartuNLP/EstBERT_NER_v2')
nlp = pipeline("ner", model=bertner, tokenizer=tokenizer)
text = "Kaia Kanepi (WTA 57.) langes USA-s Charlestonis toimuval WTA 500 kategooria tenniseturniiril konkurentsist kaheksandikfinaalis, kaotades poolatarile Magda Linette'ile (WTA 64.) 3 : 6, 6 : 4, 2 : 6."
ner_results = nlp(text)
tokens=tokenizer(text)
tokens=tokenizer.convert_ids_to_tokens(tokens['input_ids'])
print(f'tokens: {tokens}')
print(f'NER model:{ner_results}')
```
```
tokens: ['[CLS]', 'kai', '##a', 'kanepi', '(', 'w', '##ta', '57', '.', ')', 'langes', 'usa', '-', 's', 'cha', '##rl', '##est', '##onis', 'toimuval', 'w', '##ta', '500', 'kategooria', 'tennise', '##turniiril', 'konkurentsist', 'kaheksandik', '##finaalis', ',', 'kaotades', 'poola', '##tari', '##le', 'ma', '##gda', 'line', '##tte', "'", 'ile', '(', 'w', '##ta', '64', '.', ')', '3', ':', '6', ',', '6', ':', '4', ',', '2', ':', '6', '.', '[SEP]']
```
```
NER model: [{'entity': 'B-PER', 'score': 0.99999887, 'index': 1, 'word': 'kai', 'start': None, 'end': None}, {'entity': 'B-PER', 'score': 0.97371966, 'index': 2, 'word': '##a', 'start': None, 'end': None}, {'entity': 'I-PER', 'score': 0.99999815, 'index': 3, 'word': 'kanepi', 'start': None, 'end': None}, {'entity': 'B-ORG', 'score': 0.63085276, 'index': 5, 'word': 'w', 'start': None, 'end': None}, {'entity': 'B-GPE', 'score': 0.99999934, 'index': 11, 'word': 'usa', 'start': None, 'end': None}, {'entity': 'B-GPE', 'score': 0.9999685, 'index': 14, 'word': 'cha', 'start': None, 'end': None}, {'entity': 'I-GPE', 'score': 0.8875574, 'index': 15, 'word': '##rl', 'start': None, 'end': None}, {'entity': 'I-GPE', 'score': 0.9996168, 'index': 16, 'word': '##est', 'start': None, 'end': None}, {'entity': 'I-GPE', 'score': 0.9992657, 'index': 17, 'word': '##onis', 'start': None, 'end': None}, {'entity': 'B-EVENT', 'score': 0.99999064, 'index': 19, 'word': 'w', 'start': None, 'end': None}, {'entity': 'I-EVENT', 'score': 0.9772493, 'index': 20, 'word': '##ta', 'start': None, 'end': None}, {'entity': 'I-EVENT', 'score': 0.99999076, 'index': 21, 'word': '500', 'start': None, 'end': None}, {'entity': 'I-EVENT', 'score': 0.99955636, 'index': 22, 'word': 'kategooria', 'start': None, 'end': None}, {'entity': 'B-TITLE', 'score': 0.8771319, 'index': 30, 'word': 'poola', 'start': None, 'end': None}, {'entity': 'B-PER', 'score': 0.99999785, 'index': 33, 'word': 'ma', 'start': None, 'end': None}, {'entity': 'B-PER', 'score': 0.9998398, 'index': 34, 'word': '##gda', 'start': None, 'end': None}, {'entity': 'I-PER', 'score': 0.9999987, 'index': 35, 'word': 'line', 'start': None, 'end': None}, {'entity': 'I-PER', 'score': 0.9999976, 'index': 36, 'word': '##tte', 'start': None, 'end': None}, {'entity': 'I-PER', 'score': 0.99999285, 'index': 37, 'word': "'", 'start': None, 'end': None}, {'entity': 'I-PER', 'score': 0.9999794, 'index': 38, 'word': 'ile', 'start': None, 'end': None}, {'entity': 'B-ORG', 'score': 0.7664479, 'index': 40, 'word': 'w', 'start': None, 'end': None}]
```
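To reduce the subword fragmentation visible above, the pipeline can group tokens into entity spans. This relies on the generic `aggregation_strategy` option of the `transformers` NER pipeline rather than anything specific to this model, so the grouped spans should still be spot-checked:
```
nlp_grouped = pipeline("ner", model=bertner, tokenizer=tokenizer, aggregation_strategy="simple")
print(nlp_grouped(text))
```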
## Intended uses & limitations
This model can be used to find named entities from Estonian texts. The model is free to use for anyone. TartuNLP does not guarantee that the model is useful for anyone or anything. TartuNLP is not responsible for any results it generates.
## Training and evaluation data
The model was trained on two Estonian NER datasets:
- [The Reannotated Estonian NER corpus](https://metashare.ut.ee/repository/browse/reannotated-estonian-ner-corpus/bd43f1f614a511eca6e4fa163e9d45477d086613d2894fd5af79bf13e3f13594/)
- [The New Estonian NER corpus](https://metashare.ut.ee/repository/browse/new-estonian-ner-corpus/98b6706c963c11eba6e4fa163e9d45470bcd0533b6994c93ab8b8c628516ffed/)
Both datasets have been annotated with the same annotation scheme. For training this model, the datasets were joined.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 1024
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
- lr_scheduler_type: polynomial
- max num_epochs: 150
- early stopping limit: 20
- early stopping tol: 0.0001
- mixed_precision_training: Native AMP
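A minimal sketch of how these settings map onto the Hugging Face `Trainer` API is shown below. The original training script is not part of this card, so `output_dir`, `metric_for_best_model`, and the evaluation/save strategies are assumptions; the remaining values mirror the list above, with the early-stopping limit and tolerance expressed through `EarlyStoppingCallback`.
```python
from transformers import EarlyStoppingCallback, TrainingArguments

# Illustrative mapping of the listed hyperparameters; not the authors' script.
training_args = TrainingArguments(
    output_dir="est-ner",                # assumption
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=1024,
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-6,
    lr_scheduler_type="polynomial",
    num_train_epochs=150,                # max num_epochs
    fp16=True,                           # mixed precision (Native AMP)
    evaluation_strategy="epoch",         # assumption
    save_strategy="epoch",               # assumption
    load_best_model_at_end=True,
    metric_for_best_model="overall_f1",  # assumption: overall F1 on the dev set
)

# The early-stopping limit and tolerance become a Trainer callback.
early_stopping = EarlyStoppingCallback(
    early_stopping_patience=20,
    early_stopping_threshold=0.0001,
)
```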
### Training results
The final model was saved after epoch 53 (shown in bold) where the overall F1 was the highest on the development set.
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | Date Precision | Date Recall | Date F1 | Date Number | Event Precision | Event Recall | Event F1 | Event Number | Gpe Precision | Gpe Recall | Gpe F1 | Gpe Number | Loc Precision | Loc Recall | Loc F1 | Loc Number | Money Precision | Money Recall | Money F1 | Money Number | Org Precision | Org Recall | Org F1 | Org Number | Per Precision | Per Recall | Per F1 | Per Number | Percent Precision | Percent Recall | Percent F1 | Percent Number | Prod Precision | Prod Recall | Prod F1 | Prod Number | Time Precision | Time Recall | Time F1 | Time Number | Title Precision | Title Recall | Title F1 | Title Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|:--------------:|:-----------:|:-------:|:-----------:|:---------------:|:------------:|:--------:|:------------:|:-------------:|:----------:|:------:|:----------:|:-------------:|:----------:|:------:|:----------:|:---------------:|:------------:|:--------:|:------------:|:-------------:|:----------:|:------:|:----------:|:-------------:|:----------:|:------:|:----------:|:-----------------:|:--------------:|:----------:|:--------------:|:--------------:|:-----------:|:-------:|:-----------:|:--------------:|:-----------:|:-------:|:-----------:|:---------------:|:------------:|:--------:|:------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.3252 | 1 | 1061 | 0.1628 | 0.6835 | 0.6083 | 0.6437 | 0.9526 | 0.5910 | 0.6022 | 0.5965 | 372 | 0.0 | 0.0 | 0.0 | 28 | 0.8073 | 0.7631 | 0.7846 | 840 | 0.1389 | 0.0336 | 0.0541 | 149 | 0.4217 | 0.3977 | 0.4094 | 88 | 0.5381 | 0.5280 | 0.5330 | 589 | 0.7917 | 0.8655 | 0.8270 | 751 | 0.6471 | 0.3014 | 0.4112 | 73 | 0.2581 | 0.0724 | 0.1131 | 221 | 0.1429 | 0.0889 | 0.1096 | 45 | 0.7805 | 0.6702 | 0.7211 | 191 | 0.6835 | 0.6083 | 0.6437 | 0.9526 |
| 0.1513 | 2 | 2122 | 0.1332 | 0.6906 | 0.7329 | 0.7111 | 0.9615 | 0.6185 | 0.7366 | 0.6724 | 372 | 0.0857 | 0.1071 | 0.0952 | 28 | 0.7874 | 0.8595 | 0.8219 | 840 | 0.4767 | 0.2752 | 0.3489 | 149 | 0.6848 | 0.7159 | 0.7000 | 88 | 0.6158 | 0.6231 | 0.6194 | 589 | 0.7770 | 0.9001 | 0.8341 | 751 | 0.9565 | 0.9041 | 0.9296 | 73 | 0.5 | 0.3620 | 0.4199 | 221 | 0.3571 | 0.3333 | 0.3448 | 45 | 0.6033 | 0.7644 | 0.6744 | 191 | 0.6906 | 0.7329 | 0.7111 | 0.9615 |
| 0.1131 | 3 | 3183 | 0.1281 | 0.7224 | 0.7338 | 0.7280 | 0.9638 | 0.7054 | 0.7339 | 0.7194 | 372 | 0.1053 | 0.1429 | 0.1212 | 28 | 0.8013 | 0.85 | 0.8250 | 840 | 0.5476 | 0.3087 | 0.3948 | 149 | 0.6386 | 0.6023 | 0.6199 | 88 | 0.6371 | 0.6469 | 0.6420 | 589 | 0.8235 | 0.8762 | 0.8490 | 751 | 0.9859 | 0.9589 | 0.9722 | 73 | 0.5148 | 0.3937 | 0.4462 | 221 | 0.5116 | 0.4889 | 0.5 | 45 | 0.6245 | 0.7749 | 0.6916 | 191 | 0.7224 | 0.7338 | 0.7280 | 0.9638 |
| 0.0884 | 4 | 4244 | 0.1354 | 0.7283 | 0.7386 | 0.7334 | 0.9639 | 0.6785 | 0.6694 | 0.6739 | 372 | 0.1795 | 0.25 | 0.2090 | 28 | 0.8231 | 0.8310 | 0.8270 | 840 | 0.6020 | 0.3960 | 0.4777 | 149 | 0.6092 | 0.6023 | 0.6057 | 88 | 0.6473 | 0.7012 | 0.6732 | 589 | 0.8351 | 0.8628 | 0.8487 | 751 | 1.0 | 0.9726 | 0.9861 | 73 | 0.5899 | 0.4751 | 0.5263 | 221 | 0.4524 | 0.4222 | 0.4368 | 45 | 0.6 | 0.7853 | 0.6803 | 191 | 0.7283 | 0.7386 | 0.7334 | 0.9639 |
| 0.0685 | 5 | 5305 | 0.1383 | 0.7224 | 0.7696 | 0.7453 | 0.9644 | 0.6635 | 0.7473 | 0.7029 | 372 | 0.26 | 0.4643 | 0.3333 | 28 | 0.8259 | 0.8357 | 0.8308 | 840 | 0.5913 | 0.4564 | 0.5152 | 149 | 0.6437 | 0.6364 | 0.64 | 88 | 0.6540 | 0.7284 | 0.6892 | 589 | 0.8070 | 0.8961 | 0.8492 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.5693 | 0.5204 | 0.5437 | 221 | 0.5192 | 0.6 | 0.5567 | 45 | 0.6320 | 0.7644 | 0.6919 | 191 | 0.7224 | 0.7696 | 0.7453 | 0.9644 |
| 0.0532 | 6 | 6366 | 0.1493 | 0.7099 | 0.7613 | 0.7347 | 0.9631 | 0.6727 | 0.6962 | 0.6843 | 372 | 0.2308 | 0.5357 | 0.3226 | 28 | 0.8242 | 0.8262 | 0.8252 | 840 | 0.5877 | 0.4497 | 0.5095 | 149 | 0.6410 | 0.5682 | 0.6024 | 88 | 0.6232 | 0.7470 | 0.6795 | 589 | 0.8087 | 0.8895 | 0.8472 | 751 | 0.9672 | 0.8082 | 0.8806 | 73 | 0.5107 | 0.5385 | 0.5242 | 221 | 0.6190 | 0.5778 | 0.5977 | 45 | 0.6371 | 0.7906 | 0.7056 | 191 | 0.7099 | 0.7613 | 0.7347 | 0.9631 |
| 0.0403 | 7 | 7427 | 0.1592 | 0.7239 | 0.7592 | 0.7411 | 0.9642 | 0.6923 | 0.7016 | 0.6969 | 372 | 0.2857 | 0.5714 | 0.3810 | 28 | 0.8272 | 0.8262 | 0.8267 | 840 | 0.5752 | 0.4362 | 0.4962 | 149 | 0.6265 | 0.5909 | 0.6082 | 88 | 0.6402 | 0.6978 | 0.6677 | 589 | 0.8404 | 0.8762 | 0.8579 | 751 | 0.9859 | 0.9589 | 0.9722 | 73 | 0.5257 | 0.6018 | 0.5612 | 221 | 0.5870 | 0.6 | 0.5934 | 45 | 0.6235 | 0.8063 | 0.7032 | 191 | 0.7239 | 0.7592 | 0.7411 | 0.9642 |
| 0.0304 | 8 | 8488 | 0.1738 | 0.7301 | 0.7484 | 0.7392 | 0.9644 | 0.6866 | 0.6774 | 0.6820 | 372 | 0.3409 | 0.5357 | 0.4167 | 28 | 0.8393 | 0.8083 | 0.8235 | 840 | 0.5882 | 0.4698 | 0.5224 | 149 | 0.6429 | 0.6136 | 0.6279 | 88 | 0.6608 | 0.6978 | 0.6788 | 589 | 0.8268 | 0.8708 | 0.8482 | 751 | 0.9595 | 0.9726 | 0.9660 | 73 | 0.5351 | 0.5520 | 0.5434 | 221 | 0.5208 | 0.5556 | 0.5376 | 45 | 0.6204 | 0.7958 | 0.6972 | 191 | 0.7301 | 0.7484 | 0.7392 | 0.9644 |
| 0.0234 | 9 | 9549 | 0.1860 | 0.7248 | 0.7625 | 0.7432 | 0.9641 | 0.6947 | 0.7097 | 0.7021 | 372 | 0.2963 | 0.5714 | 0.3902 | 28 | 0.8317 | 0.8298 | 0.8308 | 840 | 0.5913 | 0.4564 | 0.5152 | 149 | 0.6118 | 0.5909 | 0.6012 | 88 | 0.6361 | 0.7063 | 0.6693 | 589 | 0.8410 | 0.8735 | 0.8570 | 751 | 0.9859 | 0.9589 | 0.9722 | 73 | 0.5212 | 0.6109 | 0.5625 | 221 | 0.5417 | 0.5778 | 0.5591 | 45 | 0.6414 | 0.7958 | 0.7103 | 191 | 0.7248 | 0.7625 | 0.7432 | 0.9641 |
| 0.0178 | 10 | 10610 | 0.2037 | 0.7434 | 0.7383 | 0.7408 | 0.9640 | 0.7159 | 0.6774 | 0.6961 | 372 | 0.2857 | 0.4286 | 0.3429 | 28 | 0.8333 | 0.8333 | 0.8333 | 840 | 0.6262 | 0.4497 | 0.5234 | 149 | 0.6324 | 0.4886 | 0.5513 | 88 | 0.6568 | 0.6757 | 0.6661 | 589 | 0.8291 | 0.8722 | 0.8501 | 751 | 1.0 | 0.8219 | 0.9023 | 73 | 0.5672 | 0.5158 | 0.5403 | 221 | 0.5 | 0.5333 | 0.5161 | 45 | 0.6952 | 0.7644 | 0.7282 | 191 | 0.7434 | 0.7383 | 0.7408 | 0.9640 |
| 0.0147 | 11 | 11671 | 0.2114 | 0.7440 | 0.7233 | 0.7335 | 0.9643 | 0.7009 | 0.6613 | 0.6805 | 372 | 0.3030 | 0.3571 | 0.3279 | 28 | 0.8352 | 0.8024 | 0.8185 | 840 | 0.6238 | 0.4228 | 0.504 | 149 | 0.65 | 0.5909 | 0.6190 | 88 | 0.6436 | 0.6469 | 0.6452 | 589 | 0.8407 | 0.8575 | 0.8490 | 751 | 0.9315 | 0.9315 | 0.9315 | 73 | 0.5812 | 0.5023 | 0.5388 | 221 | 0.5476 | 0.5111 | 0.5287 | 45 | 0.6835 | 0.7801 | 0.7286 | 191 | 0.7440 | 0.7233 | 0.7335 | 0.9643 |
| 0.0118 | 12 | 12732 | 0.2218 | 0.7331 | 0.7532 | 0.7430 | 0.9649 | 0.7119 | 0.6909 | 0.7012 | 372 | 0.3488 | 0.5357 | 0.4225 | 28 | 0.8325 | 0.8405 | 0.8365 | 840 | 0.5303 | 0.4698 | 0.4982 | 149 | 0.65 | 0.5909 | 0.6190 | 88 | 0.6690 | 0.6587 | 0.6638 | 589 | 0.8178 | 0.8908 | 0.8528 | 751 | 0.9677 | 0.8219 | 0.8889 | 73 | 0.5408 | 0.5701 | 0.5551 | 221 | 0.5102 | 0.5556 | 0.5319 | 45 | 0.6567 | 0.8010 | 0.7217 | 191 | 0.7331 | 0.7532 | 0.7430 | 0.9649 |
| 0.0093 | 13 | 13793 | 0.2283 | 0.7495 | 0.7359 | 0.7427 | 0.9644 | 0.7163 | 0.6989 | 0.7075 | 372 | 0.3810 | 0.5714 | 0.4571 | 28 | 0.8612 | 0.7905 | 0.8243 | 840 | 0.6111 | 0.4430 | 0.5136 | 149 | 0.6145 | 0.5795 | 0.5965 | 88 | 0.6775 | 0.6740 | 0.6757 | 589 | 0.8346 | 0.8802 | 0.8568 | 751 | 0.9710 | 0.9178 | 0.9437 | 73 | 0.5619 | 0.5339 | 0.5476 | 221 | 0.4 | 0.4889 | 0.4400 | 45 | 0.6812 | 0.7382 | 0.7085 | 191 | 0.7495 | 0.7359 | 0.7427 | 0.9644 |
| 0.0079 | 14 | 14854 | 0.2383 | 0.7371 | 0.7490 | 0.7430 | 0.9647 | 0.6727 | 0.7016 | 0.6868 | 372 | 0.3261 | 0.5357 | 0.4054 | 28 | 0.8453 | 0.8 | 0.8220 | 840 | 0.5963 | 0.4362 | 0.5039 | 149 | 0.625 | 0.5682 | 0.5952 | 88 | 0.6634 | 0.6927 | 0.6777 | 589 | 0.8433 | 0.8815 | 0.8620 | 751 | 0.9853 | 0.9178 | 0.9504 | 73 | 0.5427 | 0.5747 | 0.5582 | 221 | 0.5814 | 0.5556 | 0.5682 | 45 | 0.6513 | 0.8115 | 0.7226 | 191 | 0.7371 | 0.7490 | 0.7430 | 0.9647 |
| 0.0068 | 15 | 15915 | 0.2511 | 0.7255 | 0.7359 | 0.7306 | 0.9639 | 0.6826 | 0.6532 | 0.6676 | 372 | 0.3590 | 0.5 | 0.4179 | 28 | 0.8295 | 0.8167 | 0.8230 | 840 | 0.5263 | 0.4698 | 0.4965 | 149 | 0.6575 | 0.5455 | 0.5963 | 88 | 0.6549 | 0.6604 | 0.6577 | 589 | 0.8242 | 0.8802 | 0.8513 | 751 | 0.9833 | 0.8082 | 0.8872 | 73 | 0.5398 | 0.5520 | 0.5459 | 221 | 0.36 | 0.4 | 0.3789 | 45 | 0.6511 | 0.8010 | 0.7183 | 191 | 0.7255 | 0.7359 | 0.7306 | 0.9639 |
| 0.0061 | 16 | 16976 | 0.2497 | 0.7253 | 0.7690 | 0.7465 | 0.9648 | 0.6824 | 0.6989 | 0.6906 | 372 | 0.3333 | 0.5357 | 0.4110 | 28 | 0.8473 | 0.8321 | 0.8396 | 840 | 0.4583 | 0.5168 | 0.4858 | 149 | 0.6494 | 0.5682 | 0.6061 | 88 | 0.6556 | 0.7368 | 0.6938 | 589 | 0.8382 | 0.8828 | 0.8599 | 751 | 0.9841 | 0.8493 | 0.9118 | 73 | 0.5341 | 0.6380 | 0.5814 | 221 | 0.5 | 0.5333 | 0.5161 | 45 | 0.6622 | 0.7801 | 0.7163 | 191 | 0.7253 | 0.7690 | 0.7465 | 0.9648 |
| 0.0054 | 17 | 18037 | 0.2554 | 0.7323 | 0.7625 | 0.7471 | 0.9650 | 0.6870 | 0.6962 | 0.6916 | 372 | 0.3421 | 0.4643 | 0.3939 | 28 | 0.8463 | 0.8262 | 0.8361 | 840 | 0.5902 | 0.4832 | 0.5314 | 149 | 0.6753 | 0.5909 | 0.6303 | 88 | 0.6640 | 0.7148 | 0.6885 | 589 | 0.8317 | 0.8948 | 0.8621 | 751 | 0.9437 | 0.9178 | 0.9306 | 73 | 0.5210 | 0.5611 | 0.5403 | 221 | 0.5 | 0.5111 | 0.5055 | 45 | 0.6102 | 0.8115 | 0.6966 | 191 | 0.7323 | 0.7625 | 0.7471 | 0.9650 |
| 0.005 | 18 | 19098 | 0.2601 | 0.7273 | 0.7747 | 0.7503 | 0.9654 | 0.6970 | 0.7608 | 0.7275 | 372 | 0.2830 | 0.5357 | 0.3704 | 28 | 0.8320 | 0.8488 | 0.8403 | 840 | 0.5841 | 0.4430 | 0.5038 | 149 | 0.6477 | 0.6477 | 0.6477 | 88 | 0.6378 | 0.6995 | 0.6672 | 589 | 0.8501 | 0.8908 | 0.8700 | 751 | 0.9722 | 0.9589 | 0.9655 | 73 | 0.5323 | 0.5973 | 0.5629 | 221 | 0.4444 | 0.4444 | 0.4444 | 45 | 0.624 | 0.8168 | 0.7075 | 191 | 0.7273 | 0.7747 | 0.7503 | 0.9654 |
| 0.0044 | 19 | 20159 | 0.2602 | 0.7369 | 0.7616 | 0.7490 | 0.9656 | 0.7124 | 0.7124 | 0.7124 | 372 | 0.3415 | 0.5 | 0.4058 | 28 | 0.8239 | 0.8631 | 0.8430 | 840 | 0.6355 | 0.4564 | 0.5313 | 149 | 0.6667 | 0.6136 | 0.6391 | 88 | 0.6517 | 0.6638 | 0.6577 | 589 | 0.8405 | 0.8842 | 0.8618 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.5144 | 0.5656 | 0.5388 | 221 | 0.5217 | 0.5333 | 0.5275 | 45 | 0.6550 | 0.7853 | 0.7143 | 191 | 0.7369 | 0.7616 | 0.7490 | 0.9656 |
| 0.004 | 20 | 21220 | 0.2677 | 0.7347 | 0.7702 | 0.7520 | 0.9658 | 0.7374 | 0.7097 | 0.7233 | 372 | 0.2857 | 0.4286 | 0.3429 | 28 | 0.8466 | 0.8345 | 0.8405 | 840 | 0.6050 | 0.4832 | 0.5373 | 149 | 0.6667 | 0.6136 | 0.6391 | 88 | 0.6593 | 0.7131 | 0.6852 | 589 | 0.8240 | 0.8975 | 0.8591 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.4981 | 0.5837 | 0.5375 | 221 | 0.5102 | 0.5556 | 0.5319 | 45 | 0.6371 | 0.8272 | 0.7198 | 191 | 0.7347 | 0.7702 | 0.7520 | 0.9658 |
| 0.0034 | 21 | 22281 | 0.2743 | 0.7386 | 0.7717 | 0.7548 | 0.9657 | 0.6984 | 0.7097 | 0.704 | 372 | 0.3784 | 0.5 | 0.4308 | 28 | 0.8475 | 0.8333 | 0.8403 | 840 | 0.6333 | 0.5101 | 0.5651 | 149 | 0.6190 | 0.5909 | 0.6047 | 88 | 0.6512 | 0.7385 | 0.6921 | 589 | 0.8428 | 0.8921 | 0.8668 | 751 | 0.9846 | 0.8767 | 0.9275 | 73 | 0.5513 | 0.5837 | 0.5670 | 221 | 0.5106 | 0.5333 | 0.5217 | 45 | 0.6379 | 0.8115 | 0.7143 | 191 | 0.7386 | 0.7717 | 0.7548 | 0.9657 |
| 0.0033 | 22 | 23342 | 0.2788 | 0.7418 | 0.7520 | 0.7469 | 0.9652 | 0.7143 | 0.6989 | 0.7065 | 372 | 0.3182 | 0.5 | 0.3889 | 28 | 0.8367 | 0.8298 | 0.8332 | 840 | 0.6168 | 0.4430 | 0.5156 | 149 | 0.6235 | 0.6023 | 0.6127 | 88 | 0.6758 | 0.6689 | 0.6724 | 589 | 0.8327 | 0.8815 | 0.8564 | 751 | 0.9714 | 0.9315 | 0.9510 | 73 | 0.5458 | 0.5928 | 0.5683 | 221 | 0.4783 | 0.4889 | 0.4835 | 45 | 0.6637 | 0.7853 | 0.7194 | 191 | 0.7418 | 0.7520 | 0.7469 | 0.9652 |
| 0.0033 | 23 | 24403 | 0.2831 | 0.7342 | 0.7535 | 0.7437 | 0.9650 | 0.6981 | 0.6962 | 0.6972 | 372 | 0.3784 | 0.5 | 0.4308 | 28 | 0.8499 | 0.8024 | 0.8255 | 840 | 0.5034 | 0.4966 | 0.5 | 149 | 0.6067 | 0.6136 | 0.6102 | 88 | 0.6581 | 0.6961 | 0.6766 | 589 | 0.8350 | 0.8961 | 0.8645 | 751 | 0.9714 | 0.9315 | 0.9510 | 73 | 0.5424 | 0.5792 | 0.5602 | 221 | 0.3774 | 0.4444 | 0.4082 | 45 | 0.7048 | 0.7749 | 0.7382 | 191 | 0.7342 | 0.7535 | 0.7437 | 0.9650 |
| 0.0029 | 24 | 25464 | 0.2931 | 0.7544 | 0.7380 | 0.7461 | 0.9648 | 0.7365 | 0.6989 | 0.7172 | 372 | 0.3590 | 0.5 | 0.4179 | 28 | 0.8535 | 0.7976 | 0.8246 | 840 | 0.5849 | 0.4161 | 0.4863 | 149 | 0.6622 | 0.5568 | 0.6049 | 88 | 0.6672 | 0.6706 | 0.6689 | 589 | 0.8474 | 0.8802 | 0.8635 | 751 | 0.9701 | 0.8904 | 0.9286 | 73 | 0.5550 | 0.5475 | 0.5513 | 221 | 0.4889 | 0.4889 | 0.4889 | 45 | 0.7023 | 0.7906 | 0.7438 | 191 | 0.7544 | 0.7380 | 0.7461 | 0.9648 |
| 0.0028 | 25 | 26525 | 0.2899 | 0.7489 | 0.7574 | 0.7531 | 0.9654 | 0.7021 | 0.7097 | 0.7059 | 372 | 0.3902 | 0.5714 | 0.4638 | 28 | 0.8635 | 0.8131 | 0.8375 | 840 | 0.6182 | 0.4564 | 0.5251 | 149 | 0.6471 | 0.625 | 0.6358 | 88 | 0.6613 | 0.6995 | 0.6799 | 589 | 0.8454 | 0.9028 | 0.8731 | 751 | 0.9583 | 0.9452 | 0.9517 | 73 | 0.5681 | 0.5475 | 0.5576 | 221 | 0.4222 | 0.4222 | 0.4222 | 45 | 0.6608 | 0.7853 | 0.7177 | 191 | 0.7489 | 0.7574 | 0.7531 | 0.9654 |
| 0.0023 | 26 | 27586 | 0.2922 | 0.7413 | 0.7532 | 0.7472 | 0.9649 | 0.6897 | 0.6989 | 0.6943 | 372 | 0.35 | 0.5 | 0.4118 | 28 | 0.85 | 0.8298 | 0.8398 | 840 | 0.6161 | 0.4631 | 0.5287 | 149 | 0.6486 | 0.5455 | 0.5926 | 88 | 0.6486 | 0.6927 | 0.6700 | 589 | 0.8457 | 0.8828 | 0.8638 | 751 | 0.9853 | 0.9178 | 0.9504 | 73 | 0.5636 | 0.5611 | 0.5624 | 221 | 0.3958 | 0.4222 | 0.4086 | 45 | 0.6638 | 0.7958 | 0.7238 | 191 | 0.7413 | 0.7532 | 0.7472 | 0.9649 |
| 0.0021 | 27 | 28647 | 0.2967 | 0.7514 | 0.7568 | 0.7541 | 0.9656 | 0.7081 | 0.7043 | 0.7062 | 372 | 0.3659 | 0.5357 | 0.4348 | 28 | 0.8547 | 0.8190 | 0.8365 | 840 | 0.5641 | 0.4430 | 0.4962 | 149 | 0.6582 | 0.5909 | 0.6228 | 88 | 0.6677 | 0.7097 | 0.6881 | 589 | 0.8459 | 0.8842 | 0.8646 | 751 | 0.9710 | 0.9178 | 0.9437 | 73 | 0.5806 | 0.5701 | 0.5753 | 221 | 0.4898 | 0.5333 | 0.5106 | 45 | 0.7089 | 0.7906 | 0.7475 | 191 | 0.7514 | 0.7568 | 0.7541 | 0.9656 |
| 0.0025 | 28 | 29708 | 0.2957 | 0.7335 | 0.7622 | 0.7475 | 0.9651 | 0.7060 | 0.7231 | 0.7145 | 372 | 0.3077 | 0.4286 | 0.3582 | 28 | 0.8459 | 0.8429 | 0.8444 | 840 | 0.5069 | 0.4899 | 0.4983 | 149 | 0.6438 | 0.5341 | 0.5839 | 88 | 0.6838 | 0.7012 | 0.6924 | 589 | 0.8413 | 0.8895 | 0.8647 | 751 | 0.9552 | 0.8767 | 0.9143 | 73 | 0.4901 | 0.5611 | 0.5232 | 221 | 0.3818 | 0.4667 | 0.42 | 45 | 0.6580 | 0.7958 | 0.7204 | 191 | 0.7335 | 0.7622 | 0.7475 | 0.9651 |
| 0.0023 | 29 | 30769 | 0.3049 | 0.7455 | 0.7544 | 0.7499 | 0.9654 | 0.6997 | 0.7392 | 0.7190 | 372 | 0.3182 | 0.5 | 0.3889 | 28 | 0.8483 | 0.8119 | 0.8297 | 840 | 0.5630 | 0.5101 | 0.5352 | 149 | 0.6579 | 0.5682 | 0.6098 | 88 | 0.6791 | 0.7114 | 0.6949 | 589 | 0.8583 | 0.8628 | 0.8606 | 751 | 0.9853 | 0.9178 | 0.9504 | 73 | 0.5234 | 0.5566 | 0.5395 | 221 | 0.4565 | 0.4667 | 0.4615 | 45 | 0.7009 | 0.7853 | 0.7407 | 191 | 0.7455 | 0.7544 | 0.7499 | 0.9654 |
| 0.0018 | 30 | 31830 | 0.3042 | 0.7415 | 0.7679 | 0.7544 | 0.9654 | 0.6935 | 0.7419 | 0.7169 | 372 | 0.3333 | 0.5 | 0.4 | 28 | 0.8563 | 0.8226 | 0.8391 | 840 | 0.5878 | 0.5168 | 0.55 | 149 | 0.6582 | 0.5909 | 0.6228 | 88 | 0.6677 | 0.7470 | 0.7051 | 589 | 0.8544 | 0.8828 | 0.8684 | 751 | 0.9710 | 0.9178 | 0.9437 | 73 | 0.5300 | 0.5204 | 0.5251 | 221 | 0.4375 | 0.4667 | 0.4516 | 45 | 0.6417 | 0.8063 | 0.7146 | 191 | 0.7415 | 0.7679 | 0.7544 | 0.9654 |
| 0.0017 | 31 | 32891 | 0.3071 | 0.7540 | 0.7481 | 0.7510 | 0.9660 | 0.7083 | 0.7312 | 0.7196 | 372 | 0.4054 | 0.5357 | 0.4615 | 28 | 0.8552 | 0.8226 | 0.8386 | 840 | 0.6311 | 0.4362 | 0.5159 | 149 | 0.6220 | 0.5795 | 0.6 | 88 | 0.6734 | 0.6757 | 0.6746 | 589 | 0.8626 | 0.8775 | 0.8700 | 751 | 0.9855 | 0.9315 | 0.9577 | 73 | 0.5307 | 0.5475 | 0.5390 | 221 | 0.3830 | 0.4 | 0.3913 | 45 | 0.7019 | 0.7644 | 0.7318 | 191 | 0.7540 | 0.7481 | 0.7510 | 0.9660 |
| 0.0018 | 32 | 33952 | 0.3190 | 0.7499 | 0.7553 | 0.7526 | 0.9656 | 0.7182 | 0.7124 | 0.7152 | 372 | 0.3333 | 0.5357 | 0.4110 | 28 | 0.8586 | 0.7952 | 0.8257 | 840 | 0.6116 | 0.4966 | 0.5481 | 149 | 0.6463 | 0.6023 | 0.6235 | 88 | 0.6805 | 0.6978 | 0.6890 | 589 | 0.8360 | 0.8895 | 0.8619 | 751 | 0.9855 | 0.9315 | 0.9577 | 73 | 0.5633 | 0.5837 | 0.5733 | 221 | 0.5106 | 0.5333 | 0.5217 | 45 | 0.6711 | 0.8010 | 0.7303 | 191 | 0.7499 | 0.7553 | 0.7526 | 0.9656 |
| 0.0018 | 33 | 35013 | 0.3094 | 0.7460 | 0.7774 | 0.7614 | 0.9665 | 0.7147 | 0.7473 | 0.7306 | 372 | 0.3659 | 0.5357 | 0.4348 | 28 | 0.8556 | 0.8393 | 0.8474 | 840 | 0.6273 | 0.4631 | 0.5328 | 149 | 0.6506 | 0.6136 | 0.6316 | 88 | 0.6787 | 0.7351 | 0.7058 | 589 | 0.8344 | 0.8988 | 0.8654 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.5702 | 0.6063 | 0.5877 | 221 | 0.3036 | 0.3778 | 0.3366 | 45 | 0.6567 | 0.8010 | 0.7217 | 191 | 0.7460 | 0.7774 | 0.7614 | 0.9665 |
| 0.0015 | 34 | 36074 | 0.3091 | 0.7441 | 0.7759 | 0.7597 | 0.9665 | 0.7113 | 0.7285 | 0.7198 | 372 | 0.3404 | 0.5714 | 0.4267 | 28 | 0.8266 | 0.8512 | 0.8387 | 840 | 0.5405 | 0.5369 | 0.5387 | 149 | 0.6707 | 0.625 | 0.6471 | 88 | 0.6856 | 0.7182 | 0.7015 | 589 | 0.8517 | 0.8868 | 0.8689 | 751 | 1.0 | 0.9452 | 0.9718 | 73 | 0.5752 | 0.5882 | 0.5817 | 221 | 0.3878 | 0.4222 | 0.4043 | 45 | 0.6830 | 0.8010 | 0.7373 | 191 | 0.7441 | 0.7759 | 0.7597 | 0.9665 |
| 0.0015 | 35 | 37135 | 0.3185 | 0.7487 | 0.7619 | 0.7552 | 0.9660 | 0.6982 | 0.7339 | 0.7156 | 372 | 0.3415 | 0.5 | 0.4058 | 28 | 0.8685 | 0.8179 | 0.8424 | 840 | 0.5504 | 0.4765 | 0.5108 | 149 | 0.6353 | 0.6136 | 0.6243 | 88 | 0.6636 | 0.7267 | 0.6937 | 589 | 0.8654 | 0.8815 | 0.8734 | 751 | 1.0 | 0.9315 | 0.9645 | 73 | 0.55 | 0.5475 | 0.5488 | 221 | 0.3673 | 0.4 | 0.3830 | 45 | 0.6937 | 0.8063 | 0.7458 | 191 | 0.7487 | 0.7619 | 0.7552 | 0.9660 |
| 0.0015 | 36 | 38196 | 0.3203 | 0.7438 | 0.7649 | 0.7542 | 0.9660 | 0.6961 | 0.7204 | 0.7081 | 372 | 0.3659 | 0.5357 | 0.4348 | 28 | 0.8617 | 0.8381 | 0.8497 | 840 | 0.5203 | 0.5168 | 0.5185 | 149 | 0.6667 | 0.5909 | 0.6265 | 88 | 0.6710 | 0.7063 | 0.6882 | 589 | 0.8495 | 0.8868 | 0.8678 | 751 | 0.9710 | 0.9178 | 0.9437 | 73 | 0.5561 | 0.5385 | 0.5471 | 221 | 0.42 | 0.4667 | 0.4421 | 45 | 0.6568 | 0.8115 | 0.7260 | 191 | 0.7438 | 0.7649 | 0.7542 | 0.9660 |
| 0.0013 | 37 | 39257 | 0.3298 | 0.7315 | 0.7732 | 0.7518 | 0.9656 | 0.6915 | 0.7231 | 0.7070 | 372 | 0.3333 | 0.5714 | 0.4211 | 28 | 0.8654 | 0.8190 | 0.8416 | 840 | 0.4793 | 0.5436 | 0.5094 | 149 | 0.6582 | 0.5909 | 0.6228 | 88 | 0.6656 | 0.7267 | 0.6948 | 589 | 0.8289 | 0.9028 | 0.8642 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.5574 | 0.5928 | 0.5746 | 221 | 0.4043 | 0.4222 | 0.4130 | 45 | 0.6408 | 0.8220 | 0.7202 | 191 | 0.7315 | 0.7732 | 0.7518 | 0.9656 |
| 0.0012 | 38 | 40318 | 0.3311 | 0.7533 | 0.7610 | 0.7571 | 0.9664 | 0.7060 | 0.7231 | 0.7145 | 372 | 0.3571 | 0.5357 | 0.4286 | 28 | 0.8613 | 0.8357 | 0.8483 | 840 | 0.6339 | 0.4765 | 0.5441 | 149 | 0.6543 | 0.6023 | 0.6272 | 88 | 0.6528 | 0.7182 | 0.6839 | 589 | 0.8424 | 0.8828 | 0.8622 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.6031 | 0.5294 | 0.5639 | 221 | 0.4130 | 0.4222 | 0.4176 | 45 | 0.7122 | 0.7644 | 0.7374 | 191 | 0.7533 | 0.7610 | 0.7571 | 0.9664 |
| 0.0012 | 39 | 41379 | 0.3328 | 0.7444 | 0.7553 | 0.7498 | 0.9657 | 0.6818 | 0.7258 | 0.7031 | 372 | 0.3478 | 0.5714 | 0.4324 | 28 | 0.8561 | 0.8143 | 0.8347 | 840 | 0.6055 | 0.4430 | 0.5116 | 149 | 0.6582 | 0.5909 | 0.6228 | 88 | 0.6715 | 0.7046 | 0.6877 | 589 | 0.8461 | 0.8708 | 0.8583 | 751 | 0.9706 | 0.9041 | 0.9362 | 73 | 0.5665 | 0.5973 | 0.5815 | 221 | 0.4082 | 0.4444 | 0.4255 | 45 | 0.6770 | 0.8010 | 0.7338 | 191 | 0.7444 | 0.7553 | 0.7498 | 0.9657 |
| 0.0014 | 40 | 42440 | 0.3415 | 0.7421 | 0.7437 | 0.7429 | 0.9641 | 0.6931 | 0.7043 | 0.6987 | 372 | 0.3488 | 0.5357 | 0.4225 | 28 | 0.8422 | 0.8262 | 0.8341 | 840 | 0.6190 | 0.4362 | 0.5118 | 149 | 0.6622 | 0.5568 | 0.6049 | 88 | 0.6888 | 0.6350 | 0.6608 | 589 | 0.8175 | 0.8828 | 0.8489 | 751 | 1.0 | 0.9178 | 0.9571 | 73 | 0.5584 | 0.5837 | 0.5708 | 221 | 0.4043 | 0.4222 | 0.4130 | 45 | 0.6580 | 0.7958 | 0.7204 | 191 | 0.7421 | 0.7437 | 0.7429 | 0.9641 |
| 0.0013 | 41 | 43501 | 0.3401 | 0.7501 | 0.7487 | 0.7494 | 0.9651 | 0.6915 | 0.7231 | 0.7070 | 372 | 0.3421 | 0.4643 | 0.3939 | 28 | 0.8545 | 0.8179 | 0.8358 | 840 | 0.6346 | 0.4430 | 0.5217 | 149 | 0.6812 | 0.5341 | 0.5987 | 88 | 0.6728 | 0.6808 | 0.6768 | 589 | 0.8380 | 0.8748 | 0.8560 | 751 | 0.9710 | 0.9178 | 0.9437 | 73 | 0.5860 | 0.5701 | 0.5780 | 221 | 0.4423 | 0.5111 | 0.4742 | 45 | 0.6787 | 0.7853 | 0.7282 | 191 | 0.7501 | 0.7487 | 0.7494 | 0.9651 |
| 0.0011 | 42 | 44562 | 0.3468 | 0.7426 | 0.7687 | 0.7554 | 0.9650 | 0.6965 | 0.7527 | 0.7235 | 372 | 0.3488 | 0.5357 | 0.4225 | 28 | 0.8667 | 0.8202 | 0.8428 | 840 | 0.6408 | 0.4430 | 0.5238 | 149 | 0.6709 | 0.6023 | 0.6347 | 88 | 0.6902 | 0.7148 | 0.7023 | 589 | 0.8404 | 0.8975 | 0.8680 | 751 | 0.9444 | 0.9315 | 0.9379 | 73 | 0.5191 | 0.6154 | 0.5631 | 221 | 0.3469 | 0.3778 | 0.3617 | 45 | 0.6210 | 0.8063 | 0.7016 | 191 | 0.7426 | 0.7687 | 0.7554 | 0.9650 |
| 0.0015 | 43 | 45623 | 0.3440 | 0.7566 | 0.7422 | 0.7493 | 0.9648 | 0.6937 | 0.7366 | 0.7145 | 372 | 0.3846 | 0.5357 | 0.4478 | 28 | 0.8608 | 0.8095 | 0.8344 | 840 | 0.6082 | 0.3960 | 0.4797 | 149 | 0.7 | 0.5568 | 0.6203 | 88 | 0.6766 | 0.6570 | 0.6667 | 589 | 0.8317 | 0.8881 | 0.8590 | 751 | 0.9701 | 0.8904 | 0.9286 | 73 | 0.6224 | 0.5520 | 0.5851 | 221 | 0.3913 | 0.4 | 0.3956 | 45 | 0.7081 | 0.7749 | 0.74 | 191 | 0.7566 | 0.7422 | 0.7493 | 0.9648 |
| 0.0011 | 44 | 46684 | 0.3354 | 0.7565 | 0.7640 | 0.7602 | 0.9664 | 0.7062 | 0.7366 | 0.7211 | 372 | 0.3659 | 0.5357 | 0.4348 | 28 | 0.8483 | 0.8452 | 0.8468 | 840 | 0.6095 | 0.4295 | 0.5039 | 149 | 0.6883 | 0.6023 | 0.6424 | 88 | 0.6880 | 0.6740 | 0.6810 | 589 | 0.8517 | 0.8948 | 0.8727 | 751 | 0.9710 | 0.9178 | 0.9437 | 73 | 0.6238 | 0.5928 | 0.6079 | 221 | 0.3830 | 0.4 | 0.3913 | 45 | 0.65 | 0.8168 | 0.7239 | 191 | 0.7565 | 0.7640 | 0.7602 | 0.9664 |
| 0.0011 | 45 | 47745 | 0.3347 | 0.7485 | 0.7622 | 0.7553 | 0.9655 | 0.7088 | 0.7392 | 0.7237 | 372 | 0.3636 | 0.5714 | 0.4444 | 28 | 0.8603 | 0.8286 | 0.8441 | 840 | 0.5882 | 0.4698 | 0.5224 | 149 | 0.6023 | 0.6023 | 0.6023 | 88 | 0.6770 | 0.6689 | 0.6729 | 589 | 0.8417 | 0.8921 | 0.8662 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.6037 | 0.5928 | 0.5982 | 221 | 0.4583 | 0.4889 | 0.4731 | 45 | 0.6275 | 0.8115 | 0.7078 | 191 | 0.7485 | 0.7622 | 0.7553 | 0.9655 |
| 0.0011 | 46 | 48806 | 0.3421 | 0.7481 | 0.7640 | 0.7559 | 0.9657 | 0.7261 | 0.7339 | 0.7299 | 372 | 0.3171 | 0.4643 | 0.3768 | 28 | 0.8570 | 0.8202 | 0.8382 | 840 | 0.5691 | 0.4698 | 0.5147 | 149 | 0.6429 | 0.6136 | 0.6279 | 88 | 0.6769 | 0.7114 | 0.6937 | 589 | 0.8311 | 0.8908 | 0.8599 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.5714 | 0.5611 | 0.5662 | 221 | 0.5 | 0.5556 | 0.5263 | 45 | 0.6638 | 0.7958 | 0.7238 | 191 | 0.7481 | 0.7640 | 0.7559 | 0.9657 |
| 0.0009 | 47 | 49867 | 0.3487 | 0.7496 | 0.7604 | 0.7550 | 0.9656 | 0.7158 | 0.7043 | 0.7100 | 372 | 0.3409 | 0.5357 | 0.4167 | 28 | 0.86 | 0.8190 | 0.8390 | 840 | 0.5496 | 0.4832 | 0.5143 | 149 | 0.7162 | 0.6023 | 0.6543 | 88 | 0.6745 | 0.7284 | 0.7004 | 589 | 0.8346 | 0.8802 | 0.8568 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.5566 | 0.5339 | 0.5450 | 221 | 0.5349 | 0.5111 | 0.5227 | 45 | 0.6828 | 0.8115 | 0.7416 | 191 | 0.7496 | 0.7604 | 0.7550 | 0.9656 |
| 0.0009 | 48 | 50928 | 0.3470 | 0.7414 | 0.7649 | 0.7529 | 0.9651 | 0.7092 | 0.7473 | 0.7277 | 372 | 0.3333 | 0.5357 | 0.4110 | 28 | 0.8541 | 0.8226 | 0.8381 | 840 | 0.5847 | 0.4631 | 0.5169 | 149 | 0.6835 | 0.6136 | 0.6467 | 88 | 0.6801 | 0.7148 | 0.6970 | 589 | 0.8319 | 0.8895 | 0.8597 | 751 | 0.9571 | 0.9178 | 0.9371 | 73 | 0.5307 | 0.5475 | 0.5390 | 221 | 0.4583 | 0.4889 | 0.4731 | 45 | 0.6364 | 0.8063 | 0.7113 | 191 | 0.7414 | 0.7649 | 0.7529 | 0.9651 |
| 0.0011 | 49 | 51989 | 0.3389 | 0.7435 | 0.7664 | 0.7547 | 0.9659 | 0.6957 | 0.7312 | 0.7130 | 372 | 0.3590 | 0.5 | 0.4179 | 28 | 0.8561 | 0.8286 | 0.8421 | 840 | 0.6636 | 0.4899 | 0.5637 | 149 | 0.6136 | 0.6136 | 0.6136 | 88 | 0.6732 | 0.6995 | 0.6861 | 589 | 0.8251 | 0.8921 | 0.8573 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.5746 | 0.5928 | 0.5835 | 221 | 0.4348 | 0.4444 | 0.4396 | 45 | 0.6390 | 0.8063 | 0.7130 | 191 | 0.7435 | 0.7664 | 0.7547 | 0.9659 |
| 0.0009 | 50 | 53050 | 0.3557 | 0.7490 | 0.7640 | 0.7564 | 0.9659 | 0.6948 | 0.6855 | 0.6901 | 372 | 0.3947 | 0.5357 | 0.4545 | 28 | 0.8584 | 0.8298 | 0.8438 | 840 | 0.6455 | 0.4765 | 0.5483 | 149 | 0.6933 | 0.5909 | 0.6380 | 88 | 0.6745 | 0.7317 | 0.7020 | 589 | 0.8296 | 0.8948 | 0.8610 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.6082 | 0.5339 | 0.5687 | 221 | 0.4043 | 0.4222 | 0.4130 | 45 | 0.6270 | 0.8272 | 0.7133 | 191 | 0.7490 | 0.7640 | 0.7564 | 0.9659 |
| 0.0008 | 51 | 54111 | 0.3492 | 0.7516 | 0.7601 | 0.7558 | 0.9662 | 0.7104 | 0.6989 | 0.7046 | 372 | 0.3714 | 0.4643 | 0.4127 | 28 | 0.8545 | 0.8321 | 0.8432 | 840 | 0.6496 | 0.5101 | 0.5714 | 149 | 0.625 | 0.5682 | 0.5952 | 88 | 0.6722 | 0.6893 | 0.6806 | 589 | 0.8413 | 0.8895 | 0.8647 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.5611 | 0.5611 | 0.5611 | 221 | 0.4792 | 0.5111 | 0.4946 | 45 | 0.6724 | 0.8168 | 0.7376 | 191 | 0.7516 | 0.7601 | 0.7558 | 0.9662 |
| 0.0008 | 52 | 55172 | 0.3432 | 0.7526 | 0.7625 | 0.7575 | 0.9661 | 0.7044 | 0.7366 | 0.7201 | 372 | 0.3571 | 0.5357 | 0.4286 | 28 | 0.8610 | 0.8262 | 0.8433 | 840 | 0.6140 | 0.4698 | 0.5323 | 149 | 0.6667 | 0.5909 | 0.6265 | 88 | 0.6766 | 0.6927 | 0.6846 | 589 | 0.8403 | 0.8895 | 0.8642 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.5849 | 0.5611 | 0.5727 | 221 | 0.46 | 0.5111 | 0.4842 | 45 | 0.6681 | 0.8115 | 0.7329 | 191 | 0.7526 | 0.7625 | 0.7575 | 0.9661 |
| **0.0006** | **53** | **56233** | **0.3565** | **0.7615** | **0.7747** | **0.7681** | **0.9672** | **0.7305** | **0.7285** | **0.7295** | **372** | **0.3721** | **0.5714** | **0.4507** | **28** | **0.8679** | **0.8369** | **0.8521** | **840** | **0.6545** | **0.4832** | **0.5560** | **149** | **0.6625** | **0.6023** | **0.6310** | **88** | **0.6761** | **0.7267** | **0.7005** | **589** | **0.8255** | **0.9068** | **0.8642** | **751** | **1.0** | **0.9589** | **0.9790** | **73** | **0.6030** | **0.5430** | **0.5714** | **221** | **0.5682** | **0.5556** | **0.5618** | **45** | **0.7** | **0.8063** | **0.7494** | **191** | **0.7615** | **0.7747** | **0.7681** | **0.9672** |
| 0.0008 | 54 | 57294 | 0.3480 | 0.7590 | 0.7631 | 0.7610 | 0.9668 | 0.7452 | 0.7312 | 0.7381 | 372 | 0.3409 | 0.5357 | 0.4167 | 28 | 0.8589 | 0.8190 | 0.8385 | 840 | 0.5935 | 0.4899 | 0.5368 | 149 | 0.7027 | 0.5909 | 0.6420 | 88 | 0.6924 | 0.6842 | 0.6883 | 589 | 0.8432 | 0.8948 | 0.8682 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.5856 | 0.5882 | 0.5869 | 221 | 0.5102 | 0.5556 | 0.5319 | 45 | 0.6513 | 0.8115 | 0.7226 | 191 | 0.7590 | 0.7631 | 0.7610 | 0.9668 |
| 0.0008 | 55 | 58355 | 0.3568 | 0.7601 | 0.7622 | 0.7612 | 0.9663 | 0.7228 | 0.7151 | 0.7189 | 372 | 0.3571 | 0.5357 | 0.4286 | 28 | 0.8429 | 0.8429 | 0.8429 | 840 | 0.6634 | 0.4497 | 0.536 | 149 | 0.7 | 0.5568 | 0.6203 | 88 | 0.6828 | 0.7165 | 0.6993 | 589 | 0.8655 | 0.8828 | 0.8741 | 751 | 0.9853 | 0.9178 | 0.9504 | 73 | 0.5909 | 0.5294 | 0.5585 | 221 | 0.5106 | 0.5333 | 0.5217 | 45 | 0.6429 | 0.8010 | 0.7133 | 191 | 0.7601 | 0.7622 | 0.7612 | 0.9663 |
| 0.0009 | 56 | 59416 | 0.3498 | 0.7542 | 0.7580 | 0.7561 | 0.9661 | 0.7178 | 0.7043 | 0.7110 | 372 | 0.3409 | 0.5357 | 0.4167 | 28 | 0.8379 | 0.8429 | 0.8404 | 840 | 0.6634 | 0.4497 | 0.536 | 149 | 0.6322 | 0.625 | 0.6286 | 88 | 0.6895 | 0.6825 | 0.6860 | 589 | 0.8513 | 0.8842 | 0.8674 | 751 | 0.9577 | 0.9315 | 0.9444 | 73 | 0.5613 | 0.5385 | 0.5497 | 221 | 0.5111 | 0.5111 | 0.5111 | 45 | 0.6667 | 0.8063 | 0.7299 | 191 | 0.7542 | 0.7580 | 0.7561 | 0.9661 |
| 0.0007 | 57 | 60477 | 0.3486 | 0.7479 | 0.7711 | 0.7593 | 0.9663 | 0.7143 | 0.7392 | 0.7266 | 372 | 0.3571 | 0.5357 | 0.4286 | 28 | 0.8417 | 0.8417 | 0.8417 | 840 | 0.5923 | 0.5168 | 0.5520 | 149 | 0.6667 | 0.6136 | 0.6391 | 88 | 0.6720 | 0.7165 | 0.6935 | 589 | 0.8562 | 0.8802 | 0.8680 | 751 | 0.9714 | 0.9315 | 0.9510 | 73 | 0.5670 | 0.5747 | 0.5708 | 221 | 0.4583 | 0.4889 | 0.4731 | 45 | 0.6623 | 0.8010 | 0.7251 | 191 | 0.7479 | 0.7711 | 0.7593 | 0.9663 |
| 0.0007 | 58 | 61538 | 0.3497 | 0.7539 | 0.7744 | 0.7640 | 0.9667 | 0.7143 | 0.7392 | 0.7266 | 372 | 0.3659 | 0.5357 | 0.4348 | 28 | 0.8449 | 0.8429 | 0.8439 | 840 | 0.6429 | 0.4832 | 0.5517 | 149 | 0.6667 | 0.5909 | 0.6265 | 88 | 0.6708 | 0.7267 | 0.6976 | 589 | 0.8499 | 0.8975 | 0.8731 | 751 | 0.9714 | 0.9315 | 0.9510 | 73 | 0.6108 | 0.5611 | 0.5849 | 221 | 0.5 | 0.4889 | 0.4944 | 45 | 0.6525 | 0.8063 | 0.7213 | 191 | 0.7539 | 0.7744 | 0.7640 | 0.9667 |
| 0.0008 | 59 | 62599 | 0.3581 | 0.7474 | 0.7762 | 0.7615 | 0.9662 | 0.7183 | 0.7473 | 0.7325 | 372 | 0.3409 | 0.5357 | 0.4167 | 28 | 0.8439 | 0.8429 | 0.8434 | 840 | 0.5467 | 0.5503 | 0.5485 | 149 | 0.6709 | 0.6023 | 0.6347 | 88 | 0.6693 | 0.7250 | 0.6960 | 589 | 0.8454 | 0.8881 | 0.8662 | 751 | 0.9714 | 0.9315 | 0.9510 | 73 | 0.5961 | 0.5475 | 0.5708 | 221 | 0.5 | 0.5333 | 0.5161 | 45 | 0.6769 | 0.8115 | 0.7381 | 191 | 0.7474 | 0.7762 | 0.7615 | 0.9662 |
| 0.0007 | 60 | 63660 | 0.3636 | 0.7494 | 0.7676 | 0.7584 | 0.9662 | 0.7016 | 0.7204 | 0.7109 | 372 | 0.3488 | 0.5357 | 0.4225 | 28 | 0.8489 | 0.8357 | 0.8422 | 840 | 0.6 | 0.4832 | 0.5353 | 149 | 0.6538 | 0.5795 | 0.6145 | 88 | 0.6828 | 0.7199 | 0.7008 | 589 | 0.8476 | 0.8815 | 0.8642 | 751 | 0.9714 | 0.9315 | 0.9510 | 73 | 0.5579 | 0.5882 | 0.5727 | 221 | 0.4762 | 0.4444 | 0.4598 | 45 | 0.6797 | 0.8220 | 0.7441 | 191 | 0.7494 | 0.7676 | 0.7584 | 0.9662 |
| 0.0008 | 61 | 64721 | 0.3646 | 0.7538 | 0.7574 | 0.7556 | 0.9660 | 0.6854 | 0.7204 | 0.7025 | 372 | 0.3659 | 0.5357 | 0.4348 | 28 | 0.8573 | 0.8369 | 0.8470 | 840 | 0.6306 | 0.4698 | 0.5385 | 149 | 0.6667 | 0.5909 | 0.6265 | 88 | 0.6896 | 0.6978 | 0.6937 | 589 | 0.8495 | 0.8722 | 0.8607 | 751 | 0.9714 | 0.9315 | 0.9510 | 73 | 0.5728 | 0.5520 | 0.5622 | 221 | 0.375 | 0.4 | 0.3871 | 45 | 0.6830 | 0.8010 | 0.7373 | 191 | 0.7538 | 0.7574 | 0.7556 | 0.9660 |
| 0.0006 | 62 | 65782 | 0.3697 | 0.7510 | 0.7460 | 0.7485 | 0.9651 | 0.6885 | 0.7070 | 0.6976 | 372 | 0.4286 | 0.5357 | 0.4762 | 28 | 0.8663 | 0.7869 | 0.8247 | 840 | 0.5902 | 0.4832 | 0.5314 | 149 | 0.6757 | 0.5682 | 0.6173 | 88 | 0.6667 | 0.6927 | 0.6794 | 589 | 0.8432 | 0.8948 | 0.8682 | 751 | 0.9851 | 0.9041 | 0.9429 | 73 | 0.5829 | 0.5566 | 0.5694 | 221 | 0.3673 | 0.4 | 0.3830 | 45 | 0.6995 | 0.7801 | 0.7376 | 191 | 0.7510 | 0.7460 | 0.7485 | 0.9651 |
| 0.0006 | 63 | 66843 | 0.3661 | 0.7504 | 0.7502 | 0.7503 | 0.9655 | 0.6909 | 0.6909 | 0.6909 | 372 | 0.4286 | 0.5357 | 0.4762 | 28 | 0.8571 | 0.8143 | 0.8352 | 840 | 0.5814 | 0.5034 | 0.5396 | 149 | 0.6582 | 0.5909 | 0.6228 | 88 | 0.7013 | 0.6655 | 0.6829 | 589 | 0.8348 | 0.8948 | 0.8638 | 751 | 0.9571 | 0.9178 | 0.9371 | 73 | 0.5570 | 0.5747 | 0.5657 | 221 | 0.3830 | 0.4 | 0.3913 | 45 | 0.6786 | 0.7958 | 0.7325 | 191 | 0.7504 | 0.7502 | 0.7503 | 0.9655 |
| 0.0006 | 64 | 67904 | 0.3711 | 0.7404 | 0.7628 | 0.7514 | 0.9656 | 0.6911 | 0.7097 | 0.7003 | 372 | 0.3784 | 0.5 | 0.4308 | 28 | 0.8455 | 0.8405 | 0.8430 | 840 | 0.6 | 0.5034 | 0.5474 | 149 | 0.65 | 0.5909 | 0.6190 | 88 | 0.6667 | 0.7029 | 0.6843 | 589 | 0.8350 | 0.8961 | 0.8645 | 751 | 0.9714 | 0.9315 | 0.9510 | 73 | 0.5673 | 0.5339 | 0.5501 | 221 | 0.2917 | 0.3111 | 0.3011 | 45 | 0.6568 | 0.8115 | 0.7260 | 191 | 0.7404 | 0.7628 | 0.7514 | 0.9656 |
| 0.0007 | 65 | 68965 | 0.3672 | 0.7377 | 0.7696 | 0.7533 | 0.9661 | 0.7005 | 0.7419 | 0.7206 | 372 | 0.3333 | 0.5357 | 0.4110 | 28 | 0.8433 | 0.8393 | 0.8413 | 840 | 0.5839 | 0.5369 | 0.5594 | 149 | 0.6506 | 0.6136 | 0.6316 | 88 | 0.6840 | 0.7131 | 0.6983 | 589 | 0.8412 | 0.8815 | 0.8609 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.5427 | 0.5747 | 0.5582 | 221 | 0.3019 | 0.3556 | 0.3265 | 45 | 0.6360 | 0.7958 | 0.7070 | 191 | 0.7377 | 0.7696 | 0.7533 | 0.9661 |
| 0.0005 | 66 | 70026 | 0.3768 | 0.7496 | 0.7520 | 0.7508 | 0.9657 | 0.6903 | 0.7070 | 0.6985 | 372 | 0.3415 | 0.5 | 0.4058 | 28 | 0.8454 | 0.8333 | 0.8393 | 840 | 0.6372 | 0.4832 | 0.5496 | 149 | 0.6795 | 0.6023 | 0.6386 | 88 | 0.6914 | 0.6655 | 0.6782 | 589 | 0.8483 | 0.8788 | 0.8633 | 751 | 0.9577 | 0.9315 | 0.9444 | 73 | 0.5714 | 0.5792 | 0.5753 | 221 | 0.3 | 0.3333 | 0.3158 | 45 | 0.6696 | 0.7958 | 0.7273 | 191 | 0.7496 | 0.7520 | 0.7508 | 0.9657 |
| 0.0007 | 67 | 71087 | 0.3682 | 0.7461 | 0.7664 | 0.7561 | 0.9656 | 0.7094 | 0.7285 | 0.7188 | 372 | 0.3409 | 0.5357 | 0.4167 | 28 | 0.8563 | 0.8369 | 0.8465 | 840 | 0.6290 | 0.5235 | 0.5714 | 149 | 0.6974 | 0.6023 | 0.6463 | 88 | 0.6935 | 0.6876 | 0.6905 | 589 | 0.8363 | 0.8842 | 0.8595 | 751 | 0.9437 | 0.9178 | 0.9306 | 73 | 0.5175 | 0.6018 | 0.5565 | 221 | 0.4694 | 0.5111 | 0.4894 | 45 | 0.6483 | 0.8010 | 0.7166 | 191 | 0.7461 | 0.7664 | 0.7561 | 0.9656 |
| 0.0005 | 68 | 72148 | 0.3815 | 0.7590 | 0.7416 | 0.7502 | 0.9654 | 0.7092 | 0.7016 | 0.7054 | 372 | 0.4054 | 0.5357 | 0.4615 | 28 | 0.8489 | 0.8095 | 0.8288 | 840 | 0.6796 | 0.4698 | 0.5556 | 149 | 0.6456 | 0.5795 | 0.6108 | 88 | 0.6801 | 0.6570 | 0.6684 | 589 | 0.8476 | 0.8815 | 0.8642 | 751 | 0.9571 | 0.9178 | 0.9371 | 73 | 0.615 | 0.5566 | 0.5843 | 221 | 0.4348 | 0.4444 | 0.4396 | 45 | 0.6759 | 0.7644 | 0.7174 | 191 | 0.7590 | 0.7416 | 0.7502 | 0.9654 |
| 0.0006 | 69 | 73209 | 0.3919 | 0.7494 | 0.7487 | 0.7491 | 0.9650 | 0.6888 | 0.6962 | 0.6925 | 372 | 0.3590 | 0.5 | 0.4179 | 28 | 0.8416 | 0.8095 | 0.8252 | 840 | 0.5865 | 0.5235 | 0.5532 | 149 | 0.6901 | 0.5568 | 0.6164 | 88 | 0.6950 | 0.6808 | 0.6878 | 589 | 0.8490 | 0.8908 | 0.8694 | 751 | 1.0 | 0.9041 | 0.9496 | 73 | 0.5662 | 0.5611 | 0.5636 | 221 | 0.3265 | 0.3556 | 0.3404 | 45 | 0.6881 | 0.7853 | 0.7335 | 191 | 0.7494 | 0.7487 | 0.7491 | 0.9650 |
| 0.0006 | 70 | 74270 | 0.3704 | 0.7587 | 0.7619 | 0.7603 | 0.9666 | 0.6891 | 0.7151 | 0.7018 | 372 | 0.3947 | 0.5357 | 0.4545 | 28 | 0.8376 | 0.8536 | 0.8455 | 840 | 0.6697 | 0.4899 | 0.5659 | 149 | 0.6420 | 0.5909 | 0.6154 | 88 | 0.7018 | 0.6791 | 0.6903 | 589 | 0.8491 | 0.8842 | 0.8663 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.6219 | 0.5656 | 0.5924 | 221 | 0.3913 | 0.4 | 0.3956 | 45 | 0.6802 | 0.7906 | 0.7312 | 191 | 0.7587 | 0.7619 | 0.7603 | 0.9666 |
| 0.0005 | 71 | 75331 | 0.3841 | 0.7501 | 0.7634 | 0.7567 | 0.9659 | 0.7005 | 0.6855 | 0.6929 | 372 | 0.4054 | 0.5357 | 0.4615 | 28 | 0.8531 | 0.8298 | 0.8413 | 840 | 0.6293 | 0.4899 | 0.5509 | 149 | 0.6410 | 0.5682 | 0.6024 | 88 | 0.6774 | 0.7165 | 0.6964 | 589 | 0.8264 | 0.9001 | 0.8617 | 751 | 0.9706 | 0.9041 | 0.9362 | 73 | 0.5882 | 0.5882 | 0.5882 | 221 | 0.4545 | 0.4444 | 0.4494 | 45 | 0.6864 | 0.7906 | 0.7348 | 191 | 0.7501 | 0.7634 | 0.7567 | 0.9659 |
| 0.0005 | 72 | 76392 | 0.3830 | 0.7605 | 0.7496 | 0.7550 | 0.9655 | 0.7036 | 0.6828 | 0.6930 | 372 | 0.3824 | 0.4643 | 0.4194 | 28 | 0.8618 | 0.8238 | 0.8424 | 840 | 0.6542 | 0.4698 | 0.5469 | 149 | 0.6582 | 0.5909 | 0.6228 | 88 | 0.6935 | 0.6723 | 0.6828 | 589 | 0.8476 | 0.8815 | 0.8642 | 751 | 0.9577 | 0.9315 | 0.9444 | 73 | 0.5830 | 0.5882 | 0.5856 | 221 | 0.4043 | 0.4222 | 0.4130 | 45 | 0.6892 | 0.8010 | 0.7409 | 191 | 0.7605 | 0.7496 | 0.7550 | 0.9655 |
| 0.0006 | 73 | 77453 | 0.3839 | 0.7611 | 0.7547 | 0.7579 | 0.9661 | 0.712 | 0.7177 | 0.7149 | 372 | 0.3429 | 0.4286 | 0.3810 | 28 | 0.8494 | 0.8393 | 0.8443 | 840 | 0.6542 | 0.4698 | 0.5469 | 149 | 0.6538 | 0.5795 | 0.6145 | 88 | 0.6877 | 0.6655 | 0.6764 | 589 | 0.8428 | 0.8921 | 0.8668 | 751 | 0.9710 | 0.9178 | 0.9437 | 73 | 0.6257 | 0.5294 | 0.5735 | 221 | 0.4468 | 0.4667 | 0.4565 | 45 | 0.6814 | 0.8063 | 0.7386 | 191 | 0.7611 | 0.7547 | 0.7579 | 0.9661 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
pietrolesci/t5v1_1-base-mnli | 1cc56642ced2f861390ae57d00dcd0cd703a204b | 2022-05-03T14:53:23.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | pietrolesci | null | pietrolesci/t5v1_1-base-mnli | 7 | null | transformers | 14,386 | ## Overview
T5-Base v1.1 model trained to generate hypotheses given a premise and a label. A usage sketch is shown first, followed by the settings used to train it.
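A minimal usage sketch, assuming the training template shown in the configuration below (`'premise: $premise $label hypothesis: '`). The example premise is illustrative, and passing the label as the string `entailment` rather than an integer id is an assumption:
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("pietrolesci/t5v1_1-base-mnli")
model = T5ForConditionalGeneration.from_pretrained("pietrolesci/t5v1_1-base-mnli")

# Build the prompt from the training template; premise and label are illustrative.
prompt = "premise: A man is playing a guitar on stage. entailment hypothesis: "
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Sampling settings mirror the generation block of the configuration below.
outputs = model.generate(
    input_ids,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    max_length=128,
    min_length=3,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```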
```yaml
Experiment configurations
├── datasets
│ └── mnli_train:
│ dataset_name: multi_nli
│ dataset_config_name: null
│ cache_dir: null
│ input_fields:
│ - premise
│ - hypothesis
│ target_field: label
│ train_subset_names: null
│ val_subset_names: validation_matched
│ test_subset_names: none
│ train_val_split: null
│ limit_train_samples: null
│ limit_val_samples: null
│ limit_test_samples: null
│ sampling_kwargs:
│ sampling_strategy: random
│ seed: 42
│ replace: false
│ align_labels_with_mapping: null
│ avoid_consistency_check: false
│ predict_label_mapping: null
│ mnli:
│ dataset_name: multi_nli
│ dataset_config_name: null
│ cache_dir: null
│ input_fields:
│ - premise
│ - hypothesis
│ target_field: label
│ train_subset_names: none
│ val_subset_names: none
│ test_subset_names: validation_mismatched
│ train_val_split: null
│ limit_train_samples: null
│ limit_val_samples: null
│ limit_test_samples: null
│ sampling_kwargs:
│ sampling_strategy: random
│ seed: 42
│ replace: false
│ align_labels_with_mapping: null
│ avoid_consistency_check: false
│ predict_label_mapping: null
│
├── data
│ └── _target_: src.task.nli.data.NLIGenerationData.from_config
│ main_dataset_name: null
│ use_additional_as_test: null
│ dataloader:
│ batch_size: 64
│ eval_batch_size: 100
│ num_workers: 16
│ pin_memory: true
│ drop_last: false
│ persistent_workers: false
│ shuffle: true
│ seed_dataloader: 42
│ replacement: false
│ processing:
│ preprocessing_num_workers: 16
│ preprocessing_batch_size: 1000
│ load_from_cache_file: true
│ padding: longest
│ truncation: longest_first
│ max_source_length: 128
│ max_target_length: 128
│ template: 'premise: $premise $label hypothesis: '
│ tokenizer:
│ _target_: transformers.AutoTokenizer.from_pretrained
│ pretrained_model_name_or_path: google/t5-v1_1-base
│ use_fast: true
│
├── task
│ └── optimizer:
│ name: Adafactor
│ lr: 0.001
│ weight_decay: 0.0
│ no_decay:
│ - bias
│ - LayerNorm.weight
│ decay_rate: -0.8
│ clip_threshold: 1.0
│ relative_step: false
│ scale_parameter: false
│ warmup_init: false
│ scheduler:
│ name: constant_schedule
│ model:
│ model_name_or_path: google/t5-v1_1-base
│ checkpoint_path: null
│ freeze: false
│ seed_init_weight: 42
│ _target_: src.task.nli.NLIGenerationTask.from_config
│ generation:
│ max_length: 128
│ min_length: 3
│ do_sample: true
│ early_stopping: false
│ num_beams: 1
│ temperature: 1.0
│ top_k: 50
│ top_p: 0.95
│ repetition_penalty: null
│ length_penalty: null
│ no_repeat_ngram_size: null
│ encoder_no_repeat_ngram_size: null
│ num_return_sequences: 1
│ max_time: null
│ max_new_tokens: null
│ decoder_start_token_id: null
│ use_cache: null
│ num_beam_groups: null
│ diversity_penalty: null
│
├── trainer
│ └── _target_: pytorch_lightning.Trainer
│ callbacks:
│ lr_monitor:
│ _target_: pytorch_lightning.callbacks.LearningRateMonitor
│ logging_interval: step
│ log_momentum: false
│ model_checkpoint:
│ _target_: pytorch_lightning.callbacks.ModelCheckpoint
│ dirpath: ./checkpoints/
│ filename: nli_generator_mnli-epoch={epoch:02d}-val_loss={val/aggregated_loss:.2f}
│ monitor: val/aggregated_loss
│ mode: min
│ verbose: false
│ save_last: true
│ save_top_k: 1
│ auto_insert_metric_name: false
│ save_on_train_epoch_end: false
│ rich_model_summary:
│ _target_: pytorch_lightning.callbacks.RichModelSummary
│ max_depth: 1
│ log_grad_norm:
│ _target_: src.core.callbacks.LogGradNorm
│ norm_type: 2
│ group_separator: /
│ only_total: true
│ on_step: true
│ on_epoch: false
│ prog_bar: true
│ log_generated_text:
│ _target_: src.core.callbacks.GenerateAndLogText
│ dirpath: ./generated_text
│ type: generated_text
│ pop_keys_after_logging: true
│ on_train: false
│ on_validation: false
│ on_test: true
│ log_to_wandb: true
│ wandb_log_dataset_sizes:
│ _target_: src.core.callbacks.WandbLogDatasetSizes
│ logger:
│ wandb:
│ _target_: pytorch_lightning.loggers.WandbLogger
│ project: nli_debiasing
│ entity: team_brushino
│ name: nli_generator_mnli
│ save_dir: ./
│ offline: false
│ log_model: false
│ group: mnli
│ job_type: generator
│ tags:
│ - nli_generator_mnli
│ - seed=42
│ - seed_dataloader=42
│ notes: nli_generator_mnli_time=02-24-53
│ enable_checkpointing: true
│ enable_progress_bar: true
│ enable_model_summary: true
│ gradient_clip_val: 0.0
│ gradient_clip_algorithm: null
│ accelerator: gpu
│ devices: auto
│ gpus: null
│ auto_select_gpus: true
│ accumulate_grad_batches: 1
│ max_epochs: 3
│ min_epochs: 1
│ max_steps: -1
│ min_steps: null
│ max_time: null
│ num_sanity_val_steps: 2
│ overfit_batches: 0.0
│ fast_dev_run: false
│ limit_train_batches: 1.0
│ limit_val_batches: 1.0
│ limit_test_batches: 1.0
│ profiler: null
│ detect_anomaly: false
│ deterministic: false
│ check_val_every_n_epoch: 1
│ val_check_interval: 0.1
│ log_every_n_steps: 10
│ move_metrics_to_cpu: false
│
└── training
└── run_val_before_fit: false
run_val_after_fit: false
run_test_before_fit: false
run_test_after_fit: true
lr: 0.001
seed: 42
show_batch: false
batch_size: 64
eval_batch_size: 100
num_workers: 16
pin_memory: true
drop_last: false
persistent_workers: false
shuffle: true
seed_dataloader: 42
ignore_warnings: true
experiment_name: nli_generator_mnli
``` |
laituan245/molt5-small-caption2smiles | 5c4d1d5b1d819185a8e43d77cec8e9ebf2ae5853 | 2022-05-03T18:08:09.000Z | [
"pytorch",
"t5",
"text2text-generation",
"arxiv:2204.11817",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | laituan245 | null | laituan245/molt5-small-caption2smiles | 7 | null | transformers | 14,387 | ---
license: apache-2.0
---
This model can be used to generate a SMILES string from an input caption.
## Example Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("laituan245/molt5-small-caption2smiles", model_max_length=512)
model = T5ForConditionalGeneration.from_pretrained('laituan245/molt5-small-caption2smiles')
input_text = 'The molecule is a monomethoxybenzene that is 2-methoxyphenol substituted by a hydroxymethyl group at position 4. It has a role as a plant metabolite. It is a member of guaiacols and a member of benzyl alcohols.'
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids, num_beams=5, max_length=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# The model will generate "COC1=C(C=CC(=C1)CCCO)O". The ground-truth is "COC1=C(C=CC(=C1)CO)O".
```
## Paper
For more information, please take a look at our paper.
Paper: [Translation between Molecules and Natural Language](https://arxiv.org/abs/2204.11817)
Authors: *Carl Edwards\*, Tuan Lai\*, Kevin Ros, Garrett Honke, Heng Ji*
|
praf-choub/bart-mofe-rl-xsum | 9d07a8534f28cd87b72cc7d303786f658a986dde | 2022-06-14T04:52:41.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:xsum",
"arxiv:2110.07166",
"transformers",
"summarization",
"license:bsd-3-clause",
"autotrain_compatible"
]
| summarization | false | praf-choub | null | praf-choub/bart-mofe-rl-xsum | 7 | null | transformers | 14,388 | ---
language: en
tags:
- summarization
license: bsd-3-clause
datasets:
- xsum
---
## Citation
```
@article{DBLP:journals/corr/abs-2110-07166,
author = {Prafulla Kumar Choubey and
Jesse Vig and
Wenhao Liu and
Nazneen Fatema Rajani},
title = {MoFE: Mixture of Factual Experts for Controlling Hallucinations in
Abstractive Summarization},
journal = {CoRR},
volume = {abs/2110.07166},
year = {2021},
url = {https://arxiv.org/abs/2110.07166},
eprinttype = {arXiv},
eprint = {2110.07166},
timestamp = {Fri, 22 Oct 2021 13:33:09 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2110-07166.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
mrm8488/data2vec-text-base-finetuned-sst2 | 56b5cba6cf53835f194601f285e874875cc76419 | 2022-05-03T21:52:23.000Z | [
"pytorch",
"tensorboard",
"data2vec-text",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | mrm8488 | null | mrm8488/data2vec-text-base-finetuned-sst2 | 7 | null | transformers | 14,389 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: data2vec-text-base-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9231651376146789
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# data2vec-text-base-finetuned-sst2
This model is a fine-tuned version of [facebook/data2vec-text-base](https://huggingface.co/facebook/data2vec-text-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3600
- Accuracy: 0.9232
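A minimal usage sketch for running the checkpoint as a sentiment classifier; the card itself does not include a usage example, so the pipeline call and the example sentence below are assumptions, and the label strings returned depend on the checkpoint's `id2label` mapping.
```python
from transformers import pipeline

# Sketch only: load the fine-tuned checkpoint through the text-classification pipeline.
classifier = pipeline(
    "text-classification",
    model="mrm8488/data2vec-text-base-finetuned-sst2",
)

# The label names (e.g. LABEL_0/LABEL_1 vs. negative/positive) follow the
# checkpoint's own id2label mapping.
print(classifier("A heartfelt, beautifully acted film."))
```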
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.1519343408010398e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2865 | 1.0 | 4210 | 0.2662 | 0.9128 |
| 0.2256 | 2.0 | 8420 | 0.3698 | 0.9002 |
| 0.1676 | 3.0 | 12630 | 0.3107 | 0.9186 |
| 0.1481 | 4.0 | 16840 | 0.3425 | 0.9186 |
| 0.1429 | 5.0 | 21050 | 0.3600 | 0.9232 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
chrisvinsen/xlsr-wav2vec2-base-commonvoice-demo-colab-4 | faf75890ac2d670314715047e7dc7a73a837814d | 2022-05-04T00:35:19.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
]
| automatic-speech-recognition | false | chrisvinsen | null | chrisvinsen/xlsr-wav2vec2-base-commonvoice-demo-colab-4 | 7 | null | transformers | 14,390 | Entry not found |
eastmountaincode/generate | 3f37fe0bd39e6e5a46528ea9ed5e786468b20519 | 2022-05-03T21:03:45.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | eastmountaincode | null | eastmountaincode/generate | 7 | null | transformers | 14,391 | Entry not found |
Lauler/sentiment-classifier | 8430d75ec3fe42eeaed6b7918b7c1b87a2a1a693 | 2022-05-03T23:28:00.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Lauler | null | Lauler/sentiment-classifier | 7 | null | transformers | 14,392 | ## Sentiment classifier
Sentiment classifier for Swedish, trained on the ScandiSent dataset. |
ml4pubmed/BioM-BERT-PubMed-PMC-Large_pub_section | 5b4ca9ab5528d34cbaffbc5b693f9f99782d4068 | 2022-05-04T00:50:46.000Z | [
"pytorch",
"electra",
"text-classification",
"en",
"dataset:pubmed",
"transformers"
]
| text-classification | false | ml4pubmed | null | ml4pubmed/BioM-BERT-PubMed-PMC-Large_pub_section | 7 | null | transformers | 14,393 | ---
language:
- en
datasets:
- pubmed
metrics:
- f1
pipeline_tag: text-classification
widget:
- text: "Many pathogenic processes and diseases are the result of an erroneous activation of the complement cascade and a number of inhibitors of complement have thus been examined for anti-inflammatory actions."
example_title: "BACKGROUND example"
- text: "A total of 192 MI patients and 140 control persons were included."
example_title: "METHODS example"
- text: "MI patients had 18 % higher plasma levels of MAp44 (IQR 11-25 %) as compared to the healthy control group (p < 0. 001.)"
example_title: "RESULTS example"
- text: "The finding that a brief CB group intervention delivered by real-world providers significantly reduced MDD onset relative to both brochure control and bibliotherapy is very encouraging, although effects on continuous outcome measures were small or nonsignificant and approximately half the magnitude of those found in efficacy research, potentially because the present sample reported lower initial depression."
example_title: "CONCLUSIONS example"
- text: "In order to understand and update the prevalence of myopia in Taiwan, a nationwide survey was performed in 1995."
example_title: "OBJECTIVE example"
---
# BioM-BERT-PubMed-PMC-Large_pub_section
- original model file name: textclassifer_BioM-BERT-PubMed-PMC-Large_pubmed_20k
- This is a fine-tuned checkpoint of `sultan/BioM-BERT-PubMed-PMC-Large` for document section text classification
- possible document section classes are: BACKGROUND, CONCLUSIONS, METHODS, OBJECTIVE, RESULTS (see the usage sketch below)
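A minimal usage sketch, reusing the METHODS widget example from the metadata above. The `text-classification` pipeline call is an assumption, and the exact label strings returned follow the checkpoint's own configuration.
```python
from transformers import pipeline

# Sketch: classify a sentence into its publication-section class.
section_classifier = pipeline(
    "text-classification",
    model="ml4pubmed/BioM-BERT-PubMed-PMC-Large_pub_section",
)

# Example sentence from the METHODS widget example; expected to be labelled METHODS.
print(section_classifier("A total of 192 MI patients and 140 control persons were included."))
```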
## metadata
### training_metrics
- date_run: Apr-23-2022_t-04
- huggingface_tag: sultan/BioM-BERT-PubMed-PMC-Large
### training_parameters
- date_run: Apr-23-2022_t-04
- huggingface_tag: sultan/BioM-BERT-PubMed-PMC-Large
|
IsekaiMeta/dapprf4 | 59dc342176101e0194c043f8dfec8ced902a6413 | 2022-05-04T02:53:09.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | IsekaiMeta | null | IsekaiMeta/dapprf4 | 7 | null | transformers | 14,394 | ---
tags:
- conversational
---
# dapprf4 |
learningdude/wav2vec2-base-sound2 | f86577c2acc89b4a19caa010f2d2a0ab0fcd88a7 | 2022-05-05T04:34:26.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| audio-classification | false | learningdude | null | learningdude/wav2vec2-base-sound2 | 7 | null | transformers | 14,395 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wav2vec2-base-sound2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-sound2
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5012
- Accuracy: 0.5357
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 2.0762 | 0.0714 |
| No log | 2.0 | 2 | 2.0638 | 0.1429 |
| No log | 3.0 | 3 | 2.0387 | 0.2143 |
| No log | 4.0 | 4 | 2.0124 | 0.2143 |
| No log | 5.0 | 5 | 1.9864 | 0.2143 |
| No log | 6.0 | 6 | 1.9609 | 0.2143 |
| No log | 7.0 | 7 | 1.9235 | 0.2143 |
| No log | 8.0 | 8 | 1.9379 | 0.2143 |
| No log | 9.0 | 9 | 1.8627 | 0.2857 |
| 1.9713 | 10.0 | 10 | 1.8277 | 0.3214 |
| 1.9713 | 11.0 | 11 | 1.7765 | 0.3571 |
| 1.9713 | 12.0 | 12 | 1.7204 | 0.5 |
| 1.9713 | 13.0 | 13 | 1.6956 | 0.5 |
| 1.9713 | 14.0 | 14 | 1.6602 | 0.5357 |
| 1.9713 | 15.0 | 15 | 1.6277 | 0.5714 |
| 1.9713 | 16.0 | 16 | 1.6053 | 0.5 |
| 1.9713 | 17.0 | 17 | 1.5825 | 0.5 |
| 1.9713 | 18.0 | 18 | 1.5656 | 0.4286 |
| 1.9713 | 19.0 | 19 | 1.5616 | 0.4643 |
| 1.6334 | 20.0 | 20 | 1.5613 | 0.4286 |
| 1.6334 | 21.0 | 21 | 1.5419 | 0.5 |
| 1.6334 | 22.0 | 22 | 1.5166 | 0.5357 |
| 1.6334 | 23.0 | 23 | 1.5088 | 0.5 |
| 1.6334 | 24.0 | 24 | 1.5052 | 0.5 |
| 1.6334 | 25.0 | 25 | 1.5012 | 0.5357 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 1.14.0
- Tokenizers 0.12.1
|
eastmountaincode/duneGenerationNoUser | 4dec2e4acd76e80845b0f656ab09e61625f7a923 | 2022-05-05T20:42:32.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | eastmountaincode | null | eastmountaincode/duneGenerationNoUser | 7 | null | transformers | 14,396 | Entry not found |
xingqiang/macbert-zh-address-match-finetuned | 6e19b5ee0da301ef3b37ecb5b32f5873430c2087 | 2022-05-06T08:48:00.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | false | xingqiang | null | xingqiang/macbert-zh-address-match-finetuned | 7 | null | transformers | 14,397 | Entry not found |
DioLiu/distilbert-base-uncased-finetuned-sst2-with-unfamiliar-words | 33a9dbc7c1595f282267e4b401cd16253331d7ec | 2022-05-06T07:34:54.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | DioLiu | null | DioLiu/distilbert-base-uncased-finetuned-sst2-with-unfamiliar-words | 7 | null | transformers | 14,398 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-sst2-with-unfamiliar-words
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2-with-unfamiliar-words
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0870
- Accuracy: 0.9866
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2917 | 1.0 | 975 | 0.0703 | 0.9778 |
| 0.063 | 2.0 | 1950 | 0.0815 | 0.9821 |
| 0.0233 | 3.0 | 2925 | 0.0680 | 0.9866 |
| 0.0134 | 4.0 | 3900 | 0.0817 | 0.9866 |
| 0.0054 | 5.0 | 4875 | 0.0870 | 0.9866 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
EAST/autotrain-maysix-828926405 | 78db205d25e32617202947fd8827a0111b39f1a1 | 2022-05-06T07:13:15.000Z | [
"pytorch",
"bert",
"text-classification",
"zh",
"dataset:EAST/autotrain-data-maysix",
"transformers",
"autotrain",
"co2_eq_emissions"
]
| text-classification | false | EAST | null | EAST/autotrain-maysix-828926405 | 7 | null | transformers | 14,399 | ---
tags: autotrain
language: zh
widget:
- text: "I love AutoTrain 🤗"
datasets:
- EAST/autotrain-data-maysix
co2_eq_emissions: 0.00258669198292644
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 828926405
- CO2 Emissions (in grams): 0.00258669198292644
## Validation Metrics
- Loss: 0.1797131597995758
- Accuracy: 0.9318181818181818
- Precision: 0.9047619047619048
- Recall: 0.95
- AUC: 0.9875
- F1: 0.9268292682926829
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/EAST/autotrain-maysix-828926405
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("EAST/autotrain-maysix-828926405", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("EAST/autotrain-maysix-828926405", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |