modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme
---|---|---|---|---|---|---|---|---|---|---|---|---|---
RaghuramKol/distilbert-base-uncased-finetuned-emotion | 1f124f372ea0c9d60f816da702877a2c2e4ba209 | 2022-03-15T19:56:43.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | RaghuramKol | null | RaghuramKol/distilbert-base-uncased-finetuned-emotion | 12 | null | transformers | 10,700 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.927
- name: F1
type: f1
value: 0.9271888946173477
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2218
- Accuracy: 0.927
- F1: 0.9272
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
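For readers unfamiliar with how such a list maps onto the `transformers` Trainer API, the sketch below shows an equivalent `TrainingArguments` object. This is an illustration only, not the card author's actual training script; the `output_dir` name is hypothetical, and the Adam betas and epsilon listed above are the Trainer defaults.
```python
from transformers import TrainingArguments

# Hypothetical output directory; the other values mirror the hyperparameter list above.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```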
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8487 | 1.0 | 250 | 0.3274 | 0.906 | 0.9030 |
| 0.2595 | 2.0 | 500 | 0.2218 | 0.927 | 0.9272 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
mikeadimech/bart-large-cnn-qmsum-meeting-summarization | 989963e829b7f1e76bec83205a0a1d7f588c80e1 | 2022-03-18T19:00:43.000Z | [
"pytorch",
"bart",
"text2text-generation",
"dataset:yawnick/QMSum",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | mikeadimech | null | mikeadimech/bart-large-cnn-qmsum-meeting-summarization | 12 | null | transformers | 10,701 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-qmsum-meeting-summarization
results: []
datasets:
- yawnick/QMSum
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-qmsum-meeting-summarization
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.7578
- Rouge1: 37.9431
- Rouge2: 10.6366
- Rougel: 25.5782
- Rougelsum: 33.0209
- Gen Len: 72.7714
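A minimal usage sketch with the standard `transformers` summarization pipeline; the short meeting transcript below is invented for illustration, and the generation parameters are not taken from the card.
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="mikeadimech/bart-large-cnn-qmsum-meeting-summarization")

# Invented toy transcript; real QMSum inputs are much longer meeting transcripts.
transcript = (
    "Project manager: Let's review the remote control design. "
    "Industrial designer: The casing should use recycled plastic. "
    "Marketing: Users asked for fewer buttons and a backlit screen."
)
print(summarizer(transcript, max_length=60, min_length=10)[0]["summary_text"])
```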
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 500
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
cb2-kai/finetuning-sentiment-model-3000-samples | 978f74804799a8a02dcbfc113279eb9a709edcd9 | 2022-03-21T18:34:27.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | cb2-kai | null | cb2-kai/finetuning-sentiment-model-3000-samples | 12 | null | transformers | 10,702 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.86
- name: F1
type: f1
value: 0.8679245283018867
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3568
- Accuracy: 0.86
- F1: 0.8679
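A minimal inference sketch using the standard `text-classification` pipeline; the example review is invented, and the label names returned depend on the model's config.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="cb2-kai/finetuning-sentiment-model-3000-samples")
print(classifier("This movie was a complete waste of time."))
# returns something like [{'label': ..., 'score': ...}]; the label-to-sentiment mapping comes from the config
```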
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Ameer05/distilbart-cnn-12-6-finetuned-resume-summarizer | 0236fc2c55ae96171fe407186bba2038ea4e9914 | 2022-03-21T19:35:06.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"summarization",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| summarization | false | Ameer05 | null | Ameer05/distilbart-cnn-12-6-finetuned-resume-summarizer | 12 | null | transformers | 10,703 | ---
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: distilbart-cnn-12-6-finetuned-resume-summarizer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-cnn-12-6-finetuned-resume-summarizer
This model is a fine-tuned version of [Ameer05/model-tokenizer-repo](https://huggingface.co/Ameer05/model-tokenizer-repo) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1123
- Rouge1: 52.5826
- Rouge2: 34.3861
- Rougel: 41.8525
- Rougelsum: 51.0015
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| No log | 0.91 | 5 | 3.2243 | 42.8593 | 24.8652 | 34.1789 | 41.406 |
| No log | 1.91 | 10 | 2.6948 | 48.8571 | 28.6711 | 39.2648 | 46.188 |
| No log | 2.91 | 15 | 2.4665 | 50.6085 | 30.4034 | 39.7406 | 48.5449 |
| No log | 3.91 | 20 | 2.3329 | 52.2357 | 32.3398 | 41.574 | 49.4316 |
| 3.6611 | 4.91 | 25 | 2.2362 | 52.0134 | 33.1612 | 41.3103 | 50.255 |
| 3.6611 | 5.91 | 30 | 2.1833 | 51.5434 | 32.7045 | 40.5683 | 49.4238 |
| 3.6611 | 6.91 | 35 | 2.1462 | 53.5144 | 35.4518 | 42.8615 | 51.4053 |
| 3.6611 | 7.91 | 40 | 2.1518 | 52.0985 | 33.6754 | 41.5936 | 50.5159 |
| 2.0326 | 8.91 | 45 | 2.1075 | 53.1401 | 34.9721 | 42.2973 | 51.8454 |
| 2.0326 | 9.91 | 50 | 2.1123 | 52.5826 | 34.3861 | 41.8525 | 51.0015 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 2.0.0
- Tokenizers 0.10.3
|
asahi417/tner-roberta-large-tweet-2020 | 9f2d61fc46ffb48b627f79a536cdb70631a6b09f | 2022-05-06T11:17:35.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | asahi417 | null | asahi417/tner-roberta-large-tweet-2020 | 12 | null | transformers | 10,704 | Entry not found |
gayanin/t5-small-med-term-conditional-masking | f3dbc58d0e6311392d8b5a17dbcfe176bff97c50 | 2022-03-24T14:54:49.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | gayanin | null | gayanin/t5-small-med-term-conditional-masking | 12 | null | transformers | 10,705 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-med-term-conditional-masking
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-med-term-conditional-masking
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6808
- Rouge2 Precision: 0.6855
- Rouge2 Recall: 0.486
- Rouge2 Fmeasure: 0.5507
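As a rough illustration of how a seq2seq checkpoint like this is typically queried (the input sentence is invented, and the card does not document the exact masking or prompt format used during training, so this is only a sketch):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "gayanin/t5-small-med-term-conditional-masking"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Invented example input; the real preprocessing/masking scheme is not described in the card.
inputs = tokenizer("the patient was treated for chronic renal failure", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```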
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:------:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.9303 | 1.0 | 15827 | 0.8262 | 0.6603 | 0.4698 | 0.5318 |
| 0.8677 | 2.0 | 31654 | 0.7679 | 0.6695 | 0.4762 | 0.539 |
| 0.8315 | 3.0 | 47481 | 0.7393 | 0.6741 | 0.4783 | 0.5418 |
| 0.7999 | 4.0 | 63308 | 0.7194 | 0.6774 | 0.4811 | 0.5448 |
| 0.7746 | 5.0 | 79135 | 0.7059 | 0.6804 | 0.4815 | 0.5459 |
| 0.7785 | 6.0 | 94962 | 0.6958 | 0.6827 | 0.4841 | 0.5485 |
| 0.7592 | 7.0 | 110789 | 0.6893 | 0.6841 | 0.4849 | 0.5494 |
| 0.745 | 8.0 | 126616 | 0.6849 | 0.6846 | 0.4852 | 0.5498 |
| 0.7443 | 9.0 | 142443 | 0.6818 | 0.6854 | 0.4865 | 0.551 |
| 0.7417 | 10.0 | 158270 | 0.6808 | 0.6855 | 0.486 | 0.5507 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Helsinki-NLP/opus-mt-tc-big-zle-de | 1cfb0609e012e563bd0778d589ef1b68de59456f | 2022-06-01T13:09:52.000Z | [
"pytorch",
"marian",
"text2text-generation",
"be",
"de",
"ru",
"uk",
"zle",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-big-zle-de | 12 | null | transformers | 10,706 | ---
language:
- be
- de
- ru
- uk
- zle
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-zle-de
results:
- task:
name: Translation rus-deu
type: translation
args: rus-deu
dataset:
name: flores101-devtest
type: flores_101
args: rus deu devtest
metrics:
- name: BLEU
type: bleu
value: 26.1
- task:
name: Translation ukr-deu
type: translation
args: ukr-deu
dataset:
name: flores101-devtest
type: flores_101
args: ukr deu devtest
metrics:
- name: BLEU
type: bleu
value: 28.1
- task:
name: Translation bel-deu
type: translation
args: bel-deu
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: bel-deu
metrics:
- name: BLEU
type: bleu
value: 44.8
- task:
name: Translation rus-deu
type: translation
args: rus-deu
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: rus-deu
metrics:
- name: BLEU
type: bleu
value: 51.8
- task:
name: Translation ukr-deu
type: translation
args: ukr-deu
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: ukr-deu
metrics:
- name: BLEU
type: bleu
value: 54.7
- task:
name: Translation rus-deu
type: translation
args: rus-deu
dataset:
name: newstest2013
type: wmt-2013-news
args: rus-deu
metrics:
- name: BLEU
type: bleu
value: 25.2
---
# opus-mt-tc-big-zle-de
Neural machine translation model for translating from East Slavic languages (zle) to German (de).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-19
* source language(s): bel rus ukr
* target language(s): deu
* model: transformer-big
* data: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807_transformer-big_2022-03-19.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-deu/opusTCv20210807_transformer-big_2022-03-19.zip)
* more information released models: [OPUS-MT zle-deu README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zle-deu/README.md)
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"Это был по-настоящему прекрасный день.",
"Дождь кончился?"
]
model_name = "pytorch-models/opus-mt-tc-big-zle-de"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
    print(tokenizer.decode(t, skip_special_tokens=True))
# expected output:
# Es war ein wirklich schöner Tag.
# Ist der Regen vorbei?
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-zle-de")
print(pipe("Это был по-настоящему прекрасный день."))
# expected output: Es war ein wirklich schöner Tag.
```
## Benchmarks
* test set translations: [opusTCv20210807_transformer-big_2022-03-19.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-deu/opusTCv20210807_transformer-big_2022-03-19.test.txt)
* test set scores: [opusTCv20210807_transformer-big_2022-03-19.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-deu/opusTCv20210807_transformer-big_2022-03-19.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| bel-deu | tatoeba-test-v2021-08-07 | 0.63720 | 44.8 | 551 | 4182 |
| rus-deu | tatoeba-test-v2021-08-07 | 0.69768 | 51.8 | 12800 | 98842 |
| ukr-deu | tatoeba-test-v2021-08-07 | 0.70860 | 54.7 | 10319 | 64646 |
| bel-deu | flores101-devtest | 0.47052 | 12.9 | 1012 | 25094 |
| rus-deu | flores101-devtest | 0.56159 | 26.1 | 1012 | 25094 |
| ukr-deu | flores101-devtest | 0.57251 | 28.1 | 1012 | 25094 |
| rus-deu | newstest2012 | 0.49257 | 19.8 | 3003 | 72886 |
| rus-deu | newstest2013 | 0.54015 | 25.2 | 3000 | 63737 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 1bdabf7
* port time: Wed Mar 23 22:16:45 EET 2022
* port machine: LM0-400-22516.local
|
agdsga/chinese-roberta-wwm-ext-large | 4517ed210722c3f6594f54d7ee096a94e8461e82 | 2022-03-25T03:05:07.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | agdsga | null | agdsga/chinese-roberta-wwm-ext-large | 12 | null | transformers | 10,707 | Entry not found |
TeamFnord/manga-ocr | 1d0bb748d3b7551b2c556f406157459949ad32bc | 2022-02-10T07:50:15.000Z | [
"pytorch",
"vision-encoder-decoder",
"ja",
"dataset:manga109s",
"transformers",
"image-to-text",
"license:apache-2.0"
]
| image-to-text | false | TeamFnord | null | TeamFnord/manga-ocr | 12 | null | transformers | 10,708 | ---
language: ja
tags:
- image-to-text
license: apache-2.0
datasets:
- manga109s
---
# Manga OCR
Optical character recognition for Japanese text, with the main focus being Japanese manga.
It uses the [Vision Encoder Decoder](https://huggingface.co/docs/transformers/model_doc/visionencoderdecoder) framework.
Manga OCR can be used as a general purpose printed Japanese OCR, but its main goal was to provide a high quality
text recognition, robust against various scenarios specific to manga:
- both vertical and horizontal text
- text with furigana
- text overlaid on images
- wide variety of fonts and font styles
- low quality images
Code is available [here](https://github.com/kha-white/manga_ocr).
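A minimal usage sketch with the generic `image-to-text` pipeline; the image path is hypothetical, and the dedicated [manga_ocr](https://github.com/kha-white/manga_ocr) package linked above is the more convenient interface in practice.
```python
from transformers import pipeline

ocr = pipeline("image-to-text", model="TeamFnord/manga-ocr")
print(ocr("panel.png"))  # hypothetical path to a manga panel image
```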
|
DMetaSoul/sbert-chinese-general-v1 | a3bebbf20c355066c73ad1cb05f5342d254be9e2 | 2022-04-04T07:22:58.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers",
"semantic-search",
"chinese"
]
| sentence-similarity | false | DMetaSoul | null | DMetaSoul/sbert-chinese-general-v1 | 12 | null | sentence-transformers | 10,709 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- semantic-search
- chinese
---
# DMetaSoul/sbert-chinese-general-v1
This model is based on the [bert-base-chinese](https://huggingface.co/bert-base-chinese) BERT model and was trained on semantic-similarity datasets such as NLI, PAWS-X, PKU-Paraphrase-Bank, and STS. It is suited to **general-purpose semantic matching** scenarios such as text feature extraction, text-vector clustering, and semantic text search (it performs well on the Chinese-STS task, but is not optimal on other tasks and carries some risk of overfitting).
Note: a [lightweight distilled version](https://huggingface.co/DMetaSoul/sbert-chinese-general-v1-distill) of this model is also available as open source!
# Usage
## 1. Sentence-Transformers
To use this model with the [sentence-transformers](https://www.SBERT.net) framework, first install the package:
```
pip install -U sentence-transformers
```
Then load the model and extract sentence embedding vectors with the following code:
```python
from sentence_transformers import SentenceTransformer
sentences = ["我的儿子!他猛然间喊道,我的儿子在哪儿?", "我的儿子呢!他突然喊道,我的儿子在哪里?"]
model = SentenceTransformer('DMetaSoul/sbert-chinese-general-v1')
embeddings = model.encode(sentences)
print(embeddings)
```
## 2. HuggingFace Transformers
If you prefer not to use [sentence-transformers](https://www.SBERT.net), you can also load the model with HuggingFace Transformers and extract text vectors as follows:
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["我的儿子!他猛然间喊道,我的儿子在哪儿?", "我的儿子呢!他突然喊道,我的儿子在哪里?"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('DMetaSoul/sbert-chinese-general-v1')
model = AutoModel.from_pretrained('DMetaSoul/sbert-chinese-general-v1')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation
The model was evaluated on several public semantic-matching datasets by computing the correlation between embedding similarity and the gold labels:
| | **csts_dev** | **csts_test** | **afqmc** | **lcqmc** | **bqcorpus** | **pawsx** | **xiaobu** |
| ------------ | ------------ | ------------- | --------- | --------- | ------------ | --------- | ---------- |
| **spearman** | 84.54% | 82.17% | 23.80% | 65.94% | 45.52% | 11.52% | 48.51% |
## Citing & Authors
E-mail: [email protected] |
DMetaSoul/sbert-chinese-qmc-domain-v1 | 25a28159ba2986912df1f5553c0d7b50202f9530 | 2022-04-04T07:24:17.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers",
"semantic-search",
"chinese"
]
| sentence-similarity | false | DMetaSoul | null | DMetaSoul/sbert-chinese-qmc-domain-v1 | 12 | null | sentence-transformers | 10,710 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- semantic-search
- chinese
---
# DMetaSoul/sbert-chinese-qmc-domain-v1
This model is based on the [bert-base-chinese](https://huggingface.co/bert-base-chinese) BERT model and was fine-tuned on the Baidu Zhidao question-matching dataset ([LCQMC](http://icrc.hitsz.edu.cn/Article/show/171.html)). It is suited to **open-domain question matching** scenarios, for example:
- 洗澡用什么香皂好?vs. 洗澡用什么香皂好
- 大连哪里拍婚纱照好点? vs. 大连哪里拍婚纱照比较好
- 银行卡怎样挂失?vs. 银行卡丢了怎么挂失啊?
Note: a [lightweight distilled version](https://huggingface.co/DMetaSoul/sbert-chinese-qmc-domain-v1-distill) of this model is also available as open source!
# Usage
## 1. Sentence-Transformers
To use this model with the [sentence-transformers](https://www.SBERT.net) framework, first install the package:
```
pip install -U sentence-transformers
```
Then load the model and extract sentence embedding vectors with the following code:
```python
from sentence_transformers import SentenceTransformer
sentences = ["我的儿子!他猛然间喊道,我的儿子在哪儿?", "我的儿子呢!他突然喊道,我的儿子在哪里?"]
model = SentenceTransformer('DMetaSoul/sbert-chinese-qmc-domain-v1')
embeddings = model.encode(sentences)
print(embeddings)
```
## 2. HuggingFace Transformers
If you prefer not to use [sentence-transformers](https://www.SBERT.net), you can also load the model with HuggingFace Transformers and extract text vectors as follows:
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["我的儿子!他猛然间喊道,我的儿子在哪儿?", "我的儿子呢!他突然喊道,我的儿子在哪里?"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('DMetaSoul/sbert-chinese-qmc-domain-v1')
model = AutoModel.from_pretrained('DMetaSoul/sbert-chinese-qmc-domain-v1')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation
The model was evaluated on several public semantic-matching datasets by computing the correlation between embedding similarity and the gold labels:
| | **csts_dev** | **csts_test** | **afqmc** | **lcqmc** | **bqcorpus** | **pawsx** | **xiaobu** |
| ------------------------------- | ------------ | ------------- | --------- | --------- | ------------ | --------- | ---------- |
| **sbert-chinese-qmc-domain-v1** | 80.90% | 76.63% | 34.51% | 77.06% | 52.96% | 12.98% | 59.48% |
## Citing & Authors
E-mail: [email protected] |
hackathon-pln-es/jurisbert-tsdae-sentence-transformer | 6354a1034e0e83573469da0c22da5d6e422a6450 | 2022-03-30T16:47:04.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"es",
"dataset:scjnugacj/scjn_dataset_corpus_tesis",
"sentence-transformers",
"sentence-similarity",
"transformers"
]
| sentence-similarity | false | hackathon-pln-es | null | hackathon-pln-es/jurisbert-tsdae-sentence-transformer | 12 | 3 | sentence-transformers | 10,711 | ---
widget:
- text: "interés superior del menor"
- text: "interés superior del infante"
- text: "interés superior de la niñez"
pipeline_tag: sentence-similarity
language: es
datasets: scjnugacj/scjn_dataset_corpus_tesis
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# jurisbert-tsdae-sentence-transformer
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ['interés superior del menor', 'interés superior del infante']
model = SentenceTransformer('hackathon-pln-es/jurisbert-tsdae-sentence-transformer')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
    return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['interés superior del menor', 'interés superior del infante']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('hackathon-pln-es/jurisbert-tsdae-sentence-transformer')
model = AutoModel.from_pretrained('hackathon-pln-es/jurisbert-tsdae-sentence-transformer')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 25000 with parameters:
```
{'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.DenoisingAutoEncoderLoss.DenoisingAutoEncoderLoss`
Parameters of the fit()-Method:
```
{
  "epochs": 10,
  "evaluation_steps": 0,
  "evaluator": "NoneType",
  "max_grad_norm": 1,
  "optimizer_class": "<class 'transformers.optimization.AdamW'>",
  "optimizer_params": {
    "lr": 3e-05
  },
  "scheduler": "constantlr",
  "steps_per_epoch": null,
  "warmup_steps": 10000,
  "weight_decay": 0
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Team
The team is made up of @gpalomeque @aurelipvs @cecilimacias @giomadariaga @cattsytabla |
nikhedward/t5-small-finetuned-multi-news | a278da69a13f159e20323b140ce12c3d5b06b806 | 2022-03-26T04:31:49.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:multi_news",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | nikhedward | null | nikhedward/t5-small-finetuned-multi-news | 12 | null | transformers | 10,712 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- multi_news
metrics:
- rouge
model-index:
- name: t5-small-finetuned-multi-news
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: multi_news
type: multi_news
args: default
metrics:
- name: Rouge1
type: rouge
value: 14.5549
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-multi-news
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the multi_news dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7775
- Rouge1: 14.5549
- Rouge2: 4.5934
- Rougel: 11.1178
- Rougelsum: 12.8964
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 3.0211 | 1.0 | 1405 | 2.7775 | 14.5549 | 4.5934 | 11.1178 | 12.8964 | 19.0 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
avb/bert-base-uncased-finetuned-cola | ec6845f0c0f49023d4e77c47cb0a8fc1e8a3b08a | 2022-04-05T22:52:25.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | avb | null | avb/bert-base-uncased-finetuned-cola | 12 | null | transformers | 10,713 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5642446874338215
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8297
- Matthews Correlation: 0.5642
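The Matthews correlation reported above is the standard MCC between predicted and gold acceptability labels. Below is a minimal sketch of how such a score is computed; the label arrays are invented toy data, not actual CoLA predictions.
```python
from sklearn.metrics import matthews_corrcoef

y_true = [1, 0, 1, 1, 0, 1]  # invented gold labels (1 = acceptable, 0 = unacceptable)
y_pred = [1, 0, 0, 1, 0, 1]  # invented model predictions
print(matthews_corrcoef(y_true, y_pred))  # ranges from -1 to 1; 0 is chance level
```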
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4869 | 1.0 | 535 | 0.5115 | 0.5134 |
| 0.2872 | 2.0 | 1070 | 0.5523 | 0.5399 |
| 0.1836 | 3.0 | 1605 | 0.7024 | 0.5619 |
| 0.1249 | 4.0 | 2140 | 0.8297 | 0.5642 |
| 0.0908 | 5.0 | 2675 | 0.9284 | 0.5508 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
rahulacj/bertweet-base-finetuned-sentiment-analysis | 3fb8a77a51fbf049f42fbb2f5533dbd113d413ad | 2022-03-31T16:21:16.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | rahulacj | null | rahulacj/bertweet-base-finetuned-sentiment-analysis | 12 | null | transformers | 10,714 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bertweet-base-finetuned-sentiment-analysis
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertweet-base-finetuned-sentiment-analysis
This model is a fine-tuned version of [cardiffnlp/bertweet-base-sentiment](https://huggingface.co/cardiffnlp/bertweet-base-sentiment) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8458
- Accuracy: 0.6426
- F1: 0.6397
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8904 | 1.0 | 630 | 0.8509 | 0.6381 | 0.6340 |
| 0.7655 | 2.0 | 1260 | 0.8345 | 0.6579 | 0.6559 |
| 0.66 | 3.0 | 1890 | 0.9199 | 0.6548 | 0.6514 |
| 0.447 | 4.0 | 2520 | 1.0324 | 0.6429 | 0.6417 |
| 0.3585 | 5.0 | 3150 | 1.1234 | 0.6452 | 0.6424 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.12.0
|
JNK789/distilbert-base-uncased-finetuned-emotion | a32fb3f537e2b5d71c08dec1d32e15a9f046bbff | 2022-04-01T17:30:59.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | JNK789 | null | JNK789/distilbert-base-uncased-finetuned-emotion | 12 | null | transformers | 10,715 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9305
- name: F1
type: f1
value: 0.9307950942842982
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1712
- Accuracy: 0.9305
- F1: 0.9308
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7721 | 1.0 | 250 | 0.2778 | 0.9145 | 0.9131 |
| 0.2103 | 2.0 | 500 | 0.1818 | 0.925 | 0.9249 |
| 0.1446 | 3.0 | 750 | 0.1712 | 0.9305 | 0.9308 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
hackathon-pln-es/roberta-base-bne-squad2-es | fa89a2130f209e946c6dc4ebef9a7f3ff9097cbd | 2022-04-02T03:46:40.000Z | [
"pytorch",
"roberta",
"question-answering",
"es",
"dataset:squad_es",
"transformers",
"autotrain_compatible"
]
| question-answering | false | hackathon-pln-es | null | hackathon-pln-es/roberta-base-bne-squad2-es | 12 | null | transformers | 10,716 | ---
language: es
datasets:
- squad_es
---
# roberta-base es for QA
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on the [squad_es(v2)](https://huggingface.co/datasets/squad_es) training dataset.
## Hyperparameters
The hyperparameters were chosen based on those used in [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2), an English-language model trained for similar purposes
```
--num_train_epochs 2
--learning_rate 3e-5
--max_seq_length 386
--doc_stride 128
```
## Performance
Evaluated on the [squad_es(v2)](https://huggingface.co/datasets/squad_es) dev set.
```
"eval_exact": 62.13526733007252,
"eval_f1": 69.38515019522332,
"eval_HasAns_exact": 53.07017543859649,
"eval_HasAns_f1": 67.57238714827123,
"eval_HasAns_total": 5928,
"eval_NoAns_exact": 71.19730185497471,
"eval_NoAns_f1": 71.19730185497471,
"eval_NoAns_total": 5930,
```
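A minimal inference sketch with the standard `question-answering` pipeline; the question/context pair below is invented for illustration.
```python
from transformers import pipeline

qa = pipeline("question-answering", model="hackathon-pln-es/roberta-base-bne-squad2-es")
result = qa(
    question="¿Dónde se celebró la conferencia?",
    context="La conferencia se celebró en Madrid en marzo de 2022.",
)
print(result["answer"], result["score"])
```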
## Team
Santiago Maximo: [smaximo](https://huggingface.co/smaximo) |
Denzil/distilbert-base-uncased-finetuned-emotion | 7282904b942a2f42e38ae22c68972150dc114c72 | 2022-04-02T14:27:32.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Denzil | null | Denzil/distilbert-base-uncased-finetuned-emotion | 12 | null | transformers | 10,717 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.9239207626877816
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2169
- Accuracy: 0.924
- F1: 0.9239
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8101 | 1.0 | 250 | 0.3068 | 0.905 | 0.9019 |
| 0.2456 | 2.0 | 500 | 0.2169 | 0.924 | 0.9239 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
facebook/data2vec-audio-large-100h | b76675f9baf73c95727a01ac3fb53e4cdc53b9e3 | 2022-04-18T16:24:44.000Z | [
"pytorch",
"data2vec-audio",
"automatic-speech-recognition",
"en",
"dataset:librispeech_asr",
"arxiv:2202.03555",
"transformers",
"speech",
"license:apache-2.0"
]
| automatic-speech-recognition | false | facebook | null | facebook/data2vec-audio-large-100h | 12 | null | transformers | 10,718 | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# Data2Vec-Audio-Large-100h
[Facebook's Data2Vec](https://ai.facebook.com/research/data2vec-a-general-framework-for-self-supervised-learning-in-speech-vision-and-language/)
The large model pretrained and fine-tuned on 100 hours of Librispeech on 16kHz sampled speech audio. When using the model
make sure that your speech input is also sampled at 16Khz.
[Paper](https://arxiv.org/abs/2202.03555)
Authors: Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli
**Abstract**
While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind. To get us closer to general self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech, NLP or computer vision. The core idea is to predict latent representations of the full input data based on a masked view of the input in a self-distillation setup using a standard Transformer architecture. Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which are local in nature, data2vec predicts contextualized latent representations that contain information from the entire input. Experiments on the major benchmarks of speech recognition, image classification, and natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches.
The original model can be found under https://github.com/pytorch/fairseq/tree/main/examples/data2vec .
# Pre-Training method

For more information, please take a look at the [official paper](https://arxiv.org/abs/2202.03555).
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Data2VecAudioForCTC
from datasets import load_dataset
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("facebook/data2vec-audio-large-100h")
model = Data2VecAudioForCTC.from_pretrained("facebook/data2vec-audio-large-100h")
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# tokenize
input_values = processor(ds[0]["audio"]["array"], sampling_rate=16000, return_tensors="pt", padding="longest").input_values # Batch size 1
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
|
LIA-AvignonUniversity/IWSLT2022-tamasheq-only | 4794ce98aaf3e745e659420a6da5841bf68d88ed | 2022-05-11T09:32:21.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"arxiv:2201.05051",
"transformers"
]
| null | false | LIA-AvignonUniversity | null | LIA-AvignonUniversity/IWSLT2022-tamasheq-only | 12 | null | transformers | 10,719 | ## Model and data descriptions
This is a wav2vec 2.0 base model trained on 243 hours of Tamasheq speech from the corpus presented in [Boito et al., 2022](https://arxiv.org/abs/2201.05051).
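Since this is a self-supervised checkpoint without a task head, a typical use is extracting speech representations for downstream models. The sketch below is an illustration only: it instantiates a generic 16 kHz `Wav2Vec2FeatureExtractor` (the repository may or may not ship its own preprocessor config) and feeds one second of dummy audio.
```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

feature_extractor = Wav2Vec2FeatureExtractor(sampling_rate=16000)  # generic extractor, an assumption
model = Wav2Vec2Model.from_pretrained("LIA-AvignonUniversity/IWSLT2022-tamasheq-only")

waveform = torch.zeros(16000)  # one second of dummy 16 kHz audio
inputs = feature_extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state
print(hidden_states.shape)  # (batch, frames, hidden_size)
```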
## Intended uses & limitations
Pretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations.
## Referencing our IWSLT models
```
@article{boito2022trac,
title={ON-TRAC Consortium Systems for the IWSLT 2022 Dialect and Low-resource Speech Translation Tasks},
author={Boito, Marcely Zanon and Ortega, John and Riguidel, Hugo and Laurent, Antoine and Barrault, Lo{\"\i}c and Bougares, Fethi and Chaabani, Firas and Nguyen, Ha and Barbier, Florentin and Gahbiche, Souhir and others},
journal={IWSLT},
year={2022}
}
``` |
nielsr/segformer-finetuned-sidewalk | 202fb6869965dc04c859449f942acc01a9691a8a | 2022-04-06T13:38:20.000Z | [
"pytorch",
"segformer",
"dataset:segments/sidewalk-semantic",
"transformers",
"vision",
"image-segmentation",
"license:apache-2.0"
]
| image-segmentation | false | nielsr | null | nielsr/segformer-finetuned-sidewalk | 12 | null | transformers | 10,720 | ---
license: apache-2.0
tags:
- vision
- image-segmentation
datasets:
- segments/sidewalk-semantic
widget:
- src: https://segmentsai-prod.s3.eu-west-2.amazonaws.com/assets/admin-tobias/439f6843-80c5-47ce-9b17-0b2a1d54dbeb.jpg
example_title: Brugge
---
# Segformer-b0, fine-tuned on Sidewalk
This repository contains the weights of a `SegFormerForSemanticSegmentation` model, trained using the example script.
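A minimal inference sketch, assuming the standard SegFormer processing API and that the repository ships the corresponding preprocessor config; the image URL is the widget example above.
```python
import torch
import requests
from PIL import Image
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation

url = "https://segmentsai-prod.s3.eu-west-2.amazonaws.com/assets/admin-tobias/439f6843-80c5-47ce-9b17-0b2a1d54dbeb.jpg"
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = SegformerFeatureExtractor.from_pretrained("nielsr/segformer-finetuned-sidewalk")
model = SegformerForSemanticSegmentation.from_pretrained("nielsr/segformer-finetuned-sidewalk")

inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (batch, num_labels, height/4, width/4)
predicted = logits.argmax(dim=1)  # per-pixel class indices at reduced resolution
```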
|
GioReg/notiBERTo | 024dce56175259f6734194dd063ab4217c062e43 | 2022-06-09T17:08:29.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | GioReg | null | GioReg/notiBERTo | 12 | null | transformers | 10,721 | language:
- it
A model called notiBERTo was created by training with the unsupervised masked-language modeling (MLM) objective to build and tune the model weights; this approach does not require labeled text. The goal was to obtain a BERT-based model for Italian focused on the language typical of online news, so that it reflects the style and lexicon of the press.
For the input data, publicly available databases organized by the "Wortschatz Leipzig" portal of Leipzig University were used. The portal gives access to the "Leipzig corpora collection", which hosts 900 text collections split by language (250 languages are covered) and topic, obtained mainly by crawling websites. In particular, databases of news collections gathered daily via RSS feeds and databases obtained by crawling the main Italian news websites were selected, divided into sub-databases by collection year. To create "notiBERTo", the databases for 2018, 2019, and 2020 were used, about 700MB in total.
|
vocab-transformers/distilbert-tokenizer_256k-MLM_1M | 477ba8ed1a70b84a6a2703beb589a62134a3322e | 2022-04-07T20:06:32.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | vocab-transformers | null | vocab-transformers/distilbert-tokenizer_256k-MLM_1M | 12 | null | transformers | 10,722 | # DistilBERT with 256k token embeddings
This model was initialized with a word2vec token embedding matrix with 256k entries, but these token embeddings were updated during MLM. The word2vec was trained on 100GB data from C4, MSMARCO, News, Wikipedia, S2ORC, for 3 epochs.
Then the model was trained on this dataset with MLM for 1M steps (batch size 64). The token embeddings were updated during MLM.
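A minimal usage sketch with the `fill-mask` pipeline; the example sentence is invented, and the mask token is assumed to be the usual DistilBERT `[MASK]`.
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="vocab-transformers/distilbert-tokenizer_256k-MLM_1M")
print(unmasker("The capital of France is [MASK]."))
```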
|
jaumefib/datathon-against-racism | b2eaf2e0bc03eee89ed0d7a45f895d98405293e9 | 2022-04-09T13:56:56.000Z | [
"pytorch",
"bert",
"text-classification",
"es",
"transformers",
"license:mit"
]
| text-classification | false | jaumefib | null | jaumefib/datathon-against-racism | 12 | 1 | transformers | 10,723 | ---
license: mit
language: es
widget:
- text: "Los mejores libros de Abdulrazak Gurnah, el ganador del Nobel de Literatura."
example_title: "Non-racist example"
- text: "Ya están detenidos dos rumanos señalados de cometer fraudes bancarios."
example_title: "Racist example"
---
A model that automatically classifies text messages as racist or non-racist; a usage sketch follows the label list below.
* `LABEL_0` output indicates non-racist text
* `LABEL_1` output indicates racist text
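A minimal inference sketch applying this label mapping; the example tweet is the non-racist widget example above, and the human-readable label names are added here for illustration only.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="jaumefib/datathon-against-racism")

label_names = {"LABEL_0": "non-racist", "LABEL_1": "racist"}  # mapping from the list above
result = classifier("Los mejores libros de Abdulrazak Gurnah, el ganador del Nobel de Literatura.")[0]
print(label_names[result["label"]], result["score"])
```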
# Data
Tweets from Benítez-Andrades et al. (2022) dataset and the Datathon Against Racism tweets dataset. |
course5i/SEAD-L-6_H-256_A-8-sst2 | c192a4180ae57623bef4471d76a469b53afe2229 | 2022-06-12T19:43:45.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"en",
"dataset:glue",
"dataset:sst2",
"arxiv:1910.01108",
"arxiv:1909.10351",
"arxiv:2002.10957",
"arxiv:1810.04805",
"arxiv:1804.07461",
"arxiv:1905.00537",
"transformers",
"SEAD",
"license:apache-2.0"
]
| text-classification | false | course5i | null | course5i/SEAD-L-6_H-256_A-8-sst2 | 12 | null | transformers | 10,724 | ---
language:
- en
license: apache-2.0
tags:
- SEAD
datasets:
- glue
- sst2
---
## Paper
## [SEAD: SIMPLE ENSEMBLE AND KNOWLEDGE DISTILLATION FRAMEWORK FOR NATURAL LANGUAGE UNDERSTANDING](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63)
Authors: *Moyan Mei*, *Rohit Sroch*
## Abstract
With the widespread use of pre-trained language models (PLM), there has been increased research on how to make them applicable, especially in limited-resource or low latency high throughput scenarios. One of the dominant approaches is knowledge distillation (KD), where a smaller model is trained by receiving guidance from a large PLM. While there are many successful designs for learning knowledge from teachers, it remains unclear how students can learn better. Inspired by real university teaching processes, in this work we further explore knowledge distillation and propose a very simple yet effective framework, SEAD, to further improve task-specific generalization by utilizing multiple teachers. Our experiments show that SEAD leads to better performance compared to other popular KD methods [[1](https://arxiv.org/abs/1910.01108)] [[2](https://arxiv.org/abs/1909.10351)] [[3](https://arxiv.org/abs/2002.10957)] and achieves comparable or superior performance to its teacher model such as BERT [[4](https://arxiv.org/abs/1810.04805)] on total 13 tasks for the GLUE [[5](https://arxiv.org/abs/1804.07461)] and SuperGLUE [[6](https://arxiv.org/abs/1905.00537)] benchmarks.
*Moyan Mei and Rohit Sroch. 2022. [SEAD: Simple ensemble and knowledge distillation framework for natural language understanding](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63).
Lattice, THE MACHINE LEARNING JOURNAL by Association of Data Scientists, 3(1).*
## SEAD-L-6_H-256_A-8-sst2
This is a student model distilled from [**BERT base**](https://huggingface.co/bert-base-uncased) as teacher by using SEAD framework on **sst2** task. For weights initialization, we used [microsoft/xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased)
## All SEAD Checkpoints
Other Community Checkpoints: [here](https://huggingface.co/models?search=SEAD)
## Intended uses & limitations
More information needed
### Training hyperparameters
Please take a look at the `training_args.bin` file
```python
import torch
hyperparameters = torch.load("training_args.bin")
```
### Evaluation results
| eval_accuracy | eval_runtime | eval_samples_per_second | eval_steps_per_second | eval_loss | eval_samples |
|:-------------:|:------------:|:-----------------------:|:---------------------:|:---------:|:------------:|
| 0.9266 | 1.3676 | 637.636 | 20.475 | 0.2503 | 872 |
### Framework versions
- Transformers >=4.8.0
- Pytorch >=1.6.0
- TensorFlow >=2.5.0
- Flax >=0.3.5
- Datasets >=1.10.2
- Tokenizers >=0.11.6
If you use these models, please cite the following paper:
```
@article{article,
author={Mei, Moyan and Sroch, Rohit},
title={SEAD: Simple Ensemble and Knowledge Distillation Framework for Natural Language Understanding},
volume={3},
number={1},
journal={Lattice, The Machine Learning Journal by Association of Data Scientists},
day={26},
year={2022},
month={Feb},
url = {www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63}
}
```
|
JminJ/koElectra_base_Bad_Sentence_Classifier | 51a4437b0ed0920c0c41de4fb9e09dab50e1cdff | 2022-04-11T01:50:27.000Z | [
"pytorch",
"electra",
"text-classification",
"arxiv:2003.10555",
"transformers"
]
| text-classification | false | JminJ | null | JminJ/koElectra_base_Bad_Sentence_Classifier | 12 | null | transformers | 10,725 | # Bad_text_classifier
## Model introduction
This model determines whether comments and chat messages found across the internet contain offensive content or not. It was fine-tuned on public datasets whose labels were revised and then merged together. Please understand that the model cannot always judge every sentence correctly.
```
NOTE)
Due to copyright issues with the public datasets, the modified data used to train the model cannot be released.
Also, the model's outputs are unrelated to my own opinions.
```
## Dataset
### data label
* **0 : bad sentence**
* **1 : not bad sentence**
### Datasets used
* [smilegate-ai/Korean Unsmile Dataset](https://github.com/smilegate-ai/korean_unsmile_dataset)
* [kocohub/Korean HateSpeech Dataset](https://github.com/kocohub/korean-hate-speech)
### Dataset processing
The two datasets, which were not originally binary-classification datasets, were relabeled into a binary format; then only the label 1 (not bad sentence) examples from the Korean HateSpeech Dataset were extracted and merged into the processed Korean Unsmile Dataset.
</br>
**Some examples that had been labeled as clean in the Korean Unsmile Dataset were relabeled as 0 (bad sentence).**
* Among sentences containing "~노", those that also contain "이기" or "노무" were relabeled as 0 (bad sentence)
* Examples with sexual connotations such as "좆" or "봊" were relabeled as 0 (bad sentence)
</br>
## Model Training
* Fine-tuning was performed with `ElectraForSequenceClassification` from huggingface transformers.
* Three publicly available Korean Electra models were each trained separately.
### use model
* [Beomi/KcELECTRA](https://github.com/Beomi/KcELECTRA)
* [monologg/koELECTRA](https://github.com/monologg/KoELECTRA)
* [tunib/electra-ko-base](https://huggingface.co/tunib/electra-ko-base)
## How to use model?
```PYTHON
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained('JminJ/koElectra_base_Bad_Sentence_Classifier')
tokenizer = AutoTokenizer.from_pretrained('JminJ/koElectra_base_Bad_Sentence_Classifier')
```
## Model Valid Accuracy
| model | accuracy |
| ---------- | ---------- |
| kcElectra_base_fp16_wd_custom_dataset | 0.8849 |
| tunibElectra_base_fp16_wd_custom_dataset | 0.8726 |
| koElectra_base_fp16_wd_custom_dataset | 0.8434 |
```
Note)
All models were trained with the same seed, learning_rate (3e-06), weight_decay lambda (0.001), and batch_size (128).
```
## Contact
* [email protected]
</br></br>
## Github
* https://github.com/JminJ/Bad_text_classifier
</br></br>
## Reference
* [Beomi/KcELECTRA](https://github.com/Beomi/KcELECTRA)
* [monologg/koELECTRA](https://github.com/monologg/KoELECTRA)
* [tunib/electra-ko-base](https://huggingface.co/tunib/electra-ko-base)
* [smilegate-ai/Korean Unsmile Dataset](https://github.com/smilegate-ai/korean_unsmile_dataset)
* [kocohub/Korean HateSpeech Dataset](https://github.com/kocohub/korean-hate-speech)
* [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://arxiv.org/abs/2003.10555)
|
CellsInACell/faster_rcnn_count_cho_cells | f4cfa5022ee206a8b7a782b2393ae9c8c64e290d | 2022-04-11T10:57:01.000Z | [
"pytorch",
"resnet",
"transformers",
"object-detection"
]
| object-detection | false | CellsInACell | null | CellsInACell/faster_rcnn_count_cho_cells | 12 | null | transformers | 10,726 | ---
tags:
- object-detection
- pytorch
---
Model for counting CHO cells
|
Seethal/general_sentiment_model | fb00e5af49772a47e109c4ba952576d57663826a | 2022-04-11T17:58:16.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | Seethal | null | Seethal/general_sentiment_model | 12 | null | transformers | 10,727 | Entry not found |
lewtun/sagemaker-distilbert-emotion | e2206a20be366ded280b7365cc5518c983dfbe18 | 2022-07-03T05:14:27.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | lewtun | null | lewtun/sagemaker-distilbert-emotion | 12 | null | transformers | 10,728 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: sagemaker-distilbert-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.921
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: default
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.921
verified: true
- name: Precision Macro
type: precision
value: 0.8870419502496194
verified: true
- name: Precision Micro
type: precision
value: 0.921
verified: true
- name: Precision Weighted
type: precision
value: 0.9208079974712109
verified: true
- name: Recall Macro
type: recall
value: 0.8688429370077566
verified: true
- name: Recall Micro
type: recall
value: 0.921
verified: true
- name: Recall Weighted
type: recall
value: 0.921
verified: true
- name: F1 Macro
type: f1
value: 0.87642650638535
verified: true
- name: F1 Micro
type: f1
value: 0.9209999999999999
verified: true
- name: F1 Weighted
type: f1
value: 0.9203938811554648
verified: true
- name: loss
type: loss
value: 0.23216550052165985
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sagemaker-distilbert-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2322
- Accuracy: 0.921
## Model description
More information needed
## Intended uses & limitations
More information needed
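A minimal inference sketch with the `text-classification` pipeline (illustrative, not part of the original card; the returned labels may be generic `LABEL_0` … `LABEL_5` indices into the emotion dataset's classes):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="lewtun/sagemaker-distilbert-emotion")
print(classifier("I am so happy today!"))
```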
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9306 | 1.0 | 500 | 0.2322 | 0.921 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
Salesforce/codegen-6B-nl | f849d0d3e3b085afeba9e3c729836693fd69deda | 2022-06-28T17:44:34.000Z | [
"pytorch",
"codegen",
"text-generation",
"arxiv:2203.13474",
"transformers",
"license:bsd-3-clause"
]
| text-generation | false | Salesforce | null | Salesforce/codegen-6B-nl | 12 | null | transformers | 10,729 | ---
license: bsd-3-clause
---
# CodeGen (CodeGen-NL 6B)
## Model description
CodeGen is a family of autoregressive language models for **program synthesis** from the paper: [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. The models are originally released in [this repository](https://github.com/salesforce/CodeGen), under 3 pre-training data variants (`NL`, `Multi`, `Mono`) and 4 model size variants (`350M`, `2B`, `6B`, `16B`).
The checkpoint included in this repository is denoted as **CodeGen-NL 6B** in the paper, where "NL" means it is pre-trained on the Pile and "6B" refers to the number of trainable parameters.
## Training data
This checkpoint (CodeGen-NL 6B) was pre-trained on [the Pile](https://github.com/EleutherAI/the-pile), a large-scale curated dataset created by [EleutherAI](https://www.eleuther.ai/). Parts of the dataset include code data.
## Training procedure
CodeGen was trained using cross-entropy loss to maximize the likelihood of sequential inputs.
The family of models are trained using multiple TPU-v4-512 by Google, leveraging data and model parallelism.
See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Evaluation results
We evaluate our models on two code generation benchmarks: HumanEval and MTPB. Please refer to the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Intended Use and Limitations
As an autoregressive language model, CodeGen is capable of extracting features from given natural language and programming language texts, and calculating the likelihood of them.
However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well.
## How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-6B-nl")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-6B-nl")
text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
## BibTeX entry and citation info
```bibtex
@article{Nijkamp2022ACP,
title={A Conversational Paradigm for Program Synthesis},
author={Nijkamp, Erik and Pang, Bo and Hayashi, Hiroaki and Tu, Lifu and Wang, Huan and Zhou, Yingbo and Savarese, Silvio and Xiong, Caiming},
journal={arXiv preprint},
year={2022}
}
```
|
ABrinkmann/sbert_xtremedistil-l6-h256-uncased-mean-cosine-h32 | f24845ed1345fce0b699406babc6f6bb31682e98 | 2022-04-13T15:45:07.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity"
]
| sentence-similarity | false | ABrinkmann | null | ABrinkmann/sbert_xtremedistil-l6-h256-uncased-mean-cosine-h32 | 12 | null | sentence-transformers | 10,730 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# ABrinkmann/sbert_xtremedistil-l6-h256-uncased-mean-cosine-h32
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 32 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ABrinkmann/sbert_xtremedistil-l6-h256-uncased-mean-cosine-h32')
embeddings = model.encode(sentences)
print(embeddings)
```
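Because the embeddings are only 32-dimensional, pairwise similarity is cheap to compute; a minimal follow-up using the sentence-transformers utility (illustrative, not part of the original card):
```python
from sentence_transformers import util

# cosine similarity between the two example sentences encoded above
score = util.cos_sim(embeddings[0], embeddings[1])
print(score)
```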
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ABrinkmann/sbert_xtremedistil-l6-h256-uncased-mean-cosine-h32)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 251 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 26,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 16, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 256, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 256, 'out_features': 32, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Adrian/distilbert-base-uncased-finetuned-emotion | e57ae4c3dddd6af85d98dde9aad13a1440d75678 | 2022-04-14T22:11:34.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Adrian | null | Adrian/distilbert-base-uncased-finetuned-emotion | 12 | null | transformers | 10,731 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9275
- name: F1
type: f1
value: 0.927345202022014
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2071
- Accuracy: 0.9275
- F1: 0.9273
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8153 | 1.0 | 250 | 0.2942 | 0.9125 | 0.9102 |
| 0.2406 | 2.0 | 500 | 0.2071 | 0.9275 | 0.9273 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Manishkalra/finetuning-sentiment-model-4000-samples | ce1155d930c025c1e9e134a7b8eacdf241b96ab2 | 2022-04-15T05:05:50.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Manishkalra | null | Manishkalra/finetuning-sentiment-model-4000-samples | 12 | null | transformers | 10,732 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-4000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9
- name: F1
type: f1
value: 0.9038461538461539
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-4000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2706
- Accuracy: 0.9
- F1: 0.9038
## Model description
More information needed
## Intended uses & limitations
More information needed
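A minimal inference sketch (the review text is illustrative, and index 1 is assumed to be the positive class, as is conventional for IMDB fine-tunes):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("Manishkalra/finetuning-sentiment-model-4000-samples")
tokenizer = AutoTokenizer.from_pretrained("Manishkalra/finetuning-sentiment-model-4000-samples")

inputs = tokenizer("A surprisingly touching film with great performances.", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)  # assumed order: [P(negative), P(positive)]
```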
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
schhwmn/mt5-base-finetuned-ukr-gec | b0c565b77431bffa00cd680fe0f7f3b40a8e9e91 | 2022-05-23T07:56:33.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"uk",
"arxiv:2103.16997",
"transformers",
"gec",
"autotrain_compatible"
]
| text2text-generation | false | schhwmn | null | schhwmn/mt5-base-finetuned-ukr-gec | 12 | 1 | transformers | 10,733 | ---
language: uk
tags:
- gec
widget:
- text: "я й не думав що комп'ютерна лінгвістика це легкоо."
---
This model was finetuned on errorful sentences from the `train` subset of [UA-GEC](https://github.com/grammarly/ua-gec) corpus, introduced in [UA-GEC: Grammatical Error Correction and Fluency Corpus for the Ukrainian Language](https://arxiv.org/abs/2103.16997) paper.
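Before the data and training details below, here is a minimal correction sketch (assuming the standard seq2seq generation API; the decoding settings are illustrative):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("schhwmn/mt5-base-finetuned-ukr-gec")
model = AutoModelForSeq2SeqLM.from_pretrained("schhwmn/mt5-base-finetuned-ukr-gec")

text = "я й не думав що комп'ютерна лінгвістика це легкоо."  # errorful example from the widget above
input_ids = tokenizer(text, return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```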
Only sentences containing errors were used; 8,874 sentences for training and 987 sentences for validation. The training arguments were defined as follows:
```
batch_size = 8
num_train_epochs = 6
learning_rate=5e-5
weight_decay=0.01
optim = "adafactor"
``` |
choondrise/antonio | a6c62faa669ed601f9910840d07f5d6bbc1cf35d | 2022-04-16T10:36:34.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | choondrise | null | choondrise/antonio | 12 | null | transformers | 10,734 | Entry not found |
Xuan-Rui/ipet-1000-all | da1f05062a28e0653800c81aded38cf32d1c85f8 | 2022-04-17T14:58:25.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Xuan-Rui | null | Xuan-Rui/ipet-1000-all | 12 | null | transformers | 10,735 | Entry not found |
4m1g0/wav2vec2-large-xls-r-300m-gl-jupyter4 | 52e4767cf21404859922d779752ec25eea378955 | 2022-04-18T19:59:47.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | 4m1g0 | null | 4m1g0/wav2vec2-large-xls-r-300m-gl-jupyter4 | 12 | null | transformers | 10,736 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xls-r-300m-gl-jupyter4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-gl-jupyter4
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0970
- Wer: 0.0636
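A minimal transcription sketch (the audio file and 16 kHz resampling are assumptions about the expected input):
```python
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("4m1g0/wav2vec2-large-xls-r-300m-gl-jupyter4")
model = Wav2Vec2ForCTC.from_pretrained("4m1g0/wav2vec2-large-xls-r-300m-gl-jupyter4")

speech, _ = librosa.load("example.wav", sr=16_000)  # hypothetical Galician audio clip
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```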
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 45
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.492 | 3.36 | 400 | 0.3109 | 0.3158 |
| 0.194 | 6.72 | 800 | 0.1279 | 0.1454 |
| 0.0794 | 10.08 | 1200 | 0.1210 | 0.1240 |
| 0.0565 | 13.44 | 1600 | 0.1209 | 0.1150 |
| 0.041 | 16.8 | 2000 | 0.1186 | 0.1107 |
| 0.0343 | 20.17 | 2400 | 0.1143 | 0.0933 |
| 0.0283 | 23.53 | 2800 | 0.1067 | 0.0900 |
| 0.0231 | 26.89 | 3200 | 0.1076 | 0.0812 |
| 0.0176 | 30.25 | 3600 | 0.1094 | 0.0780 |
| 0.0169 | 33.61 | 4000 | 0.1041 | 0.0766 |
| 0.0138 | 36.97 | 4400 | 0.1012 | 0.0711 |
| 0.0109 | 40.33 | 4800 | 0.0985 | 0.0655 |
| 0.0099 | 43.69 | 5200 | 0.0970 | 0.0636 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
sujitpal/clip-imageclef | b01520a1986989f179bab4738f79f6fee256cda8 | 2022-04-18T22:24:45.000Z | [
"pytorch",
"clip",
"feature-extraction",
"en",
"transformers",
"multimodal",
"language",
"vision",
"image-search",
"license:mit"
]
| feature-extraction | false | sujitpal | null | sujitpal/clip-imageclef | 12 | 1 | transformers | 10,737 | ---
language:
- en
tags:
- multimodal
- language
- vision
- image-search
- pytorch
license:
- mit
metrics:
- MRR
---
### Model Card: clip-imageclef
### Model Details
[OpenAI CLIP model](https://openai.com/blog/clip/) fine-tuned using image-caption pairs from the [Caption Prediction dataset](https://www.imageclef.org/2017/caption) provided for the ImageCLEF 2017 competition. The model was evaluated before and after fine-tuning; MRR@10 was 0.57 and 0.88, respectively.
### Model Date
September 6, 2021
### Model Type
The base model is the OpenAI CLIP model. It uses a ViT-B/32 Transformer architecture as an image encoder and uses a masked self-attention Transformer as a text encoder. These encoders are trained to maximize the similarity of (image, text) pairs via a contrastive loss.
### Fine-tuning
The fine-tuning can be reproduced using code from the Github repository [elsevierlabs-os/clip-image-search](https://github.com/elsevierlabs-os/clip-image-search#fine-tuning).
### Usage
```python
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("sujitpal/clip-imageclef")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# captions: list of strings, images: list of PIL images
inputs = processor(text=captions, images=images,
                   return_tensors="pt", padding=True)
output = model(**inputs)
```
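The returned `output` exposes `logits_per_image` and `logits_per_text`; a minimal sketch of turning them into retrieval scores (the softmax over captions is illustrative, not part of the original card):
```python
import torch

# similarity of each image against every caption; larger logits mean a better match
probs = output.logits_per_image.softmax(dim=-1)
best_caption_per_image = torch.argmax(probs, dim=-1)
print(best_caption_per_image)
```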
### Performance
| Model-name | k=1 | k=3 | k=5 | k=10 | k=20 |
| -------------------------------- | ----- | ----- | ----- | ----- | ----- |
| zero-shot CLIP (baseline) | 0.426 | 0.534 | 0.558 | 0.573 | 0.578 |
| clip-imageclef (this model) | 0.802 | 0.872 | 0.877 | 0.879 | 0.880 |
|
Intel/bert-base-uncased-mrpc-int8-static | 3241dc5bf9958c1576bfb6abaded5ce71da559e0 | 2022-06-10T02:40:01.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:mrpc",
"transformers",
"text-classfication",
"int8",
"Intel® Neural Compressor",
"PostTrainingStatic",
"license:apache-2.0"
]
| text-classification | false | Intel | null | Intel/bert-base-uncased-mrpc-int8-static | 12 | null | transformers | 10,738 | ---
language: en
license: apache-2.0
tags:
- text-classfication
- int8
- Intel® Neural Compressor
- PostTrainingStatic
datasets:
- mrpc
metrics:
- f1
---
# INT8 BERT base uncased finetuned MRPC
### Post-training static quantization
This is an INT8 PyTorch model quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
The original fp32 model comes from the fine-tuned model [Intel/bert-base-uncased-mrpc](https://huggingface.co/Intel/bert-base-uncased-mrpc).
The calibration dataloader is the train dataloader. The default calibration sampling size 300 isn't divisible exactly by batch size 8, so the real sampling size is 304.
The linear modules **bert.encoder.layer.9.output.dense** and **bert.encoder.layer.10.output.dense** fall back to fp32 to keep the relative accuracy loss within 1%.
### Test result
| |INT8|FP32|
|---|:---:|:---:|
| **Accuracy (eval-f1)** |0.8997|0.9042|
| **Model size (MB)** |120|418|
### Load with Intel® Neural Compressor:
```python
from neural_compressor.utils.load_huggingface import OptimizedModel
int8_model = OptimizedModel.from_pretrained(
'Intel/bert-base-uncased-mrpc-int8-static',
)
```
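Once loaded, `int8_model` can be used like a regular `BertForSequenceClassification`; a minimal scoring sketch for an MRPC-style sentence pair (the tokenizer checkpoint and label convention below are assumptions based on the fp32 parent model):
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Intel/bert-base-uncased-mrpc")
inputs = tokenizer("He said he is fine.", "He says he is fine.", return_tensors="pt")
with torch.no_grad():
    logits = int8_model(**inputs).logits
print(logits.argmax(dim=-1).item())  # assumed: 1 = paraphrase, 0 = not paraphrase
```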
|
nielsr/segformer-finetuned-sidewalk-10k-steps | afb242aa33339ebcec7481c977e23df9e72798ff | 2022-04-20T15:43:58.000Z | [
"pytorch",
"tensorboard",
"segformer",
"transformers",
"image-segmentation",
"vision",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| image-segmentation | false | nielsr | null | nielsr/segformer-finetuned-sidewalk-10k-steps | 12 | 1 | transformers | 10,739 | ---
license: apache-2.0
tags:
- image-segmentation
- vision
- generated_from_trainer
model-index:
- name: segformer-finetuned-sidewalk-50-epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-finetuned-sidewalk-50-epochs
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the segments/sidewalk-semantic dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6350
- Mean Iou: 0.3022
- Mean Accuracy: 0.3724
- Overall Accuracy: 0.8117
- Accuracy Unlabeled: nan
- Accuracy Flat-road: 0.8240
- Accuracy Flat-sidewalk: 0.8308
- Accuracy Flat-crosswalk: 0.7789
- Accuracy Flat-cyclinglane: 0.9052
- Accuracy Flat-parkingdriveway: 0.3152
- Accuracy Flat-railtrack: nan
- Accuracy Flat-curb: 0.4703
- Accuracy Human-person: 0.6444
- Accuracy Human-rider: 0.0
- Accuracy Vehicle-car: 0.9424
- Accuracy Vehicle-truck: 0.0
- Accuracy Vehicle-bus: 0.0
- Accuracy Vehicle-tramtrain: 0.0
- Accuracy Vehicle-motorcycle: 0.0
- Accuracy Vehicle-bicycle: 0.7116
- Accuracy Vehicle-caravan: 0.0
- Accuracy Vehicle-cartrailer: 0.0
- Accuracy Construction-building: 0.8716
- Accuracy Construction-door: 0.0
- Accuracy Construction-wall: 0.4736
- Accuracy Construction-fenceguardrail: 0.5408
- Accuracy Construction-bridge: 0.0
- Accuracy Construction-tunnel: nan
- Accuracy Construction-stairs: 0.0048
- Accuracy Object-pole: 0.4202
- Accuracy Object-trafficsign: 0.0754
- Accuracy Object-trafficlight: 0.0
- Accuracy Nature-vegetation: 0.9437
- Accuracy Nature-terrain: 0.8196
- Accuracy Sky: 0.9525
- Accuracy Void-ground: 0.0
- Accuracy Void-dynamic: 0.1041
- Accuracy Void-static: 0.2872
- Accuracy Void-unclear: 0.0
- Iou Unlabeled: nan
- Iou Flat-road: 0.7413
- Iou Flat-sidewalk: 0.7520
- Iou Flat-crosswalk: 0.7629
- Iou Flat-cyclinglane: 0.4453
- Iou Flat-parkingdriveway: 0.2976
- Iou Flat-railtrack: nan
- Iou Flat-curb: 0.3701
- Iou Human-person: 0.4953
- Iou Human-rider: 0.0
- Iou Vehicle-car: 0.7962
- Iou Vehicle-truck: 0.0
- Iou Vehicle-bus: 0.0
- Iou Vehicle-tramtrain: 0.0
- Iou Vehicle-motorcycle: 0.0
- Iou Vehicle-bicycle: 0.4152
- Iou Vehicle-caravan: 0.0
- Iou Vehicle-cartrailer: 0.0
- Iou Construction-building: 0.6712
- Iou Construction-door: 0.0
- Iou Construction-wall: 0.3749
- Iou Construction-fenceguardrail: 0.4613
- Iou Construction-bridge: 0.0
- Iou Construction-tunnel: nan
- Iou Construction-stairs: 0.0048
- Iou Object-pole: 0.2337
- Iou Object-trafficsign: 0.0753
- Iou Object-trafficlight: 0.0
- Iou Nature-vegetation: 0.8324
- Iou Nature-terrain: 0.7277
- Iou Sky: 0.9234
- Iou Void-ground: 0.0
- Iou Void-dynamic: 0.0913
- Iou Void-static: 0.1997
- Iou Void-unclear: 0.0
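A minimal semantic-segmentation sketch (the input image and post-processing are illustrative, not part of the original card):
```python
import torch
from PIL import Image
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation

extractor = SegformerFeatureExtractor.from_pretrained("nielsr/segformer-finetuned-sidewalk-10k-steps")
model = SegformerForSemanticSegmentation.from_pretrained("nielsr/segformer-finetuned-sidewalk-10k-steps")

image = Image.open("sidewalk_scene.jpg")  # hypothetical street-level photo
inputs = extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits   # shape (1, num_labels, H/4, W/4)
pred = logits.argmax(dim=1)[0]        # per-pixel class indices
```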
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Flat-road | Accuracy Flat-sidewalk | Accuracy Flat-crosswalk | Accuracy Flat-cyclinglane | Accuracy Flat-parkingdriveway | Accuracy Flat-railtrack | Accuracy Flat-curb | Accuracy Human-person | Accuracy Human-rider | Accuracy Vehicle-car | Accuracy Vehicle-truck | Accuracy Vehicle-bus | Accuracy Vehicle-tramtrain | Accuracy Vehicle-motorcycle | Accuracy Vehicle-bicycle | Accuracy Vehicle-caravan | Accuracy Vehicle-cartrailer | Accuracy Construction-building | Accuracy Construction-door | Accuracy Construction-wall | Accuracy Construction-fenceguardrail | Accuracy Construction-bridge | Accuracy Construction-tunnel | Accuracy Construction-stairs | Accuracy Object-pole | Accuracy Object-trafficsign | Accuracy Object-trafficlight | Accuracy Nature-vegetation | Accuracy Nature-terrain | Accuracy Sky | Accuracy Void-ground | Accuracy Void-dynamic | Accuracy Void-static | Accuracy Void-unclear | Iou Unlabeled | Iou Flat-road | Iou Flat-sidewalk | Iou Flat-crosswalk | Iou Flat-cyclinglane | Iou Flat-parkingdriveway | Iou Flat-railtrack | Iou Flat-curb | Iou Human-person | Iou Human-rider | Iou Vehicle-car | Iou Vehicle-truck | Iou Vehicle-bus | Iou Vehicle-tramtrain | Iou Vehicle-motorcycle | Iou Vehicle-bicycle | Iou Vehicle-caravan | Iou Vehicle-cartrailer | Iou Construction-building | Iou Construction-door | Iou Construction-wall | Iou Construction-fenceguardrail | Iou Construction-bridge | Iou Construction-tunnel | Iou Construction-stairs | Iou Object-pole | Iou Object-trafficsign | Iou Object-trafficlight | Iou Nature-vegetation | Iou Nature-terrain | Iou Sky | Iou Void-ground | Iou Void-dynamic | Iou Void-static | Iou Void-unclear |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:------------------:|:----------------------:|:-----------------------:|:-------------------------:|:-----------------------------:|:-----------------------:|:------------------:|:---------------------:|:--------------------:|:--------------------:|:----------------------:|:--------------------:|:--------------------------:|:---------------------------:|:------------------------:|:------------------------:|:---------------------------:|:------------------------------:|:--------------------------:|:--------------------------:|:------------------------------------:|:----------------------------:|:----------------------------:|:----------------------------:|:--------------------:|:---------------------------:|:----------------------------:|:--------------------------:|:-----------------------:|:------------:|:--------------------:|:---------------------:|:--------------------:|:---------------------:|:-------------:|:-------------:|:-----------------:|:------------------:|:--------------------:|:------------------------:|:------------------:|:-------------:|:----------------:|:---------------:|:---------------:|:-----------------:|:---------------:|:---------------------:|:----------------------:|:-------------------:|:-------------------:|:----------------------:|:-------------------------:|:---------------------:|:---------------------:|:-------------------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:---------------:|:----------------------:|:-----------------------:|:---------------------:|:------------------:|:-------:|:---------------:|:----------------:|:---------------:|:----------------:|
| 2.4745 | 1.85 | 100 | 1.7861 | 0.1056 | 0.1555 | 0.6397 | nan | 0.2287 | 0.9278 | 0.0 | 0.1406 | 0.0032 | nan | 0.0 | 0.0 | 0.0 | 0.7757 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8764 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.8387 | 0.8794 | 0.3057 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.1931 | 0.6432 | 0.0 | 0.1380 | 0.0031 | nan | 0.0 | 0.0 | 0.0 | 0.5312 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4482 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.6323 | 0.4860 | 0.3053 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.7294 | 3.7 | 200 | 1.3129 | 0.1517 | 0.1996 | 0.7410 | nan | 0.7928 | 0.8830 | 0.0 | 0.6053 | 0.0089 | nan | 0.0 | 0.0 | 0.0 | 0.7837 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8530 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9138 | 0.7742 | 0.7740 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5519 | 0.7788 | 0.0 | 0.5131 | 0.0088 | nan | 0.0 | 0.0 | 0.0 | 0.5804 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5005 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.6747 | 0.5247 | 0.7209 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4479 | 5.56 | 300 | 1.1309 | 0.1608 | 0.2113 | 0.7588 | nan | 0.7973 | 0.9008 | 0.0 | 0.7721 | 0.0269 | nan | 0.0 | 0.0 | 0.0 | 0.8744 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8581 | 0.0 | 0.0007 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.8622 | 0.8707 | 0.7985 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5861 | 0.7816 | 0.0 | 0.5877 | 0.0261 | nan | 0.0 | 0.0 | 0.0 | 0.6119 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5582 | 0.0 | 0.0007 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7024 | 0.5206 | 0.7706 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.2348 | 7.41 | 400 | 0.9644 | 0.1707 | 0.2170 | 0.7736 | nan | 0.8125 | 0.9218 | 0.0 | 0.7596 | 0.1081 | nan | 0.0000 | 0.0 | 0.0 | 0.9080 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8280 | 0.0 | 0.0334 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.8856 | 0.8260 | 0.8612 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6003 | 0.7937 | 0.0 | 0.6538 | 0.0997 | nan | 0.0000 | 0.0 | 0.0 | 0.6189 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5731 | 0.0 | 0.0330 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7147 | 0.5601 | 0.8139 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.0762 | 9.26 | 500 | 0.8819 | 0.1722 | 0.2159 | 0.7748 | nan | 0.7512 | 0.9353 | 0.0 | 0.7565 | 0.1204 | nan | 0.0016 | 0.0 | 0.0 | 0.9115 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8689 | 0.0 | 0.0565 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9098 | 0.7664 | 0.8303 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5993 | 0.7850 | 0.0 | 0.6536 | 0.1052 | nan | 0.0016 | 0.0 | 0.0 | 0.6377 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5767 | 0.0 | 0.0547 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7285 | 0.5709 | 0.7984 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.9933 | 11.11 | 600 | 0.8347 | 0.1814 | 0.2263 | 0.7822 | nan | 0.8064 | 0.9111 | 0.0 | 0.7880 | 0.1443 | nan | 0.0436 | 0.0 | 0.0 | 0.8944 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8970 | 0.0 | 0.1914 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9053 | 0.8080 | 0.8526 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6088 | 0.8045 | 0.0 | 0.6845 | 0.1255 | nan | 0.0419 | 0.0 | 0.0 | 0.6594 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5548 | 0.0 | 0.1585 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7440 | 0.6068 | 0.8176 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.9424 | 12.96 | 700 | 0.8428 | 0.1824 | 0.2271 | 0.7704 | nan | 0.6767 | 0.9270 | 0.0475 | 0.7655 | 0.1322 | nan | 0.2020 | 0.0189 | 0.0 | 0.8410 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.9205 | 0.0 | 0.2568 | 0.0 | 0.0 | nan | 0.0 | 0.0023 | 0.0 | 0.0 | 0.8994 | 0.7347 | 0.8413 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5838 | 0.7914 | 0.0475 | 0.6091 | 0.1095 | nan | 0.1597 | 0.0185 | 0.0 | 0.6706 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5131 | 0.0 | 0.1872 | 0.0 | 0.0 | nan | 0.0 | 0.0023 | 0.0 | 0.0 | 0.7525 | 0.5837 | 0.8077 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.8673 | 14.81 | 800 | 0.7934 | 0.2089 | 0.2509 | 0.7818 | nan | 0.6854 | 0.9394 | 0.7072 | 0.7240 | 0.1504 | nan | 0.2013 | 0.0186 | 0.0 | 0.9071 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.9037 | 0.0 | 0.3110 | 0.0 | 0.0 | nan | 0.0 | 0.0108 | 0.0 | 0.0 | 0.8990 | 0.7171 | 0.8513 | 0.0 | 0.0 | 0.0013 | 0.0 | nan | 0.5914 | 0.7755 | 0.6900 | 0.6673 | 0.1340 | nan | 0.1542 | 0.0183 | 0.0 | 0.6792 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5639 | 0.0 | 0.2172 | 0.0 | 0.0 | nan | 0.0 | 0.0100 | 0.0 | 0.0 | 0.7615 | 0.6014 | 0.8192 | 0.0 | 0.0 | 0.0013 | 0.0 |
| 0.8126 | 16.67 | 900 | 0.7484 | 0.2268 | 0.2784 | 0.7940 | nan | 0.6791 | 0.9397 | 0.7812 | 0.8009 | 0.1532 | nan | 0.3244 | 0.2962 | 0.0 | 0.9018 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8567 | 0.0 | 0.4772 | 0.0002 | 0.0 | nan | 0.0 | 0.0834 | 0.0 | 0.0 | 0.8992 | 0.8280 | 0.8837 | 0.0 | 0.0 | 0.0032 | 0.0 | nan | 0.6303 | 0.7968 | 0.7079 | 0.6095 | 0.1396 | nan | 0.2196 | 0.2638 | 0.0 | 0.7100 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6016 | 0.0 | 0.2860 | 0.0002 | 0.0 | nan | 0.0 | 0.0570 | 0.0 | 0.0 | 0.7678 | 0.6211 | 0.8416 | 0.0 | 0.0 | 0.0032 | 0.0 |
| 0.7989 | 18.52 | 1000 | 0.7241 | 0.2279 | 0.2803 | 0.8018 | nan | 0.7224 | 0.9402 | 0.7875 | 0.8234 | 0.1793 | nan | 0.3763 | 0.1974 | 0.0 | 0.9259 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8911 | 0.0 | 0.3994 | 0.0029 | 0.0 | nan | 0.0 | 0.0758 | 0.0 | 0.0 | 0.8619 | 0.8774 | 0.8854 | 0.0 | 0.0 | 0.0225 | 0.0 | nan | 0.6579 | 0.8292 | 0.7198 | 0.6924 | 0.1660 | nan | 0.2392 | 0.1794 | 0.0 | 0.6748 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5766 | 0.0 | 0.2654 | 0.0029 | 0.0 | nan | 0.0 | 0.0636 | 0.0 | 0.0 | 0.7582 | 0.5994 | 0.8455 | 0.0 | 0.0 | 0.0220 | 0.0 |
| 0.7429 | 20.37 | 1100 | 0.7321 | 0.2276 | 0.2862 | 0.7876 | nan | 0.8321 | 0.8491 | 0.7958 | 0.8572 | 0.2216 | nan | 0.3030 | 0.2864 | 0.0 | 0.9456 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8668 | 0.0 | 0.3757 | 0.0040 | 0.0 | nan | 0.0 | 0.1140 | 0.0 | 0.0 | 0.8839 | 0.8499 | 0.9228 | 0.0 | 0.0 | 0.0505 | 0.0 | nan | 0.6678 | 0.7848 | 0.7342 | 0.5048 | 0.1995 | nan | 0.2316 | 0.2463 | 0.0 | 0.6379 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5916 | 0.0 | 0.2668 | 0.0040 | 0.0 | nan | 0.0 | 0.0820 | 0.0 | 0.0 | 0.7827 | 0.6428 | 0.8583 | 0.0 | 0.0 | 0.0465 | 0.0 |
| 0.7131 | 22.22 | 1200 | 0.7231 | 0.2377 | 0.2995 | 0.7870 | nan | 0.8306 | 0.8458 | 0.7952 | 0.8505 | 0.2218 | nan | 0.3614 | 0.5001 | 0.0 | 0.9504 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7598 | 0.0 | 0.5317 | 0.0405 | 0.0 | nan | 0.0 | 0.1381 | 0.0 | 0.0 | 0.9284 | 0.7938 | 0.9110 | 0.0 | 0.0 | 0.1262 | 0.0 | nan | 0.7038 | 0.7740 | 0.7537 | 0.4538 | 0.1996 | nan | 0.2521 | 0.3853 | 0.0 | 0.6576 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6157 | 0.0 | 0.3046 | 0.0404 | 0.0 | nan | 0.0 | 0.0921 | 0.0 | 0.0 | 0.7846 | 0.6383 | 0.8588 | 0.0 | 0.0 | 0.0911 | 0.0 |
| 0.6919 | 24.07 | 1300 | 0.6775 | 0.2361 | 0.2885 | 0.8013 | nan | 0.7728 | 0.9073 | 0.8010 | 0.8366 | 0.1547 | nan | 0.3070 | 0.3428 | 0.0 | 0.9272 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8568 | 0.0 | 0.5009 | 0.0736 | 0.0 | nan | 0.0 | 0.0975 | 0.0 | 0.0 | 0.9297 | 0.7567 | 0.8978 | 0.0 | 0.0 | 0.0682 | 0.0 | nan | 0.6564 | 0.7929 | 0.6932 | 0.6396 | 0.1438 | nan | 0.2385 | 0.2888 | 0.0 | 0.6807 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6085 | 0.0 | 0.3114 | 0.0729 | 0.0 | nan | 0.0 | 0.0803 | 0.0 | 0.0 | 0.7857 | 0.6403 | 0.8601 | 0.0 | 0.0 | 0.0610 | 0.0 |
| 0.68 | 25.93 | 1400 | 0.6321 | 0.2575 | 0.3109 | 0.8181 | nan | 0.7851 | 0.9362 | 0.8041 | 0.8438 | 0.1694 | nan | 0.3956 | 0.5626 | 0.0 | 0.9306 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8313 | 0.0 | 0.5073 | 0.2728 | 0.0 | nan | 0.0 | 0.1741 | 0.0 | 0.0 | 0.9221 | 0.7899 | 0.9071 | 0.0 | 0.0 | 0.1157 | 0.0 | nan | 0.6781 | 0.8336 | 0.7386 | 0.7047 | 0.1564 | nan | 0.2789 | 0.4291 | 0.0 | 0.6934 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6062 | 0.0 | 0.3305 | 0.2579 | 0.0 | nan | 0.0 | 0.1228 | 0.0 | 0.0 | 0.7952 | 0.6651 | 0.8631 | 0.0 | 0.0 | 0.0865 | 0.0 |
| 0.6644 | 27.78 | 1500 | 0.6568 | 0.2555 | 0.3132 | 0.8074 | nan | 0.7687 | 0.9014 | 0.7631 | 0.8302 | 0.1869 | nan | 0.4841 | 0.4880 | 0.0 | 0.9294 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.8139 | 0.0 | 0.5482 | 0.3042 | 0.0 | nan | 0.0 | 0.1974 | 0.0 | 0.0 | 0.9225 | 0.8543 | 0.9042 | 0.0 | 0.0 | 0.1259 | 0.0 | nan | 0.6723 | 0.8030 | 0.7443 | 0.5873 | 0.1742 | nan | 0.3013 | 0.3813 | 0.0 | 0.7117 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.6159 | 0.0 | 0.3289 | 0.2810 | 0.0 | nan | 0.0 | 0.1295 | 0.0 | 0.0 | 0.8015 | 0.6848 | 0.8665 | 0.0 | 0.0 | 0.0931 | 0.0 |
| 0.6153 | 29.63 | 1600 | 0.6157 | 0.2586 | 0.3131 | 0.8188 | nan | 0.8000 | 0.9242 | 0.7980 | 0.8445 | 0.1758 | nan | 0.4143 | 0.6256 | 0.0 | 0.9155 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0007 | 0.0 | 0.0 | 0.8792 | 0.0 | 0.4465 | 0.2182 | 0.0 | nan | 0.0 | 0.1970 | 0.0 | 0.0 | 0.9111 | 0.8171 | 0.9368 | 0.0 | 0.0 | 0.1136 | 0.0 | nan | 0.6844 | 0.8212 | 0.7565 | 0.6537 | 0.1636 | nan | 0.2857 | 0.4354 | 0.0 | 0.7222 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0007 | 0.0 | 0.0 | 0.6274 | 0.0 | 0.3217 | 0.2147 | 0.0 | nan | 0.0 | 0.1313 | 0.0 | 0.0 | 0.8082 | 0.6809 | 0.8737 | 0.0 | 0.0 | 0.0926 | 0.0 |
| 0.6154 | 31.48 | 1700 | 0.6397 | 0.2621 | 0.3204 | 0.8117 | nan | 0.8357 | 0.8840 | 0.7908 | 0.8465 | 0.2590 | nan | 0.4050 | 0.5401 | 0.0 | 0.9393 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0105 | 0.0 | 0.0 | 0.8169 | 0.0 | 0.4733 | 0.3188 | 0.0 | nan | 0.0 | 0.2505 | 0.0 | 0.0 | 0.9181 | 0.8473 | 0.9287 | 0.0 | 0.0 | 0.1890 | 0.0 | nan | 0.6774 | 0.8042 | 0.7524 | 0.5662 | 0.2300 | nan | 0.2971 | 0.4050 | 0.0 | 0.6970 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0105 | 0.0 | 0.0 | 0.6489 | 0.0 | 0.3454 | 0.3058 | 0.0 | nan | 0.0 | 0.1441 | 0.0 | 0.0 | 0.8074 | 0.6913 | 0.8820 | 0.0 | 0.0 | 0.1224 | 0.0 |
| 0.6305 | 33.33 | 1800 | 0.6131 | 0.2641 | 0.3212 | 0.8194 | nan | 0.8171 | 0.8984 | 0.8212 | 0.8462 | 0.2582 | nan | 0.5051 | 0.5504 | 0.0 | 0.9421 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0221 | 0.0 | 0.0 | 0.8777 | 0.0 | 0.3528 | 0.3169 | 0.0 | nan | 0.0 | 0.2249 | 0.0 | 0.0 | 0.9203 | 0.8499 | 0.9175 | 0.0 | 0.0 | 0.1587 | 0.0 | nan | 0.7209 | 0.8195 | 0.7546 | 0.6166 | 0.2267 | nan | 0.3408 | 0.4000 | 0.0 | 0.6906 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0221 | 0.0 | 0.0 | 0.6055 | 0.0 | 0.2823 | 0.3044 | 0.0 | nan | 0.0 | 0.1545 | 0.0 | 0.0 | 0.8124 | 0.6994 | 0.8799 | 0.0 | 0.0 | 0.1204 | 0.0 |
| 0.6083 | 35.19 | 1900 | 0.6224 | 0.2646 | 0.3182 | 0.8171 | nan | 0.7473 | 0.9297 | 0.7826 | 0.8269 | 0.2162 | nan | 0.4556 | 0.4982 | 0.0 | 0.9169 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0865 | 0.0 | 0.0 | 0.9031 | 0.0 | 0.3618 | 0.3583 | 0.0 | nan | 0.0 | 0.2603 | 0.0 | 0.0 | 0.8966 | 0.8828 | 0.9016 | 0.0 | 0.0 | 0.1587 | 0.0 | nan | 0.6824 | 0.8210 | 0.7645 | 0.5950 | 0.2019 | nan | 0.3166 | 0.3895 | 0.0 | 0.7307 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0853 | 0.0 | 0.0 | 0.6063 | 0.0 | 0.2860 | 0.3200 | 0.0 | nan | 0.0 | 0.1659 | 0.0 | 0.0 | 0.8188 | 0.7017 | 0.8695 | 0.0 | 0.0 | 0.1113 | 0.0 |
| 0.5847 | 37.04 | 2000 | 0.5906 | 0.2713 | 0.3209 | 0.8281 | nan | 0.7374 | 0.9612 | 0.7764 | 0.8195 | 0.2033 | nan | 0.4219 | 0.4950 | 0.0 | 0.9339 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0960 | 0.0 | 0.0 | 0.8434 | 0.0 | 0.4552 | 0.4437 | 0.0 | nan | 0.0 | 0.2250 | 0.0 | 0.0 | 0.9315 | 0.8612 | 0.9071 | 0.0 | 0.0 | 0.1567 | 0.0 | nan | 0.6883 | 0.8311 | 0.7525 | 0.6838 | 0.1851 | nan | 0.3228 | 0.3780 | 0.0 | 0.7236 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0944 | 0.0 | 0.0 | 0.6338 | 0.0 | 0.3408 | 0.3853 | 0.0 | nan | 0.0 | 0.1586 | 0.0 | 0.0 | 0.8104 | 0.6978 | 0.8800 | 0.0 | 0.0 | 0.1162 | 0.0 |
| 0.5764 | 38.89 | 2100 | 0.6088 | 0.2752 | 0.3225 | 0.8255 | nan | 0.7525 | 0.9472 | 0.7709 | 0.8441 | 0.2134 | nan | 0.3932 | 0.5383 | 0.0 | 0.9030 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3470 | 0.0 | 0.0 | 0.9195 | 0.0 | 0.3310 | 0.3215 | 0.0 | nan | 0.0 | 0.2234 | 0.0 | 0.0 | 0.9289 | 0.7964 | 0.9280 | 0.0 | 0.0 | 0.1604 | 0.0 | nan | 0.6993 | 0.8276 | 0.7546 | 0.7234 | 0.1997 | nan | 0.3005 | 0.4222 | 0.0 | 0.7348 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3123 | 0.0 | 0.0 | 0.5918 | 0.0 | 0.2787 | 0.3037 | 0.0 | nan | 0.0 | 0.1585 | 0.0 | 0.0 | 0.8124 | 0.6781 | 0.8844 | 0.0 | 0.0 | 0.1247 | 0.0 |
| 0.5787 | 40.74 | 2200 | 0.5706 | 0.2824 | 0.3351 | 0.8347 | nan | 0.8178 | 0.9369 | 0.8003 | 0.8511 | 0.2352 | nan | 0.4838 | 0.5417 | 0.0 | 0.9025 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3689 | 0.0 | 0.0 | 0.8739 | 0.0 | 0.4493 | 0.4040 | 0.0 | nan | 0.0 | 0.2524 | 0.0 | 0.0 | 0.9422 | 0.8182 | 0.9183 | 0.0 | 0.0 | 0.1276 | 0.0 | nan | 0.7292 | 0.8432 | 0.7669 | 0.6897 | 0.2161 | nan | 0.3484 | 0.4230 | 0.0 | 0.7519 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3045 | 0.0 | 0.0 | 0.6407 | 0.0 | 0.3373 | 0.3491 | 0.0 | nan | 0.0 | 0.1557 | 0.0 | 0.0 | 0.8080 | 0.6803 | 0.8850 | 0.0 | 0.0 | 0.1068 | 0.0 |
| 0.5724 | 42.59 | 2300 | 0.7562 | 0.2740 | 0.3479 | 0.7662 | nan | 0.8734 | 0.7169 | 0.7809 | 0.8847 | 0.2838 | nan | 0.3742 | 0.6758 | 0.0 | 0.9339 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6048 | 0.0 | 0.0 | 0.8535 | 0.0 | 0.4435 | 0.4729 | 0.0 | nan | 0.0 | 0.2817 | 0.0 | 0.0 | 0.9149 | 0.8765 | 0.9329 | 0.0 | 0.0 | 0.2292 | 0.0 | nan | 0.7041 | 0.6683 | 0.7628 | 0.3371 | 0.2575 | nan | 0.2878 | 0.4639 | 0.0 | 0.7454 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4190 | 0.0 | 0.0 | 0.6387 | 0.0 | 0.3357 | 0.3997 | 0.0 | nan | 0.0 | 0.1776 | 0.0 | 0.0 | 0.8183 | 0.7106 | 0.8911 | 0.0 | 0.0 | 0.1516 | 0.0 |
| 0.556 | 44.44 | 2400 | 0.7350 | 0.2665 | 0.3366 | 0.7813 | nan | 0.7897 | 0.7888 | 0.8022 | 0.8878 | 0.2389 | nan | 0.4270 | 0.4859 | 0.0 | 0.9401 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4618 | 0.0 | 0.0 | 0.8866 | 0.0 | 0.3979 | 0.5050 | 0.0 | nan | 0.0 | 0.2580 | 0.0 | 0.0 | 0.9097 | 0.8627 | 0.9337 | 0.0 | 0.0 | 0.1948 | 0.0 | nan | 0.6902 | 0.7286 | 0.7779 | 0.3964 | 0.2231 | nan | 0.3011 | 0.3626 | 0.0 | 0.7078 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3485 | 0.0 | 0.0 | 0.6171 | 0.0 | 0.3044 | 0.3372 | 0.0 | nan | 0.0 | 0.1812 | 0.0 | 0.0 | 0.8195 | 0.7011 | 0.8947 | 0.0 | 0.0 | 0.1378 | 0.0 |
| 0.5599 | 46.3 | 2500 | 0.5949 | 0.2846 | 0.3464 | 0.8215 | nan | 0.7919 | 0.9145 | 0.7935 | 0.8679 | 0.2189 | nan | 0.3795 | 0.5589 | 0.0 | 0.9334 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5627 | 0.0 | 0.0 | 0.8536 | 0.0 | 0.4394 | 0.4730 | 0.0 | nan | 0.0 | 0.3260 | 0.0 | 0.0 | 0.9098 | 0.8344 | 0.9487 | 0.0 | 0.0 | 0.2801 | 0.0 | nan | 0.6901 | 0.8199 | 0.7749 | 0.5729 | 0.2084 | nan | 0.3034 | 0.4321 | 0.0 | 0.7422 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4230 | 0.0 | 0.0 | 0.6491 | 0.0 | 0.3237 | 0.3989 | 0.0 | nan | 0.0 | 0.1963 | 0.0 | 0.0 | 0.8232 | 0.7048 | 0.8949 | 0.0 | 0.0 | 0.1489 | 0.0 |
| 0.5368 | 48.15 | 2600 | 0.6125 | 0.2829 | 0.3502 | 0.8211 | nan | 0.7798 | 0.9034 | 0.7913 | 0.9079 | 0.2587 | nan | 0.3407 | 0.6423 | 0.0 | 0.9351 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6794 | 0.0 | 0.0 | 0.8554 | 0.0 | 0.3996 | 0.4884 | 0.0 | nan | 0.0 | 0.2870 | 0.0 | 0.0 | 0.9271 | 0.8698 | 0.9424 | 0.0 | 0.0 | 0.1992 | 0.0 | nan | 0.6878 | 0.8122 | 0.7578 | 0.5597 | 0.2427 | nan | 0.2680 | 0.4737 | 0.0 | 0.7517 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3649 | 0.0 | 0.0 | 0.6557 | 0.0 | 0.3130 | 0.4117 | 0.0 | nan | 0.0 | 0.1847 | 0.0 | 0.0 | 0.8236 | 0.7137 | 0.8969 | 0.0 | 0.0 | 0.1361 | 0.0 |
| 0.5391 | 50.0 | 2700 | 0.5993 | 0.2877 | 0.3507 | 0.8242 | nan | 0.8174 | 0.8948 | 0.8094 | 0.8896 | 0.2730 | nan | 0.4105 | 0.5570 | 0.0 | 0.9164 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5439 | 0.0 | 0.0 | 0.8772 | 0.0 | 0.5070 | 0.5443 | 0.0 | nan | 0.0 | 0.2691 | 0.0 | 0.0 | 0.9205 | 0.8660 | 0.8975 | 0.0 | 0.0 | 0.2294 | 0.0 | nan | 0.7059 | 0.8214 | 0.7578 | 0.5803 | 0.2537 | nan | 0.2892 | 0.4308 | 0.0 | 0.7548 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4363 | 0.0 | 0.0 | 0.6490 | 0.0 | 0.3579 | 0.4224 | 0.0 | nan | 0.0 | 0.1927 | 0.0 | 0.0 | 0.8239 | 0.7040 | 0.8748 | 0.0 | 0.0 | 0.1516 | 0.0 |
| 0.5041 | 51.85 | 2800 | 0.5912 | 0.2859 | 0.3493 | 0.8264 | nan | 0.7593 | 0.9248 | 0.8029 | 0.8780 | 0.2945 | nan | 0.3718 | 0.6308 | 0.0 | 0.9078 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6667 | 0.0 | 0.0 | 0.8945 | 0.0 | 0.3362 | 0.4834 | 0.0 | nan | 0.0 | 0.3167 | 0.0 | 0.0 | 0.9255 | 0.8641 | 0.9382 | 0.0 | 0.0 | 0.1836 | 0.0 | nan | 0.6993 | 0.8205 | 0.7232 | 0.5789 | 0.2712 | nan | 0.2852 | 0.4872 | 0.0 | 0.7747 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3825 | 0.0 | 0.0 | 0.6382 | 0.0 | 0.2862 | 0.4138 | 0.0 | nan | 0.0 | 0.2019 | 0.0 | 0.0 | 0.8284 | 0.7271 | 0.8984 | 0.0 | 0.0 | 0.1316 | 0.0 |
| 0.5007 | 53.7 | 2900 | 0.6220 | 0.2839 | 0.3577 | 0.8134 | nan | 0.7302 | 0.8903 | 0.8180 | 0.9098 | 0.3134 | nan | 0.3521 | 0.6870 | 0.0 | 0.9429 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7288 | 0.0 | 0.0 | 0.8340 | 0.0 | 0.5169 | 0.4700 | 0.0 | nan | 0.0 | 0.3105 | 0.0 | 0.0 | 0.9356 | 0.8318 | 0.9437 | 0.0 | 0.0003 | 0.2298 | 0.0 | nan | 0.6722 | 0.8034 | 0.7257 | 0.4922 | 0.2900 | nan | 0.2639 | 0.4741 | 0.0 | 0.7434 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4082 | 0.0 | 0.0 | 0.6635 | 0.0 | 0.3690 | 0.4172 | 0.0 | nan | 0.0 | 0.1981 | 0.0 | 0.0 | 0.8205 | 0.6936 | 0.9015 | 0.0 | 0.0003 | 0.1483 | 0.0 |
| 0.4992 | 55.56 | 3000 | 0.5669 | 0.2928 | 0.3647 | 0.8317 | nan | 0.7826 | 0.9171 | 0.8018 | 0.9165 | 0.2758 | nan | 0.5273 | 0.6986 | 0.0 | 0.9410 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6836 | 0.0 | 0.0 | 0.8296 | 0.0 | 0.4717 | 0.4595 | 0.0 | nan | 0.0 | 0.3613 | 0.0 | 0.0 | 0.9272 | 0.8671 | 0.9424 | 0.0 | 0.0017 | 0.2669 | 0.0 | nan | 0.7196 | 0.8377 | 0.7464 | 0.6016 | 0.2573 | nan | 0.3367 | 0.4767 | 0.0 | 0.7565 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4237 | 0.0 | 0.0 | 0.6653 | 0.0 | 0.3438 | 0.4034 | 0.0 | nan | 0.0 | 0.1974 | 0.0 | 0.0 | 0.8287 | 0.7120 | 0.9031 | 0.0 | 0.0017 | 0.1565 | 0.0 |
| 0.5151 | 57.41 | 3100 | 0.6131 | 0.2864 | 0.3598 | 0.8169 | nan | 0.7793 | 0.9005 | 0.7894 | 0.8762 | 0.2508 | nan | 0.3852 | 0.6197 | 0.0 | 0.9316 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6506 | 0.0 | 0.0 | 0.7819 | 0.0 | 0.5348 | 0.5782 | 0.0 | nan | 0.0 | 0.3853 | 0.0 | 0.0 | 0.9211 | 0.8624 | 0.9390 | 0.0 | 0.0 | 0.3278 | 0.0 | nan | 0.6967 | 0.8145 | 0.7436 | 0.5453 | 0.2362 | nan | 0.2992 | 0.4656 | 0.0 | 0.7549 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4221 | 0.0 | 0.0 | 0.6246 | 0.0 | 0.3873 | 0.3923 | 0.0 | nan | 0.0 | 0.1937 | 0.0 | 0.0 | 0.8257 | 0.7204 | 0.8994 | 0.0 | 0.0 | 0.1417 | 0.0 |
| 0.4688 | 59.26 | 3200 | 0.7342 | 0.2674 | 0.3425 | 0.7758 | nan | 0.6724 | 0.8138 | 0.8211 | 0.8881 | 0.2106 | nan | 0.3435 | 0.4240 | 0.0 | 0.9345 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6881 | 0.0 | 0.0 | 0.8684 | 0.0 | 0.4808 | 0.5494 | 0.0 | nan | 0.0 | 0.2968 | 0.0 | 0.0 | 0.9269 | 0.8322 | 0.9291 | 0.0 | 0.0 | 0.2817 | 0.0 | nan | 0.6227 | 0.7395 | 0.7654 | 0.4008 | 0.1990 | nan | 0.2434 | 0.3473 | 0.0 | 0.7526 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3733 | 0.0 | 0.0 | 0.5567 | 0.0 | 0.3425 | 0.4056 | 0.0 | nan | 0.0 | 0.2033 | 0.0 | 0.0 | 0.8238 | 0.7088 | 0.8978 | 0.0 | 0.0 | 0.1748 | 0.0 |
| 0.4657 | 61.11 | 3300 | 0.7162 | 0.2737 | 0.3487 | 0.7884 | nan | 0.6859 | 0.8395 | 0.7919 | 0.8974 | 0.2306 | nan | 0.4086 | 0.6012 | 0.0 | 0.9212 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7186 | 0.0 | 0.0 | 0.8738 | 0.0 | 0.4323 | 0.5271 | 0.0 | nan | 0.0 | 0.3163 | 0.0 | 0.0 | 0.9373 | 0.8107 | 0.9381 | 0.0 | 0.0 | 0.2280 | 0.0 | nan | 0.6253 | 0.7668 | 0.7584 | 0.4350 | 0.2180 | nan | 0.2835 | 0.4646 | 0.0 | 0.7649 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3505 | 0.0 | 0.0 | 0.5817 | 0.0 | 0.3184 | 0.4275 | 0.0 | nan | 0.0 | 0.1989 | 0.0 | 0.0 | 0.8181 | 0.6916 | 0.9021 | 0.0 | 0.0 | 0.1529 | 0.0 |
| 0.4789 | 62.96 | 3400 | 0.6510 | 0.2824 | 0.3535 | 0.8065 | nan | 0.7245 | 0.8835 | 0.7760 | 0.8886 | 0.2720 | nan | 0.3709 | 0.6675 | 0.0 | 0.9351 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6668 | 0.0 | 0.0 | 0.8450 | 0.0 | 0.4917 | 0.5508 | 0.0 | nan | 0.0 | 0.3585 | 0.0 | 0.0 | 0.9367 | 0.7684 | 0.9321 | 0.0 | 0.0022 | 0.2404 | 0.0 | nan | 0.6754 | 0.7938 | 0.7682 | 0.4856 | 0.2514 | nan | 0.2841 | 0.4779 | 0.0 | 0.7566 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3801 | 0.0 | 0.0 | 0.6118 | 0.0 | 0.3623 | 0.4464 | 0.0 | nan | 0.0 | 0.1990 | 0.0 | 0.0 | 0.8150 | 0.6727 | 0.9029 | 0.0 | 0.0022 | 0.1516 | 0.0 |
| 0.4718 | 64.81 | 3500 | 0.7369 | 0.2741 | 0.3491 | 0.7687 | nan | 0.7886 | 0.7455 | 0.8159 | 0.8865 | 0.2585 | nan | 0.3583 | 0.6014 | 0.0 | 0.9362 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6741 | 0.0 | 0.0 | 0.8728 | 0.0 | 0.4488 | 0.5138 | 0.0 | nan | 0.0 | 0.3533 | 0.0 | 0.0 | 0.9343 | 0.8363 | 0.9345 | 0.0 | 0.0002 | 0.2111 | 0.0 | nan | 0.6800 | 0.6730 | 0.7173 | 0.3412 | 0.2406 | nan | 0.2736 | 0.4651 | 0.0 | 0.7688 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3688 | 0.0 | 0.0 | 0.6494 | 0.0 | 0.3507 | 0.4403 | 0.0 | nan | 0.0 | 0.1950 | 0.0 | 0.0 | 0.8287 | 0.7216 | 0.9039 | 0.0 | 0.0002 | 0.1536 | 0.0 |
| 0.4586 | 66.67 | 3600 | 0.7463 | 0.2799 | 0.3515 | 0.7620 | nan | 0.8497 | 0.6965 | 0.7931 | 0.9041 | 0.2737 | nan | 0.3983 | 0.5616 | 0.0 | 0.9365 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5892 | 0.0 | 0.0 | 0.8439 | 0.0 | 0.5213 | 0.4720 | 0.0 | nan | 0.0 | 0.3429 | 0.0 | 0.0 | 0.9332 | 0.8690 | 0.9431 | 0.0 | 0.0 | 0.3213 | 0.0 | nan | 0.7435 | 0.6450 | 0.7808 | 0.3120 | 0.2517 | nan | 0.3134 | 0.4378 | 0.0 | 0.7305 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4349 | 0.0 | 0.0 | 0.6399 | 0.0 | 0.3813 | 0.4243 | 0.0 | nan | 0.0 | 0.2097 | 0.0 | 0.0 | 0.8287 | 0.7225 | 0.9085 | 0.0 | 0.0 | 0.1926 | 0.0 |
| 0.4506 | 68.52 | 3700 | 0.6409 | 0.2859 | 0.3587 | 0.8030 | nan | 0.7887 | 0.8394 | 0.8054 | 0.8912 | 0.2518 | nan | 0.3799 | 0.6292 | 0.0 | 0.9273 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7090 | 0.0 | 0.0 | 0.8655 | 0.0 | 0.4989 | 0.5447 | 0.0 | nan | 0.0 | 0.3519 | 0.0 | 0.0 | 0.9335 | 0.8362 | 0.9278 | 0.0 | 0.0 | 0.2975 | 0.0 | nan | 0.7248 | 0.7574 | 0.7649 | 0.4118 | 0.2326 | nan | 0.2996 | 0.4840 | 0.0 | 0.7856 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3424 | 0.0 | 0.0 | 0.6639 | 0.0 | 0.3766 | 0.4576 | 0.0 | nan | 0.0 | 0.2055 | 0.0 | 0.0 | 0.8284 | 0.7274 | 0.9032 | 0.0 | 0.0 | 0.1823 | 0.0 |
| 0.4659 | 70.37 | 3800 | 0.6466 | 0.2884 | 0.3577 | 0.8081 | nan | 0.8256 | 0.8420 | 0.7982 | 0.8692 | 0.3484 | nan | 0.4035 | 0.4964 | 0.0 | 0.9489 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6461 | 0.0 | 0.0 | 0.8281 | 0.0 | 0.5593 | 0.5404 | 0.0 | nan | 0.0 | 0.3533 | 0.0 | 0.0 | 0.9345 | 0.7861 | 0.9426 | 0.0 | 0.0 | 0.3225 | 0.0 | nan | 0.7403 | 0.7665 | 0.7649 | 0.4456 | 0.2991 | nan | 0.3198 | 0.3976 | 0.0 | 0.7512 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4217 | 0.0 | 0.0 | 0.6537 | 0.0 | 0.3859 | 0.4470 | 0.0 | nan | 0.0 | 0.2219 | 0.0 | 0.0 | 0.8223 | 0.6908 | 0.9109 | 0.0 | 0.0 | 0.1898 | 0.0 |
| 0.4416 | 72.22 | 3900 | 0.6944 | 0.2824 | 0.3648 | 0.7953 | nan | 0.8073 | 0.8044 | 0.8200 | 0.9039 | 0.2713 | nan | 0.4385 | 0.6632 | 0.0 | 0.9435 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7130 | 0.0 | 0.0 | 0.8448 | 0.0 | 0.5050 | 0.5552 | 0.0 | nan | 0.0 | 0.3791 | 0.0 | 0.0 | 0.9316 | 0.8332 | 0.9378 | 0.0 | 0.0047 | 0.3183 | 0.0 | nan | 0.7045 | 0.7445 | 0.6571 | 0.4107 | 0.2536 | nan | 0.3089 | 0.4711 | 0.0 | 0.7504 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3814 | 0.0 | 0.0 | 0.6468 | 0.0 | 0.3800 | 0.4413 | 0.0 | nan | 0.0 | 0.2243 | 0.0 | 0.0 | 0.8294 | 0.7257 | 0.9078 | 0.0 | 0.0047 | 0.1964 | 0.0 |
| 0.4347 | 74.07 | 4000 | 0.5742 | 0.2960 | 0.3615 | 0.8319 | nan | 0.8135 | 0.9088 | 0.8067 | 0.8959 | 0.3006 | nan | 0.3611 | 0.6055 | 0.0 | 0.9354 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6851 | 0.0 | 0.0 | 0.8692 | 0.0 | 0.4956 | 0.5065 | 0.0 | nan | 0.0 | 0.3493 | 0.0 | 0.0 | 0.9264 | 0.8500 | 0.9368 | 0.0 | 0.0018 | 0.3210 | 0.0 | nan | 0.7436 | 0.8254 | 0.7615 | 0.5609 | 0.2797 | nan | 0.3045 | 0.4733 | 0.0 | 0.7745 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4006 | 0.0 | 0.0 | 0.6424 | 0.0 | 0.3800 | 0.4600 | 0.0 | nan | 0.0 | 0.2126 | 0.0 | 0.0 | 0.8296 | 0.7251 | 0.9085 | 0.0 | 0.0018 | 0.1876 | 0.0 |
| 0.4191 | 75.93 | 4100 | 0.6454 | 0.2879 | 0.3671 | 0.8068 | nan | 0.7757 | 0.8432 | 0.8171 | 0.8803 | 0.3169 | nan | 0.4971 | 0.6474 | 0.0 | 0.9274 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7272 | 0.0 | 0.0 | 0.8520 | 0.0 | 0.4847 | 0.5414 | 0.0 | nan | 0.0 | 0.4113 | 0.0 | 0.0 | 0.9400 | 0.8335 | 0.9348 | 0.0 | 0.0167 | 0.3000 | 0.0 | nan | 0.7112 | 0.7615 | 0.6876 | 0.4533 | 0.2904 | nan | 0.3375 | 0.4768 | 0.0 | 0.7857 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3483 | 0.0 | 0.0 | 0.6544 | 0.0 | 0.3636 | 0.4546 | 0.0 | nan | 0.0 | 0.2086 | 0.0 | 0.0 | 0.8293 | 0.7293 | 0.9093 | 0.0 | 0.0165 | 0.1938 | 0.0 |
| 0.4355 | 77.78 | 4200 | 0.5871 | 0.2915 | 0.3601 | 0.8236 | nan | 0.6673 | 0.9324 | 0.8063 | 0.8730 | 0.2988 | nan | 0.5014 | 0.5734 | 0.0 | 0.9480 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6629 | 0.0 | 0.0 | 0.8653 | 0.0 | 0.4649 | 0.5559 | 0.0 | nan | 0.0 | 0.3890 | 0.0 | 0.0 | 0.9183 | 0.8681 | 0.9537 | 0.0 | 0.0088 | 0.2359 | 0.0 | nan | 0.6266 | 0.8175 | 0.7309 | 0.5730 | 0.2746 | nan | 0.3471 | 0.4465 | 0.0 | 0.7567 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4103 | 0.0 | 0.0 | 0.6684 | 0.0 | 0.3482 | 0.4615 | 0.0 | nan | 0.0 | 0.2062 | 0.0 | 0.0 | 0.8356 | 0.7347 | 0.9131 | 0.0 | 0.0088 | 0.1686 | 0.0 |
| 0.431 | 79.63 | 4300 | 0.5778 | 0.2902 | 0.3540 | 0.8266 | nan | 0.8325 | 0.9042 | 0.7971 | 0.8575 | 0.2707 | nan | 0.4318 | 0.5731 | 0.0 | 0.9428 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6701 | 0.0 | 0.0 | 0.8781 | 0.0 | 0.4081 | 0.5480 | 0.0 | nan | 0.0 | 0.3573 | 0.0 | 0.0 | 0.9299 | 0.7480 | 0.9397 | 0.0 | 0.0343 | 0.2046 | 0.0 | nan | 0.7428 | 0.8112 | 0.7719 | 0.5907 | 0.2545 | nan | 0.3259 | 0.4272 | 0.0 | 0.7505 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4255 | 0.0 | 0.0 | 0.6496 | 0.0 | 0.3209 | 0.4384 | 0.0 | nan | 0.0 | 0.2061 | 0.0 | 0.0 | 0.8142 | 0.6646 | 0.9118 | 0.0 | 0.0338 | 0.1477 | 0.0 |
| 0.4105 | 81.48 | 4400 | 0.7355 | 0.2837 | 0.3547 | 0.7802 | nan | 0.8194 | 0.7548 | 0.8125 | 0.9004 | 0.2421 | nan | 0.4411 | 0.5260 | 0.0 | 0.9344 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6628 | 0.0 | 0.0 | 0.9003 | 0.0 | 0.4114 | 0.5457 | 0.0 | nan | 0.0 | 0.3720 | 0.0 | 0.0 | 0.9386 | 0.8336 | 0.9269 | 0.0 | 0.0905 | 0.2364 | 0.0 | nan | 0.7295 | 0.6964 | 0.7754 | 0.3477 | 0.2325 | nan | 0.3336 | 0.4069 | 0.0 | 0.7641 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4284 | 0.0 | 0.0 | 0.6483 | 0.0 | 0.3512 | 0.4444 | 0.0 | nan | 0.0 | 0.2140 | 0.0 | 0.0 | 0.8260 | 0.7200 | 0.9047 | 0.0 | 0.0883 | 0.1667 | 0.0 |
| 0.4102 | 83.33 | 4500 | 0.6431 | 0.2832 | 0.3550 | 0.8023 | nan | 0.6173 | 0.8926 | 0.8233 | 0.8684 | 0.3015 | nan | 0.4774 | 0.5853 | 0.0 | 0.9435 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7118 | 0.0 | 0.0 | 0.8678 | 0.0 | 0.4544 | 0.5288 | 0.0 | nan | 0.0 | 0.3435 | 0.0 | 0.0 | 0.9438 | 0.7934 | 0.9323 | 0.0 | 0.0264 | 0.2495 | 0.0 | nan | 0.5793 | 0.7784 | 0.7849 | 0.5220 | 0.2750 | nan | 0.3433 | 0.4263 | 0.0 | 0.7478 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3651 | 0.0 | 0.0 | 0.6236 | 0.0 | 0.3489 | 0.4347 | 0.0 | nan | 0.0 | 0.2243 | 0.0 | 0.0 | 0.8184 | 0.6879 | 0.9082 | 0.0 | 0.0258 | 0.1674 | 0.0 |
| 0.4172 | 85.19 | 4600 | 0.6988 | 0.2875 | 0.3537 | 0.7940 | nan | 0.7505 | 0.8194 | 0.8168 | 0.9128 | 0.2640 | nan | 0.4022 | 0.4961 | 0.0 | 0.9391 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6453 | 0.0 | 0.0 | 0.8769 | 0.0 | 0.4600 | 0.5182 | 0.0 | nan | 0.0 | 0.3740 | 0.0 | 0.0 | 0.9378 | 0.8263 | 0.9455 | 0.0 | 0.0900 | 0.2436 | 0.0 | nan | 0.7048 | 0.7401 | 0.7654 | 0.3938 | 0.2454 | nan | 0.2874 | 0.3973 | 0.0 | 0.7572 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4779 | 0.0 | 0.0 | 0.6427 | 0.0 | 0.3531 | 0.4565 | 0.0 | nan | 0.0 | 0.2402 | 0.0 | 0.0 | 0.8333 | 0.7320 | 0.9149 | 0.0 | 0.0880 | 0.1706 | 0.0 |
| 0.3885 | 87.04 | 4700 | 0.5978 | 0.2953 | 0.3647 | 0.8175 | nan | 0.8142 | 0.8718 | 0.8027 | 0.8554 | 0.3059 | nan | 0.3787 | 0.5867 | 0.0 | 0.9403 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6845 | 0.0 | 0.0 | 0.8471 | 0.0 | 0.5315 | 0.5788 | 0.0 | nan | 0.0 | 0.3874 | 0.0 | 0.0 | 0.9354 | 0.8156 | 0.9494 | 0.0 | 0.1221 | 0.2636 | 0.0 | nan | 0.7263 | 0.7825 | 0.7874 | 0.4784 | 0.2859 | nan | 0.2981 | 0.4480 | 0.0 | 0.7604 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3820 | 0.0 | 0.0 | 0.6694 | 0.0 | 0.3781 | 0.4545 | 0.0 | nan | 0.0 | 0.2385 | 0.0 | 0.0 | 0.8301 | 0.7216 | 0.9144 | 0.0 | 0.1131 | 0.1798 | 0.0 |
| 0.3949 | 88.89 | 4800 | 0.5747 | 0.2961 | 0.3643 | 0.8282 | nan | 0.8129 | 0.8976 | 0.8121 | 0.8713 | 0.2894 | nan | 0.4694 | 0.5562 | 0.0 | 0.9391 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6947 | 0.0 | 0.0 | 0.8395 | 0.0 | 0.5260 | 0.5481 | 0.0 | nan | 0.0 | 0.3852 | 0.0 | 0.0 | 0.9428 | 0.8221 | 0.9365 | 0.0 | 0.0559 | 0.2580 | 0.0 | nan | 0.7394 | 0.8130 | 0.7924 | 0.5533 | 0.2658 | nan | 0.3447 | 0.4378 | 0.0 | 0.7620 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3851 | 0.0 | 0.0 | 0.6633 | 0.0 | 0.3722 | 0.4533 | 0.0 | nan | 0.0 | 0.2184 | 0.0 | 0.0 | 0.8217 | 0.7122 | 0.9124 | 0.0 | 0.0534 | 0.1742 | 0.0 |
| 0.4158 | 90.74 | 4900 | 0.6449 | 0.2916 | 0.3657 | 0.8070 | nan | 0.8043 | 0.8271 | 0.8157 | 0.9192 | 0.3073 | nan | 0.4380 | 0.6344 | 0.0 | 0.9340 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7171 | 0.0 | 0.0 | 0.8572 | 0.0 | 0.5188 | 0.5406 | 0.0 | nan | 0.0 | 0.3852 | 0.0 | 0.0 | 0.9420 | 0.8552 | 0.9459 | 0.0 | 0.0450 | 0.2148 | 0.0 | nan | 0.6975 | 0.7564 | 0.7902 | 0.4563 | 0.2853 | nan | 0.3171 | 0.4654 | 0.0 | 0.7879 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3571 | 0.0 | 0.0 | 0.6623 | 0.0 | 0.3819 | 0.4583 | 0.0 | nan | 0.0 | 0.2243 | 0.0 | 0.0 | 0.8302 | 0.7431 | 0.9150 | 0.0 | 0.0421 | 0.1602 | 0.0 |
| 0.3856 | 92.59 | 5000 | 0.7492 | 0.2796 | 0.3559 | 0.7680 | nan | 0.8020 | 0.7250 | 0.8248 | 0.9139 | 0.2500 | nan | 0.3621 | 0.5930 | 0.0 | 0.9411 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6964 | 0.0 | 0.0 | 0.9036 | 0.0 | 0.3460 | 0.5234 | 0.0 | nan | 0.0 | 0.4271 | 0.0 | 0.0 | 0.9255 | 0.8871 | 0.9524 | 0.0 | 0.0666 | 0.2471 | 0.0 | nan | 0.6954 | 0.6697 | 0.7878 | 0.3256 | 0.2365 | nan | 0.2864 | 0.4452 | 0.0 | 0.7724 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3838 | 0.0 | 0.0 | 0.6413 | 0.0 | 0.2968 | 0.4239 | 0.0 | nan | 0.0 | 0.2271 | 0.0 | 0.0 | 0.8382 | 0.7554 | 0.9171 | 0.0 | 0.0624 | 0.1808 | 0.0 |
| 0.3915 | 94.44 | 5100 | 0.6402 | 0.2893 | 0.3608 | 0.8012 | nan | 0.7614 | 0.8406 | 0.7898 | 0.9029 | 0.3080 | nan | 0.3857 | 0.6328 | 0.0 | 0.9373 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7010 | 0.0 | 0.0 | 0.8626 | 0.0 | 0.5045 | 0.5235 | 0.0 | nan | 0.0 | 0.3802 | 0.0 | 0.0 | 0.9442 | 0.7561 | 0.9401 | 0.0 | 0.1133 | 0.2603 | 0.0 | nan | 0.6850 | 0.7546 | 0.7750 | 0.4451 | 0.2827 | nan | 0.3049 | 0.4715 | 0.0 | 0.7694 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3810 | 0.0 | 0.0 | 0.6626 | 0.0 | 0.3832 | 0.4394 | 0.0 | nan | 0.0 | 0.2214 | 0.0 | 0.0 | 0.8125 | 0.6725 | 0.9138 | 0.0 | 0.1034 | 0.1797 | 0.0 |
| 0.3732 | 96.3 | 5200 | 0.7308 | 0.2840 | 0.3598 | 0.7795 | nan | 0.7534 | 0.7741 | 0.8137 | 0.9035 | 0.2614 | nan | 0.4308 | 0.6431 | 0.0 | 0.9315 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7293 | 0.0 | 0.0 | 0.8884 | 0.0 | 0.4166 | 0.5225 | 0.0 | nan | 0.0 | 0.3992 | 0.0 | 0.0 | 0.9329 | 0.8517 | 0.9519 | 0.0 | 0.0756 | 0.2354 | 0.0 | nan | 0.6723 | 0.6942 | 0.7836 | 0.3665 | 0.2474 | nan | 0.3333 | 0.4669 | 0.0 | 0.7857 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3545 | 0.0 | 0.0 | 0.6375 | 0.0 | 0.3443 | 0.4311 | 0.0 | nan | 0.0 | 0.2377 | 0.0 | 0.0 | 0.8346 | 0.7428 | 0.9173 | 0.0 | 0.0659 | 0.1722 | 0.0 |
| 0.3843 | 98.15 | 5300 | 0.6580 | 0.2864 | 0.3556 | 0.7962 | nan | 0.7254 | 0.8440 | 0.7996 | 0.8889 | 0.2696 | nan | 0.4320 | 0.6399 | 0.0 | 0.9285 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6708 | 0.0 | 0.0 | 0.8872 | 0.0 | 0.4070 | 0.5262 | 0.0 | nan | 0.0 | 0.3791 | 0.0 | 0.0 | 0.9423 | 0.7462 | 0.9487 | 0.0 | 0.1269 | 0.2159 | 0.0 | nan | 0.6660 | 0.7540 | 0.7836 | 0.4484 | 0.2521 | nan | 0.3307 | 0.4691 | 0.0 | 0.7963 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3896 | 0.0 | 0.0 | 0.6071 | 0.0 | 0.3185 | 0.4568 | 0.0 | nan | 0.0 | 0.2206 | 0.0 | 0.0 | 0.8138 | 0.6608 | 0.9170 | 0.0 | 0.1163 | 0.1644 | 0.0 |
| 0.3903 | 100.0 | 5400 | 0.6288 | 0.2881 | 0.3541 | 0.8086 | nan | 0.7763 | 0.8567 | 0.8240 | 0.8951 | 0.2446 | nan | 0.4334 | 0.5553 | 0.0 | 0.9354 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6738 | 0.0 | 0.0 | 0.8901 | 0.0 | 0.4777 | 0.5458 | 0.0 | nan | 0.0 | 0.3297 | 0.0 | 0.0 | 0.9417 | 0.7702 | 0.9457 | 0.0 | 0.0457 | 0.1907 | 0.0 | nan | 0.6906 | 0.7727 | 0.7923 | 0.4705 | 0.2358 | nan | 0.3295 | 0.4509 | 0.0 | 0.7755 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3981 | 0.0 | 0.0 | 0.6528 | 0.0 | 0.3644 | 0.4573 | 0.0 | nan | 0.0 | 0.2197 | 0.0 | 0.0 | 0.8176 | 0.6797 | 0.9157 | 0.0 | 0.0444 | 0.1500 | 0.0 |
| 0.355 | 101.85 | 5500 | 0.7112 | 0.2860 | 0.3563 | 0.7844 | nan | 0.7834 | 0.7947 | 0.8123 | 0.8807 | 0.2262 | nan | 0.3408 | 0.6020 | 0.0 | 0.9382 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6759 | 0.0 | 0.0 | 0.8838 | 0.0 | 0.4491 | 0.5845 | 0.0 | nan | 0.0 | 0.4029 | 0.0 | 0.0 | 0.9295 | 0.7890 | 0.9477 | 0.0 | 0.1045 | 0.2564 | 0.0 | nan | 0.7086 | 0.7078 | 0.7825 | 0.3607 | 0.2168 | nan | 0.2792 | 0.4624 | 0.0 | 0.7767 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4366 | 0.0 | 0.0 | 0.6667 | 0.0 | 0.3443 | 0.4351 | 0.0 | nan | 0.0 | 0.2386 | 0.0 | 0.0 | 0.8283 | 0.7060 | 0.9167 | 0.0 | 0.1000 | 0.1847 | 0.0 |
| 0.3729 | 103.7 | 5600 | 0.6849 | 0.2835 | 0.3591 | 0.7887 | nan | 0.8150 | 0.7790 | 0.8122 | 0.8834 | 0.2787 | nan | 0.4506 | 0.6270 | 0.0 | 0.9253 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7408 | 0.0 | 0.0 | 0.9180 | 0.0 | 0.3273 | 0.5197 | 0.0 | nan | 0.0 | 0.4167 | 0.0 | 0.0 | 0.9358 | 0.8379 | 0.9406 | 0.0 | 0.0480 | 0.2345 | 0.0 | nan | 0.6989 | 0.7189 | 0.7862 | 0.3939 | 0.2648 | nan | 0.3292 | 0.4851 | 0.0 | 0.7976 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3286 | 0.0 | 0.0 | 0.6202 | 0.0 | 0.2779 | 0.4371 | 0.0 | nan | 0.0 | 0.2402 | 0.0 | 0.0 | 0.8321 | 0.7297 | 0.9140 | 0.0 | 0.0437 | 0.1749 | 0.0 |
| 0.3895 | 105.56 | 5700 | 0.6917 | 0.2909 | 0.3669 | 0.7881 | nan | 0.8520 | 0.7575 | 0.8037 | 0.9006 | 0.2858 | nan | 0.4909 | 0.6331 | 0.0 | 0.9365 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6811 | 0.0 | 0.0 | 0.8525 | 0.0 | 0.5087 | 0.5374 | 0.0 | nan | 0.0 | 0.3766 | 0.0 | 0.0 | 0.9432 | 0.8426 | 0.9479 | 0.0 | 0.0982 | 0.2931 | 0.0 | nan | 0.7338 | 0.7000 | 0.7834 | 0.3764 | 0.2683 | nan | 0.3430 | 0.4719 | 0.0 | 0.7841 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3792 | 0.0 | 0.0 | 0.6627 | 0.0 | 0.3815 | 0.4454 | 0.0 | nan | 0.0 | 0.2245 | 0.0 | 0.0 | 0.8273 | 0.7311 | 0.9183 | 0.0 | 0.0894 | 0.1885 | 0.0 |
| 0.3602 | 107.41 | 5800 | 0.5475 | 0.3042 | 0.3685 | 0.8353 | nan | 0.7641 | 0.9319 | 0.8055 | 0.8737 | 0.3132 | nan | 0.4868 | 0.6244 | 0.0 | 0.9407 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6873 | 0.0 | 0.0 | 0.8810 | 0.0 | 0.4631 | 0.5387 | 0.0 | nan | 0.0 | 0.4382 | 0.0 | 0.0 | 0.9298 | 0.7866 | 0.9486 | 0.0 | 0.1344 | 0.2454 | 0.0 | nan | 0.7121 | 0.8270 | 0.7806 | 0.6491 | 0.2900 | nan | 0.3497 | 0.4700 | 0.0 | 0.7753 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4480 | 0.0 | 0.0 | 0.6577 | 0.0 | 0.3509 | 0.4582 | 0.0 | nan | 0.0 | 0.2281 | 0.0 | 0.0 | 0.8267 | 0.6946 | 0.9179 | 0.0 | 0.1213 | 0.1782 | 0.0 |
| 0.3674 | 109.26 | 5900 | 0.6421 | 0.2919 | 0.3540 | 0.8016 | nan | 0.6932 | 0.8577 | 0.8144 | 0.9018 | 0.3136 | nan | 0.3961 | 0.5655 | 0.0 | 0.9370 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6563 | 0.0 | 0.0 | 0.9140 | 0.0 | 0.3656 | 0.4891 | 0.0 | nan | 0.0 | 0.3775 | 0.0 | 0.0 | 0.9373 | 0.8204 | 0.9427 | 0.0 | 0.1378 | 0.2090 | 0.0 | nan | 0.6366 | 0.7503 | 0.7829 | 0.4541 | 0.2884 | nan | 0.3050 | 0.4442 | 0.0 | 0.7727 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4780 | 0.0 | 0.0 | 0.6644 | 0.0 | 0.3163 | 0.4511 | 0.0 | nan | 0.0 | 0.2316 | 0.0 | 0.0 | 0.8321 | 0.7257 | 0.9157 | 0.0 | 0.1268 | 0.1636 | 0.0 |
| 0.3657 | 111.11 | 6000 | 0.5813 | 0.2955 | 0.3637 | 0.8277 | nan | 0.7870 | 0.8975 | 0.7014 | 0.8566 | 0.3741 | nan | 0.4469 | 0.6219 | 0.0 | 0.9403 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7185 | 0.0 | 0.0 | 0.8827 | 0.0 | 0.4503 | 0.5681 | 0.0 | nan | 0.0 | 0.3815 | 0.0 | 0.0 | 0.9397 | 0.8275 | 0.9484 | 0.0 | 0.0968 | 0.1999 | 0.0 | nan | 0.7203 | 0.8097 | 0.6881 | 0.5693 | 0.3405 | nan | 0.3293 | 0.4754 | 0.0 | 0.7846 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3863 | 0.0 | 0.0 | 0.6346 | 0.0 | 0.3557 | 0.4385 | 0.0 | nan | 0.0 | 0.2181 | 0.0 | 0.0 | 0.8287 | 0.7172 | 0.9189 | 0.0 | 0.0846 | 0.1578 | 0.0 |
| 0.367 | 112.96 | 6100 | 0.6609 | 0.2897 | 0.3661 | 0.7984 | nan | 0.7903 | 0.8284 | 0.8039 | 0.9016 | 0.2212 | nan | 0.4163 | 0.6816 | 0.0 | 0.9453 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7209 | 0.0 | 0.0 | 0.8372 | 0.0 | 0.4577 | 0.5511 | 0.0 | nan | 0.0 | 0.4283 | 0.0 | 0.0 | 0.9390 | 0.7875 | 0.9493 | 0.0 | 0.1399 | 0.3157 | 0.0 | nan | 0.7203 | 0.7408 | 0.7738 | 0.4105 | 0.2117 | nan | 0.3182 | 0.4784 | 0.0 | 0.7828 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3859 | 0.0 | 0.0 | 0.6672 | 0.0 | 0.3588 | 0.4378 | 0.0 | nan | 0.0 | 0.2244 | 0.0 | 0.0 | 0.8282 | 0.7032 | 0.9187 | 0.0 | 0.1137 | 0.1958 | 0.0 |
| 0.3638 | 114.81 | 6200 | 0.7997 | 0.2803 | 0.3592 | 0.7547 | nan | 0.8092 | 0.6782 | 0.8102 | 0.9284 | 0.2905 | nan | 0.3691 | 0.6185 | 0.0 | 0.9403 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7520 | 0.0 | 0.0 | 0.8609 | 0.0 | 0.4178 | 0.5567 | 0.0 | nan | 0.0 | 0.3931 | 0.0 | 0.0 | 0.9474 | 0.8770 | 0.9435 | 0.0000 | 0.0667 | 0.2347 | 0.0 | nan | 0.7091 | 0.6261 | 0.7837 | 0.2942 | 0.2753 | nan | 0.2928 | 0.4552 | 0.0 | 0.7808 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3801 | 0.0 | 0.0 | 0.6648 | 0.0 | 0.3421 | 0.4315 | 0.0 | nan | 0.0 | 0.2152 | 0.0 | 0.0 | 0.8297 | 0.7448 | 0.9168 | 0.0000 | 0.0595 | 0.1680 | 0.0 |
| 0.3654 | 116.67 | 6300 | 0.6019 | 0.2956 | 0.3645 | 0.8175 | nan | 0.8244 | 0.8533 | 0.6788 | 0.8927 | 0.3058 | nan | 0.4950 | 0.6003 | 0.0 | 0.9396 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6930 | 0.0 | 0.0 | 0.8964 | 0.0 | 0.3647 | 0.5196 | 0.0 | nan | 0.0 | 0.4113 | 0.0 | 0.0 | 0.9257 | 0.8551 | 0.9594 | 0.0 | 0.1310 | 0.3167 | 0.0 | nan | 0.7337 | 0.7732 | 0.6601 | 0.4748 | 0.2853 | nan | 0.3520 | 0.4685 | 0.0 | 0.7868 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4121 | 0.0 | 0.0 | 0.6708 | 0.0 | 0.3117 | 0.4434 | 0.0 | nan | 0.0 | 0.2326 | 0.0 | 0.0 | 0.8405 | 0.7541 | 0.9187 | 0.0 | 0.1205 | 0.2201 | 0.0 |
| 0.3652 | 118.52 | 6400 | 0.5981 | 0.2967 | 0.3649 | 0.8205 | nan | 0.7551 | 0.8909 | 0.6342 | 0.9054 | 0.3093 | nan | 0.4234 | 0.6313 | 0.0 | 0.9387 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6751 | 0.0 | 0.0 | 0.8700 | 0.0 | 0.4187 | 0.5633 | 0.0 | nan | 0.0 | 0.4465 | 0.0 | 0.0 | 0.9262 | 0.8528 | 0.9534 | 0.0002 | 0.1437 | 0.3398 | 0.0 | nan | 0.6956 | 0.7948 | 0.6246 | 0.4963 | 0.2861 | nan | 0.3171 | 0.4870 | 0.0 | 0.7941 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4467 | 0.0 | 0.0 | 0.6719 | 0.0 | 0.3338 | 0.4473 | 0.0 | nan | 0.0 | 0.2377 | 0.0 | 0.0 | 0.8417 | 0.7531 | 0.9198 | 0.0002 | 0.1302 | 0.2180 | 0.0 |
| 0.3559 | 120.37 | 6500 | 0.5780 | 0.3026 | 0.3668 | 0.8256 | nan | 0.7517 | 0.9024 | 0.8103 | 0.8905 | 0.3788 | nan | 0.3990 | 0.5648 | 0.0 | 0.9522 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6491 | 0.0 | 0.0 | 0.8623 | 0.0 | 0.5208 | 0.5227 | 0.0 | nan | 0.0 | 0.4095 | 0.0 | 0.0 | 0.9315 | 0.8073 | 0.9531 | 0.0 | 0.1367 | 0.2937 | 0.0 | nan | 0.6917 | 0.8084 | 0.7831 | 0.5645 | 0.3365 | nan | 0.3195 | 0.4446 | 0.0 | 0.7603 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4620 | 0.0 | 0.0 | 0.6310 | 0.0 | 0.3859 | 0.4599 | 0.0 | nan | 0.0 | 0.2286 | 0.0 | 0.0 | 0.8329 | 0.7236 | 0.9192 | 0.0 | 0.1259 | 0.2064 | 0.0 |
| 0.3348 | 122.22 | 6600 | 0.5522 | 0.3023 | 0.3735 | 0.8379 | nan | 0.8289 | 0.9088 | 0.6882 | 0.8947 | 0.3594 | nan | 0.4373 | 0.6918 | 0.0 | 0.9448 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7098 | 0.0 | 0.0 | 0.8356 | 0.0 | 0.5156 | 0.5832 | 0.0 | nan | 0.0 | 0.4059 | 0.0 | 0.0 | 0.9417 | 0.8359 | 0.9578 | 0.0009 | 0.1308 | 0.2812 | 0.0 | nan | 0.7433 | 0.8257 | 0.6716 | 0.5930 | 0.3306 | nan | 0.3517 | 0.4956 | 0.0 | 0.7897 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3747 | 0.0 | 0.0 | 0.6736 | 0.0 | 0.3802 | 0.4271 | 0.0 | nan | 0.0 | 0.2180 | 0.0 | 0.0 | 0.8323 | 0.7373 | 0.9200 | 0.0008 | 0.1171 | 0.1906 | 0.0 |
| 0.3653 | 124.07 | 6700 | 0.6070 | 0.2986 | 0.3679 | 0.8216 | nan | 0.6919 | 0.9133 | 0.8114 | 0.8786 | 0.3306 | nan | 0.4558 | 0.6517 | 0.0 | 0.9455 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7183 | 0.0 | 0.0 | 0.8672 | 0.0 | 0.5019 | 0.5472 | 0.0 | nan | 0.0 | 0.4162 | 0.0 | 0.0 | 0.9390 | 0.8019 | 0.9414 | 0.0 | 0.0957 | 0.2664 | 0.0 | nan | 0.6394 | 0.8000 | 0.7821 | 0.6011 | 0.3025 | nan | 0.3359 | 0.4969 | 0.0 | 0.7887 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3803 | 0.0 | 0.0 | 0.6386 | 0.0 | 0.3855 | 0.4427 | 0.0 | nan | 0.0 | 0.2268 | 0.0 | 0.0 | 0.8298 | 0.7136 | 0.9170 | 0.0 | 0.0886 | 0.1861 | 0.0 |
| 0.3216 | 125.93 | 6800 | 0.6091 | 0.3003 | 0.3729 | 0.8176 | nan | 0.8300 | 0.8429 | 0.8233 | 0.9193 | 0.3587 | nan | 0.4900 | 0.6837 | 0.0 | 0.9439 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7272 | 0.0 | 0.0 | 0.8781 | 0.0 | 0.4143 | 0.5307 | 0.0 | nan | 0.0 | 0.4051 | 0.0116 | 0.0 | 0.9314 | 0.8400 | 0.9539 | 0.0 | 0.0921 | 0.2558 | 0.0 | nan | 0.7584 | 0.7706 | 0.7892 | 0.4626 | 0.3268 | nan | 0.3678 | 0.5054 | 0.0 | 0.7811 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3947 | 0.0 | 0.0 | 0.6604 | 0.0 | 0.3306 | 0.4515 | 0.0 | nan | 0.0 | 0.2265 | 0.0116 | 0.0 | 0.8386 | 0.7409 | 0.9204 | 0.0 | 0.0850 | 0.1887 | 0.0 |
| 0.358 | 127.78 | 6900 | 0.5287 | 0.3110 | 0.3729 | 0.8465 | nan | 0.8062 | 0.9359 | 0.8173 | 0.8927 | 0.3346 | nan | 0.4527 | 0.6392 | 0.0 | 0.9354 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6945 | 0.0 | 0.0 | 0.8722 | 0.0 | 0.4896 | 0.5317 | 0.0 | nan | 0.0 | 0.4070 | 0.0 | 0.0 | 0.9436 | 0.8467 | 0.9449 | 0.0 | 0.1243 | 0.2646 | 0.0 | nan | 0.7567 | 0.8356 | 0.7873 | 0.6388 | 0.3087 | nan | 0.3575 | 0.4948 | 0.0 | 0.7958 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4146 | 0.0 | 0.0 | 0.6798 | 0.0 | 0.3797 | 0.4630 | 0.0 | nan | 0.0 | 0.2283 | 0.0 | 0.0 | 0.8356 | 0.7467 | 0.9182 | 0.0 | 0.1175 | 0.1940 | 0.0 |
| 0.3402 | 129.63 | 7000 | 0.6208 | 0.2946 | 0.3637 | 0.8141 | nan | 0.7658 | 0.8754 | 0.8158 | 0.9118 | 0.2322 | nan | 0.4017 | 0.6637 | 0.0 | 0.9438 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6933 | 0.0 | 0.0 | 0.8763 | 0.0 | 0.3895 | 0.5601 | 0.0 | nan | 0.0 | 0.4252 | 0.0043 | 0.0 | 0.9423 | 0.7810 | 0.9448 | 0.0000 | 0.1253 | 0.2865 | 0.0 | nan | 0.7060 | 0.7779 | 0.7885 | 0.4813 | 0.2236 | nan | 0.3133 | 0.4921 | 0.0 | 0.7863 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4236 | 0.0 | 0.0 | 0.6817 | 0.0 | 0.3292 | 0.4440 | 0.0 | nan | 0.0 | 0.2236 | 0.0043 | 0.0 | 0.8247 | 0.6964 | 0.9178 | 0.0000 | 0.1163 | 0.1976 | 0.0 |
| 0.3218 | 131.48 | 7100 | 0.5444 | 0.3108 | 0.3748 | 0.8443 | nan | 0.8296 | 0.9244 | 0.8276 | 0.8878 | 0.2774 | nan | 0.4782 | 0.6750 | 0.0 | 0.9366 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6983 | 0.0 | 0.0 | 0.8664 | 0.0 | 0.4743 | 0.5451 | 0.0 | nan | 0.0 | 0.4187 | 0.0113 | 0.0 | 0.9391 | 0.8642 | 0.9558 | 0.0 | 0.1166 | 0.2684 | 0.0 | nan | 0.7636 | 0.8260 | 0.7984 | 0.6281 | 0.2647 | nan | 0.3705 | 0.5066 | 0.0 | 0.8001 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4217 | 0.0 | 0.0 | 0.6783 | 0.0 | 0.3686 | 0.4581 | 0.0 | nan | 0.0 | 0.2178 | 0.0113 | 0.0 | 0.8396 | 0.7666 | 0.9213 | 0.0 | 0.1113 | 0.1943 | 0.0 |
| 0.3413 | 133.33 | 7200 | 0.5473 | 0.3063 | 0.3680 | 0.8412 | nan | 0.8038 | 0.9272 | 0.7396 | 0.8885 | 0.2742 | nan | 0.4489 | 0.5761 | 0.0 | 0.9434 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6970 | 0.0 | 0.0 | 0.8722 | 0.0 | 0.5185 | 0.5545 | 0.0 | nan | 0.0 | 0.4060 | 0.0241 | 0.0 | 0.9384 | 0.8611 | 0.9453 | 0.0 | 0.1082 | 0.2489 | 0.0 | nan | 0.7450 | 0.8245 | 0.7280 | 0.6104 | 0.2595 | nan | 0.3532 | 0.4660 | 0.0 | 0.7846 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4313 | 0.0 | 0.0 | 0.6807 | 0.0 | 0.3896 | 0.4684 | 0.0 | nan | 0.0 | 0.2284 | 0.0241 | 0.0 | 0.8397 | 0.7610 | 0.9186 | 0.0 | 0.1022 | 0.1871 | 0.0 |
| 0.3463 | 135.19 | 7300 | 0.6341 | 0.2922 | 0.3603 | 0.8106 | nan | 0.8087 | 0.8519 | 0.8052 | 0.9145 | 0.2425 | nan | 0.3711 | 0.5676 | 0.0 | 0.9336 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7046 | 0.0 | 0.0 | 0.8888 | 0.0 | 0.3923 | 0.5815 | 0.0 | nan | 0.0 | 0.4055 | 0.0319 | 0.0 | 0.9344 | 0.8036 | 0.9503 | 0.0 | 0.1152 | 0.2276 | 0.0 | nan | 0.7410 | 0.7674 | 0.7870 | 0.4522 | 0.2330 | nan | 0.3152 | 0.4495 | 0.0 | 0.7851 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4247 | 0.0 | 0.0 | 0.6553 | 0.0 | 0.3108 | 0.4330 | 0.0 | nan | 0.0 | 0.2290 | 0.0319 | 0.0 | 0.8273 | 0.7106 | 0.9198 | 0.0 | 0.1051 | 0.1720 | 0.0 |
| 0.317 | 137.04 | 7400 | 0.5689 | 0.2996 | 0.3673 | 0.8346 | nan | 0.8380 | 0.9048 | 0.7202 | 0.8874 | 0.2300 | nan | 0.4682 | 0.6001 | 0.0 | 0.9282 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7278 | 0.0 | 0.0 | 0.8811 | 0.0 | 0.4430 | 0.5714 | 0.0 | nan | 0.0 | 0.4115 | 0.0148 | 0.0 | 0.9311 | 0.8477 | 0.9517 | 0.0 | 0.1019 | 0.2961 | 0.0 | nan | 0.7600 | 0.8107 | 0.7092 | 0.5843 | 0.2243 | nan | 0.3634 | 0.4741 | 0.0 | 0.7839 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3683 | 0.0 | 0.0 | 0.6667 | 0.0 | 0.3433 | 0.4519 | 0.0 | nan | 0.0 | 0.2331 | 0.0148 | 0.0 | 0.8387 | 0.7448 | 0.9201 | 0.0 | 0.0930 | 0.2020 | 0.0 |
| 0.3241 | 138.89 | 7500 | 0.5921 | 0.3030 | 0.3698 | 0.8264 | nan | 0.7560 | 0.9038 | 0.8054 | 0.8993 | 0.2921 | nan | 0.4358 | 0.6497 | 0.0 | 0.9426 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6843 | 0.0 | 0.0 | 0.8596 | 0.0 | 0.4666 | 0.5531 | 0.0 | nan | 0.0014 | 0.4125 | 0.0280 | 0.0 | 0.9419 | 0.8345 | 0.9468 | 0.0005 | 0.1478 | 0.2726 | 0.0 | nan | 0.6935 | 0.8021 | 0.7869 | 0.5437 | 0.2719 | nan | 0.3428 | 0.4933 | 0.0 | 0.7917 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4134 | 0.0 | 0.0 | 0.6707 | 0.0 | 0.3632 | 0.4528 | 0.0 | nan | 0.0014 | 0.2150 | 0.0280 | 0.0 | 0.8367 | 0.7422 | 0.9203 | 0.0005 | 0.1346 | 0.1914 | 0.0 |
| 0.3341 | 140.74 | 7600 | 0.5641 | 0.3038 | 0.3702 | 0.8325 | nan | 0.7624 | 0.9172 | 0.8114 | 0.8959 | 0.2940 | nan | 0.5063 | 0.6105 | 0.0 | 0.9434 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7179 | 0.0 | 0.0 | 0.8732 | 0.0 | 0.5230 | 0.5420 | 0.0 | nan | 0.0 | 0.4148 | 0.0425 | 0.0 | 0.9411 | 0.7719 | 0.9528 | 0.0 | 0.0840 | 0.2431 | 0.0 | nan | 0.7064 | 0.8174 | 0.7877 | 0.6132 | 0.2760 | nan | 0.3594 | 0.4823 | 0.0 | 0.7859 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4116 | 0.0 | 0.0 | 0.6715 | 0.0 | 0.3953 | 0.4613 | 0.0 | nan | 0.0 | 0.2236 | 0.0425 | 0.0 | 0.8241 | 0.6840 | 0.9219 | 0.0 | 0.0790 | 0.1794 | 0.0 |
| 0.3135 | 142.59 | 7700 | 0.5712 | 0.3062 | 0.3709 | 0.8300 | nan | 0.7952 | 0.8986 | 0.8100 | 0.8619 | 0.3084 | nan | 0.4715 | 0.6006 | 0.0 | 0.9439 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6837 | 0.0 | 0.0 | 0.8669 | 0.0 | 0.5083 | 0.5475 | 0.0 | nan | 0.0 | 0.4053 | 0.0384 | 0.0 | 0.9443 | 0.8124 | 0.9524 | 0.0 | 0.1181 | 0.3029 | 0.0 | nan | 0.7270 | 0.8042 | 0.7907 | 0.5385 | 0.2877 | nan | 0.3610 | 0.4689 | 0.0 | 0.7784 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4431 | 0.0 | 0.0 | 0.6764 | 0.0 | 0.3905 | 0.4659 | 0.0 | nan | 0.0 | 0.2280 | 0.0384 | 0.0 | 0.8312 | 0.7224 | 0.9227 | 0.0 | 0.1114 | 0.2117 | 0.0 |
| 0.2985 | 144.44 | 7800 | 0.5705 | 0.3063 | 0.3739 | 0.8331 | nan | 0.7844 | 0.9061 | 0.8011 | 0.8987 | 0.3105 | nan | 0.4674 | 0.6336 | 0.0 | 0.9448 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7174 | 0.0 | 0.0 | 0.8645 | 0.0 | 0.4836 | 0.5414 | 0.0 | nan | 0.0 | 0.4277 | 0.0445 | 0.0 | 0.9390 | 0.8448 | 0.9518 | 0.0003 | 0.1004 | 0.3014 | 0.0 | nan | 0.7238 | 0.8110 | 0.7871 | 0.5506 | 0.2869 | nan | 0.3545 | 0.4901 | 0.0 | 0.7879 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4047 | 0.0 | 0.0 | 0.6872 | 0.0 | 0.3776 | 0.4572 | 0.0 | nan | 0.0 | 0.2263 | 0.0445 | 0.0 | 0.8392 | 0.7464 | 0.9226 | 0.0003 | 0.0950 | 0.2101 | 0.0 |
| 0.3083 | 146.3 | 7900 | 0.6255 | 0.3029 | 0.3735 | 0.8173 | nan | 0.7919 | 0.8576 | 0.8118 | 0.9101 | 0.3017 | nan | 0.4374 | 0.6462 | 0.0 | 0.9461 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7137 | 0.0 | 0.0 | 0.8706 | 0.0 | 0.5111 | 0.5445 | 0.0 | nan | 0.0001 | 0.4282 | 0.0589 | 0.0 | 0.9317 | 0.8537 | 0.9628 | 0.0000 | 0.1030 | 0.2713 | 0.0 | nan | 0.7389 | 0.7675 | 0.7857 | 0.4623 | 0.2774 | nan | 0.3477 | 0.4815 | 0.0 | 0.7777 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4220 | 0.0 | 0.0 | 0.6797 | 0.0 | 0.3926 | 0.4652 | 0.0 | nan | 0.0001 | 0.2292 | 0.0588 | 0.0 | 0.8421 | 0.7549 | 0.9219 | 0.0000 | 0.0939 | 0.1926 | 0.0 |
| 0.3132 | 148.15 | 8000 | 0.6407 | 0.2987 | 0.3697 | 0.8084 | nan | 0.8056 | 0.8366 | 0.8045 | 0.9187 | 0.2881 | nan | 0.3901 | 0.6494 | 0.0 | 0.9456 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7065 | 0.0 | 0.0 | 0.8674 | 0.0 | 0.4835 | 0.5578 | 0.0 | nan | 0.0 | 0.4107 | 0.0690 | 0.0 | 0.9364 | 0.8069 | 0.9579 | 0.0 | 0.1392 | 0.2549 | 0.0 | nan | 0.7400 | 0.7511 | 0.7860 | 0.4288 | 0.2705 | nan | 0.3211 | 0.4907 | 0.0 | 0.7845 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4064 | 0.0 | 0.0 | 0.6776 | 0.0 | 0.3750 | 0.4463 | 0.0 | nan | 0.0 | 0.2323 | 0.0689 | 0.0 | 0.8346 | 0.7221 | 0.9215 | 0.0 | 0.1189 | 0.1827 | 0.0 |
| 0.3227 | 150.0 | 8100 | 0.6215 | 0.3010 | 0.3747 | 0.8154 | nan | 0.8072 | 0.8523 | 0.7987 | 0.9122 | 0.3387 | nan | 0.4049 | 0.6521 | 0.0 | 0.9464 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7268 | 0.0 | 0.0 | 0.8526 | 0.0 | 0.5301 | 0.5632 | 0.0 | nan | 0.0015 | 0.4353 | 0.0597 | 0.0 | 0.9352 | 0.8036 | 0.9574 | 0.0 | 0.1202 | 0.2916 | 0.0 | nan | 0.7319 | 0.7712 | 0.7839 | 0.4639 | 0.3115 | nan | 0.3235 | 0.4815 | 0.0 | 0.7813 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3954 | 0.0 | 0.0 | 0.6800 | 0.0 | 0.3930 | 0.4522 | 0.0 | nan | 0.0015 | 0.2349 | 0.0596 | 0.0 | 0.8319 | 0.7106 | 0.9225 | 0.0 | 0.1071 | 0.1947 | 0.0 |
| 0.3041 | 151.85 | 8200 | 0.6365 | 0.2982 | 0.3695 | 0.8091 | nan | 0.7813 | 0.8516 | 0.8100 | 0.9057 | 0.2989 | nan | 0.4138 | 0.6557 | 0.0 | 0.9422 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7155 | 0.0 | 0.0 | 0.8717 | 0.0 | 0.5273 | 0.5454 | 0.0 | nan | 0.0 | 0.4293 | 0.0595 | 0.0 | 0.9354 | 0.7484 | 0.9557 | 0.0 | 0.1301 | 0.2483 | 0.0 | nan | 0.7117 | 0.7612 | 0.7891 | 0.4543 | 0.2787 | nan | 0.3305 | 0.4950 | 0.0 | 0.7874 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4007 | 0.0 | 0.0 | 0.6772 | 0.0 | 0.3923 | 0.4632 | 0.0 | nan | 0.0 | 0.2342 | 0.0594 | 0.0 | 0.8230 | 0.6691 | 0.9227 | 0.0 | 0.1142 | 0.1800 | 0.0 |
| 0.3295 | 153.7 | 8300 | 0.5763 | 0.3064 | 0.3745 | 0.8319 | nan | 0.8091 | 0.9000 | 0.8155 | 0.8927 | 0.3048 | nan | 0.4385 | 0.6734 | 0.0 | 0.9391 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7114 | 0.0 | 0.0 | 0.8707 | 0.0 | 0.4884 | 0.5694 | 0.0 | nan | 0.0032 | 0.4179 | 0.0581 | 0.0 | 0.9385 | 0.8107 | 0.9552 | 0.0006 | 0.1316 | 0.2550 | 0.0 | nan | 0.7460 | 0.8059 | 0.7926 | 0.5582 | 0.2844 | nan | 0.3545 | 0.5009 | 0.0 | 0.7892 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4184 | 0.0 | 0.0 | 0.6741 | 0.0 | 0.3769 | 0.4455 | 0.0 | nan | 0.0032 | 0.2317 | 0.0581 | 0.0 | 0.8317 | 0.7120 | 0.9232 | 0.0005 | 0.1162 | 0.1807 | 0.0 |
| 0.3057 | 155.56 | 8400 | 0.6602 | 0.2967 | 0.3669 | 0.8053 | nan | 0.7862 | 0.8400 | 0.8012 | 0.9083 | 0.2761 | nan | 0.3977 | 0.6548 | 0.0 | 0.9399 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7262 | 0.0 | 0.0 | 0.8830 | 0.0 | 0.4582 | 0.5390 | 0.0 | nan | 0.0 | 0.4382 | 0.0696 | 0.0 | 0.9380 | 0.7676 | 0.9517 | 0.0 | 0.1204 | 0.2454 | 0.0 | nan | 0.7257 | 0.7493 | 0.7832 | 0.4331 | 0.2603 | nan | 0.3344 | 0.4909 | 0.0 | 0.7899 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4164 | 0.0 | 0.0 | 0.6631 | 0.0 | 0.3619 | 0.4610 | 0.0 | nan | 0.0 | 0.2358 | 0.0695 | 0.0 | 0.8268 | 0.6858 | 0.9224 | 0.0 | 0.1038 | 0.1798 | 0.0 |
| 0.3152 | 157.41 | 8500 | 0.6195 | 0.2986 | 0.3661 | 0.8115 | nan | 0.7876 | 0.8570 | 0.7994 | 0.8920 | 0.2891 | nan | 0.4035 | 0.6056 | 0.0 | 0.9417 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7090 | 0.0 | 0.0 | 0.8719 | 0.0 | 0.4959 | 0.5413 | 0.0 | nan | 0.0 | 0.4136 | 0.0566 | 0.0 | 0.9414 | 0.7717 | 0.9517 | 0.0 | 0.1198 | 0.2672 | 0.0 | nan | 0.7263 | 0.7633 | 0.7814 | 0.4550 | 0.2715 | nan | 0.3352 | 0.4721 | 0.0 | 0.7820 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4233 | 0.0 | 0.0 | 0.6671 | 0.0 | 0.3757 | 0.4677 | 0.0 | nan | 0.0 | 0.2407 | 0.0565 | 0.0 | 0.8255 | 0.6891 | 0.9216 | 0.0 | 0.1083 | 0.1912 | 0.0 |
| 0.3041 | 159.26 | 8600 | 0.5761 | 0.3071 | 0.3735 | 0.8297 | nan | 0.8077 | 0.8910 | 0.8053 | 0.8839 | 0.3353 | nan | 0.4603 | 0.6015 | 0.0 | 0.9489 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6966 | 0.0 | 0.0 | 0.8701 | 0.0 | 0.4933 | 0.5427 | 0.0 | nan | 0.0082 | 0.4481 | 0.0761 | 0.0 | 0.9301 | 0.8454 | 0.9544 | 0.0005 | 0.1062 | 0.2469 | 0.0 | nan | 0.7406 | 0.7982 | 0.7855 | 0.5184 | 0.3024 | nan | 0.3652 | 0.4669 | 0.0 | 0.7807 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4413 | 0.0 | 0.0 | 0.6853 | 0.0 | 0.3815 | 0.4553 | 0.0 | nan | 0.0082 | 0.2312 | 0.0759 | 0.0 | 0.8414 | 0.7507 | 0.9229 | 0.0005 | 0.0961 | 0.1775 | 0.0 |
| 0.3185 | 161.11 | 8700 | 0.5760 | 0.3058 | 0.3698 | 0.8296 | nan | 0.8094 | 0.8946 | 0.7956 | 0.8887 | 0.2897 | nan | 0.4223 | 0.5895 | 0.0 | 0.9357 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6889 | 0.0 | 0.0 | 0.8908 | 0.0 | 0.4640 | 0.5538 | 0.0 | nan | 0.0 | 0.4239 | 0.0692 | 0.0 | 0.9305 | 0.8418 | 0.9519 | 0.0001 | 0.1431 | 0.2510 | 0.0 | nan | 0.7455 | 0.7997 | 0.7789 | 0.5321 | 0.2717 | nan | 0.3473 | 0.4756 | 0.0 | 0.8013 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4311 | 0.0 | 0.0 | 0.6576 | 0.0 | 0.3605 | 0.4511 | 0.0 | nan | 0.0 | 0.2412 | 0.0691 | 0.0 | 0.8410 | 0.7459 | 0.9223 | 0.0001 | 0.1284 | 0.1839 | 0.0 |
| 0.2908 | 162.96 | 8800 | 0.5655 | 0.3075 | 0.3717 | 0.8316 | nan | 0.8548 | 0.8841 | 0.7997 | 0.8745 | 0.3118 | nan | 0.4610 | 0.6024 | 0.0 | 0.9410 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6931 | 0.0 | 0.0 | 0.8861 | 0.0 | 0.4534 | 0.5383 | 0.0 | nan | 0.0015 | 0.4266 | 0.0689 | 0.0 | 0.9366 | 0.8053 | 0.9554 | 0.0 | 0.1346 | 0.2641 | 0.0 | nan | 0.7595 | 0.8021 | 0.7817 | 0.5396 | 0.2919 | nan | 0.3717 | 0.4720 | 0.0 | 0.7905 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4462 | 0.0 | 0.0 | 0.6634 | 0.0 | 0.3562 | 0.4639 | 0.0 | nan | 0.0015 | 0.2393 | 0.0688 | 0.0 | 0.8346 | 0.7212 | 0.9232 | 0.0 | 0.1193 | 0.1923 | 0.0 |
| 0.3137 | 164.81 | 8900 | 0.5829 | 0.3094 | 0.3784 | 0.8279 | nan | 0.8476 | 0.8674 | 0.8118 | 0.9018 | 0.3237 | nan | 0.4801 | 0.6610 | 0.0 | 0.9387 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6851 | 0.0 | 0.0 | 0.8696 | 0.0 | 0.5109 | 0.5681 | 0.0 | nan | 0.0260 | 0.4276 | 0.0709 | 0.0 | 0.9330 | 0.8416 | 0.9554 | 0.0012 | 0.1333 | 0.2547 | 0.0 | nan | 0.7562 | 0.7893 | 0.7902 | 0.5123 | 0.3055 | nan | 0.3768 | 0.4921 | 0.0 | 0.7978 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4472 | 0.0 | 0.0 | 0.6754 | 0.0 | 0.3867 | 0.4408 | 0.0 | nan | 0.0260 | 0.2316 | 0.0708 | 0.0 | 0.8396 | 0.7418 | 0.9237 | 0.0010 | 0.1173 | 0.1797 | 0.0 |
| 0.3219 | 166.67 | 9000 | 0.5812 | 0.3065 | 0.3750 | 0.8278 | nan | 0.8354 | 0.8788 | 0.8041 | 0.8834 | 0.2990 | nan | 0.4594 | 0.6655 | 0.0 | 0.9395 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6980 | 0.0 | 0.0 | 0.8601 | 0.0 | 0.5069 | 0.5685 | 0.0 | nan | 0.0113 | 0.4156 | 0.0664 | 0.0 | 0.9440 | 0.8108 | 0.9521 | 0.0001 | 0.1291 | 0.2716 | 0.0 | nan | 0.7565 | 0.7902 | 0.7828 | 0.5219 | 0.2845 | nan | 0.3688 | 0.4922 | 0.0 | 0.7966 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4240 | 0.0 | 0.0 | 0.6768 | 0.0 | 0.3877 | 0.4481 | 0.0 | nan | 0.0113 | 0.2327 | 0.0664 | 0.0 | 0.8308 | 0.7154 | 0.9230 | 0.0001 | 0.1124 | 0.1869 | 0.0 |
| 0.3181 | 168.52 | 9100 | 0.5632 | 0.3112 | 0.3765 | 0.8367 | nan | 0.8125 | 0.9072 | 0.8124 | 0.8963 | 0.3044 | nan | 0.4647 | 0.6697 | 0.0 | 0.9359 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6879 | 0.0 | 0.0 | 0.8771 | 0.0 | 0.5085 | 0.5560 | 0.0 | nan | 0.0039 | 0.4244 | 0.0703 | 0.0 | 0.9367 | 0.8280 | 0.9532 | 0.0 | 0.1309 | 0.2672 | 0.0 | nan | 0.7474 | 0.8113 | 0.7892 | 0.5707 | 0.2882 | nan | 0.3704 | 0.5031 | 0.0 | 0.7988 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4314 | 0.0 | 0.0 | 0.6778 | 0.0 | 0.3900 | 0.4604 | 0.0 | nan | 0.0039 | 0.2372 | 0.0702 | 0.0 | 0.8390 | 0.7407 | 0.9234 | 0.0 | 0.1173 | 0.1872 | 0.0 |
| 0.3009 | 170.37 | 9200 | 0.5671 | 0.3095 | 0.3743 | 0.8326 | nan | 0.7939 | 0.9018 | 0.7926 | 0.8902 | 0.3160 | nan | 0.4603 | 0.6415 | 0.0 | 0.9414 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6804 | 0.0 | 0.0 | 0.8815 | 0.0 | 0.4974 | 0.5528 | 0.0 | nan | 0.0000 | 0.4233 | 0.0749 | 0.0 | 0.9339 | 0.8322 | 0.9566 | 0.0 | 0.1296 | 0.2770 | 0.0 | nan | 0.7279 | 0.8041 | 0.7736 | 0.5652 | 0.2951 | nan | 0.3698 | 0.4960 | 0.0 | 0.7938 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4395 | 0.0 | 0.0 | 0.6714 | 0.0 | 0.3837 | 0.4627 | 0.0 | nan | 0.0000 | 0.2368 | 0.0747 | 0.0 | 0.8379 | 0.7389 | 0.9235 | 0.0 | 0.1161 | 0.1946 | 0.0 |
| 0.2873 | 172.22 | 9300 | 0.6113 | 0.3047 | 0.3720 | 0.8176 | nan | 0.8107 | 0.8536 | 0.7603 | 0.8949 | 0.3232 | nan | 0.4761 | 0.6422 | 0.0 | 0.9415 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6799 | 0.0 | 0.0 | 0.8720 | 0.0 | 0.5023 | 0.5457 | 0.0 | nan | 0.0034 | 0.4146 | 0.0717 | 0.0 | 0.9439 | 0.8035 | 0.9521 | 0.0 | 0.1299 | 0.2839 | 0.0 | nan | 0.7355 | 0.7675 | 0.7422 | 0.4826 | 0.3027 | nan | 0.3715 | 0.4933 | 0.0 | 0.7896 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4421 | 0.0 | 0.0 | 0.6666 | 0.0 | 0.3881 | 0.4723 | 0.0 | nan | 0.0034 | 0.2350 | 0.0716 | 0.0 | 0.8305 | 0.7183 | 0.9229 | 0.0 | 0.1152 | 0.1992 | 0.0 |
| 0.2856 | 174.07 | 9400 | 0.6091 | 0.3045 | 0.3713 | 0.8183 | nan | 0.8177 | 0.8508 | 0.7884 | 0.9070 | 0.3274 | nan | 0.4412 | 0.5971 | 0.0 | 0.9437 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6904 | 0.0 | 0.0 | 0.8760 | 0.0 | 0.5037 | 0.5471 | 0.0 | nan | 0.0023 | 0.4093 | 0.0729 | 0.0 | 0.9395 | 0.8289 | 0.9513 | 0.0000 | 0.1123 | 0.2745 | 0.0 | nan | 0.7401 | 0.7694 | 0.7705 | 0.4745 | 0.3070 | nan | 0.3570 | 0.4797 | 0.0 | 0.7901 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4370 | 0.0 | 0.0 | 0.6642 | 0.0 | 0.3879 | 0.4663 | 0.0 | nan | 0.0023 | 0.2356 | 0.0728 | 0.0 | 0.8358 | 0.7333 | 0.9230 | 0.0000 | 0.1034 | 0.1937 | 0.0 |
| 0.2803 | 175.93 | 9500 | 0.6404 | 0.3009 | 0.3704 | 0.8084 | nan | 0.8365 | 0.8208 | 0.7833 | 0.9062 | 0.3050 | nan | 0.4405 | 0.6203 | 0.0 | 0.9443 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6940 | 0.0 | 0.0 | 0.8667 | 0.0 | 0.5055 | 0.5494 | 0.0 | nan | 0.0084 | 0.4148 | 0.0772 | 0.0 | 0.9424 | 0.8074 | 0.9551 | 0.0001 | 0.1077 | 0.2664 | 0.0 | nan | 0.7454 | 0.7459 | 0.7680 | 0.4316 | 0.2897 | nan | 0.3571 | 0.4866 | 0.0 | 0.7930 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4255 | 0.0 | 0.0 | 0.6652 | 0.0 | 0.3877 | 0.4601 | 0.0 | nan | 0.0084 | 0.2306 | 0.0771 | 0.0 | 0.8314 | 0.7178 | 0.9235 | 0.0001 | 0.0969 | 0.1889 | 0.0 |
| 0.2924 | 177.78 | 9600 | 0.6156 | 0.3045 | 0.3723 | 0.8156 | nan | 0.8293 | 0.8420 | 0.8051 | 0.8964 | 0.3365 | nan | 0.4651 | 0.6281 | 0.0 | 0.9443 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6806 | 0.0 | 0.0 | 0.8777 | 0.0 | 0.4957 | 0.5434 | 0.0 | nan | 0.0043 | 0.4293 | 0.0774 | 0.0 | 0.9387 | 0.7942 | 0.9562 | 0.0 | 0.1178 | 0.2514 | 0.0 | nan | 0.7508 | 0.7606 | 0.7848 | 0.4617 | 0.3134 | nan | 0.3712 | 0.4903 | 0.0 | 0.7912 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4384 | 0.0 | 0.0 | 0.6666 | 0.0 | 0.3850 | 0.4648 | 0.0 | nan | 0.0043 | 0.2308 | 0.0773 | 0.0 | 0.8320 | 0.7126 | 0.9232 | 0.0 | 0.1028 | 0.1836 | 0.0 |
| 0.2911 | 179.63 | 9700 | 0.6039 | 0.3051 | 0.3743 | 0.8197 | nan | 0.8161 | 0.8573 | 0.8009 | 0.9013 | 0.3091 | nan | 0.4597 | 0.6407 | 0.0 | 0.9406 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7191 | 0.0 | 0.0 | 0.8787 | 0.0 | 0.5007 | 0.5561 | 0.0 | nan | 0.0046 | 0.4187 | 0.0825 | 0.0 | 0.9325 | 0.8335 | 0.9578 | 0.0000 | 0.1036 | 0.2642 | 0.0 | nan | 0.7434 | 0.7687 | 0.7825 | 0.4751 | 0.2917 | nan | 0.3667 | 0.4994 | 0.0 | 0.7998 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4127 | 0.0 | 0.0 | 0.6761 | 0.0 | 0.3878 | 0.4561 | 0.0 | nan | 0.0046 | 0.2352 | 0.0823 | 0.0 | 0.8393 | 0.7401 | 0.9235 | 0.0000 | 0.0883 | 0.1885 | 0.0 |
| 0.3093 | 181.48 | 9800 | 0.6244 | 0.3021 | 0.3707 | 0.8132 | nan | 0.8240 | 0.8367 | 0.7819 | 0.9031 | 0.3158 | nan | 0.4523 | 0.6336 | 0.0 | 0.9419 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7047 | 0.0 | 0.0 | 0.8782 | 0.0 | 0.5024 | 0.5478 | 0.0 | nan | 0.0 | 0.4039 | 0.0761 | 0.0 | 0.9422 | 0.8036 | 0.9524 | 0.0 | 0.0992 | 0.2629 | 0.0 | nan | 0.7414 | 0.7575 | 0.7666 | 0.4537 | 0.2990 | nan | 0.3642 | 0.4913 | 0.0 | 0.7906 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4261 | 0.0 | 0.0 | 0.6655 | 0.0 | 0.3892 | 0.4639 | 0.0 | nan | 0.0 | 0.2339 | 0.0760 | 0.0 | 0.8311 | 0.7168 | 0.9226 | 0.0 | 0.0873 | 0.1892 | 0.0 |
| 0.3194 | 183.33 | 9900 | 0.6384 | 0.3015 | 0.3707 | 0.8106 | nan | 0.8269 | 0.8295 | 0.7809 | 0.9036 | 0.3169 | nan | 0.4373 | 0.6407 | 0.0 | 0.9394 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7004 | 0.0 | 0.0 | 0.8774 | 0.0 | 0.4936 | 0.5511 | 0.0 | nan | 0.0004 | 0.4210 | 0.0726 | 0.0 | 0.9434 | 0.8072 | 0.9462 | 0.0 | 0.1149 | 0.2605 | 0.0 | nan | 0.7423 | 0.7508 | 0.7639 | 0.4418 | 0.2988 | nan | 0.3584 | 0.4963 | 0.0 | 0.7976 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4212 | 0.0 | 0.0 | 0.6662 | 0.0 | 0.3830 | 0.4618 | 0.0 | nan | 0.0004 | 0.2347 | 0.0725 | 0.0 | 0.8311 | 0.7208 | 0.9214 | 0.0 | 0.0993 | 0.1875 | 0.0 |
| 0.3174 | 185.19 | 10000 | 0.6350 | 0.3022 | 0.3724 | 0.8117 | nan | 0.8240 | 0.8308 | 0.7789 | 0.9052 | 0.3152 | nan | 0.4703 | 0.6444 | 0.0 | 0.9424 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7116 | 0.0 | 0.0 | 0.8716 | 0.0 | 0.4736 | 0.5408 | 0.0 | nan | 0.0048 | 0.4202 | 0.0754 | 0.0 | 0.9437 | 0.8196 | 0.9525 | 0.0 | 0.1041 | 0.2872 | 0.0 | nan | 0.7413 | 0.7520 | 0.7629 | 0.4453 | 0.2976 | nan | 0.3701 | 0.4953 | 0.0 | 0.7962 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4152 | 0.0 | 0.0 | 0.6712 | 0.0 | 0.3749 | 0.4613 | 0.0 | nan | 0.0048 | 0.2337 | 0.0753 | 0.0 | 0.8324 | 0.7277 | 0.9234 | 0.0 | 0.0913 | 0.1997 | 0.0 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
PDM/finetuning-sentiment-model-3000-samples | aa5d883f2dfbbff2b2b15e739a6902fe5f9fac98 | 2022-04-22T09:18:16.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | PDM | null | PDM/finetuning-sentiment-model-3000-samples | 12 | null | transformers | 10,740 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8733333333333333
- name: F1
type: f1
value: 0.8741721854304636
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3061
- Accuracy: 0.8733
- F1: 0.8742
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
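The card does not ship the training script, so the following is only a minimal sketch of how the values above would map onto `transformers.TrainingArguments` with the standard `Trainer` API; the output directory name is an assumption.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the run configuration from the list above.
training_args = TrainingArguments(
    output_dir="finetuning-sentiment-model-3000-samples",  # assumed output path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=2,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```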
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
PaulTran/vietnamese_essay_identify | 4495aeb914bef2688cae4114a1912a4b7d249c79 | 2022-06-15T12:05:28.000Z | [
"pytorch",
"roberta",
"text-classification",
"vi",
"Vietnamese",
"arxiv:2003.00744",
"transformers",
"essay category"
]
| text-classification | false | PaulTran | null | PaulTran/vietnamese_essay_identify | 12 | null | transformers | 10,741 | ---
language:
- vi
- Vietnamese
tags:
- essay category
- text-classification
widget:
- text: "Cái đồng hồ của em cao hơn 30 cm. Đế của nó được làm bằng i-nốc sáng loáng hình bầu dục. Chỗ dài nhất của đế vừa bằng gang tay của em. Chỗ rộng nhất bằng hơn nửa gang tay."
example_title: "Descriptive - Miêu tả"
- text: "Hiện nay, đại dịch Covid-19 diễn biến ngày một phức tạp, nó khiến nền kinh tế trì trệ, cuộc sống con người hoàn toàn xáo trộn và luôn ở trạng thái lo ngại... và cùng với đó chính là việc học sinh - sinh viên không thể tới trường. Một trong những điều đáng lo ngại nhất khi tình hình dịch bệnh không biết bao giờ mới ổn định."
example_title: "Argumentative - Nghị luận"
- text: "Cấu tạo của chiếc kính gồm hai bộ phận chính là gọng kính và mắt kính. Gọng kính được làm bằng nhựa cao cấp hoặc kim loại quý. Gọng kính chia làm hai phần: phần khung để lắp mắt kính và phần gọng để đeo vào tai, nối với nhau bởi các ốc vít nhỏ, có thể mở ra, gập lại dễ dàng. Chất liệu để làm mắt kính là nhựa hoặc thủy tinh trong suốt. Gọng kính và mắt kính có nhiều hình dáng, màu sắc khác nhau."
example_title: "Expository - Thuyết minh"
- text: "Em yêu quý đào vì nó là loài cây đặc trưng của miền Bắc vào Tết đến xuân sang. Đào bình dị nhưng gắn liền với tuổi thơ em nồng nàn. Tuổi thơ đã từng khao khát nhà có một cây đào mộc mạc để háo hức vui tươi trong ngày Tết."
example_title: "Expressive - Biểu cảm"
- text: "Hắn vừa đi vừa chửi. Bao giờ cũng thế, cứ rượu xong là hắn chửi. Bắt đầu chửi trời, có hề gì? Trời có của riêng nhà nào? Rồi hắn chửi đời. Thế cũng chẳng sao: Đời là tất cả nhưng cũng chẳng là ai."
example_title: "Narrative - Tự sự"
---
This is a fine-tuned PhoBERT model for essay category classification.
- At primary levels of education in Vietnam, students are introduced to 5 categories of essays:
- Argumentative - Nghị luận
- Expressive - Biểu cảm
- Descriptive - Miêu tả
- Narrative - Tự sự
- Expository - Thuyết minh
- This model will classify sentences into these 5 categories (a usage sketch follows below)
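A minimal usage sketch with the `transformers` pipeline is given below; the exact label strings come from the checkpoint's config and are not documented on this card, so treat the output labels as whatever the hub config defines.

```python
from transformers import pipeline

# Hypothetical sketch: classify a Vietnamese sentence into one of the five essay categories.
classifier = pipeline("text-classification", model="PaulTran/vietnamese_essay_identify")

# Descriptive-style sentence taken from the widget examples above.
sentence = "Cái đồng hồ của em cao hơn 30 cm."
print(classifier(sentence))  # e.g. [{'label': ..., 'score': ...}]
```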
The general architecture and experimental results of PhoBERT can be found in EMNLP-2020 Findings [paper](https://arxiv.org/abs/2003.00744):
@article{phobert,
title = {{PhoBERT: Pre-trained language models for Vietnamese}},
author = {Dat Quoc Nguyen and Anh Tuan Nguyen},
journal = {Findings of EMNLP},
year = {2020}
} |
praf-choub/bart-CaPE-xsum | 5ea01de016ebaa55b238e2e27a1e3b5c94d26acd | 2022-06-14T04:51:24.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:xsum",
"arxiv:2110.07166",
"transformers",
"summarization",
"license:bsd-3-clause",
"autotrain_compatible"
]
| summarization | false | praf-choub | null | praf-choub/bart-CaPE-xsum | 12 | null | transformers | 10,742 | ---
language: en
tags:
- summarization
license: bsd-3-clause
datasets:
- xsum
---
Citation
```
@misc{https://doi.org/10.48550/arxiv.2110.07166,
doi = {10.48550/ARXIV.2110.07166},
url = {https://arxiv.org/abs/2110.07166},
author = {Choubey, Prafulla Kumar and Fabbri, Alexander R. and Vig, Jesse and Wu, Chien-Sheng and Liu, Wenhao and Rajani, Nazneen Fatema},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {CaPE: Contrastive Parameter Ensembling for Reducing Hallucination in Abstractive Summarization},
publisher = {arXiv},
year = {2021},
copyright = {Creative Commons Attribution 4.0 International}
}
``` |
Hate-speech-CNERG/marathi-codemixed-abusive-MuRIL | dedc74530ccdf1ca44c4d5d71b649813c578c499 | 2022-05-03T08:45:38.000Z | [
"pytorch",
"bert",
"text-classification",
"mr",
"arxiv:2204.12543",
"transformers",
"license:afl-3.0"
]
| text-classification | false | Hate-speech-CNERG | null | Hate-speech-CNERG/marathi-codemixed-abusive-MuRIL | 12 | null | transformers | 10,743 | ---
language: mr
license: afl-3.0
---
This model is used to detect **abusive speech** in **Marathi**. It is fine-tuned from the MuRIL model on a Marathi abusive speech dataset.
The model is trained with a learning rate of 2e-5. Training code can be found at this [url](https://github.com/hate-alert/IndicAbusive)
LABEL_0 :-> Normal
LABEL_1 :-> Abusive
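A minimal inference sketch is shown below, assuming the checkpoint follows the standard `transformers` sequence-classification interface; the input string is only a placeholder.

```python
from transformers import pipeline

# Hypothetical sketch: LABEL_0 -> Normal, LABEL_1 -> Abusive (mapping given above).
detector = pipeline(
    "text-classification",
    model="Hate-speech-CNERG/marathi-codemixed-abusive-MuRIL",
)

result = detector("तुमचा संदेश इथे लिहा")  # placeholder Marathi input
label = "Abusive" if result[0]["label"] == "LABEL_1" else "Normal"
print(label, result[0]["score"])
```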
### For more details about our paper
Mithun Das, Somnath Banerjee and Animesh Mukherjee. "[Data Bootstrapping Approaches to Improve Low Resource Abusive Language Detection for Indic Languages](https://arxiv.org/abs/2204.12543)". Accepted at ACM HT 2022.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{das2022data,
title={Data Bootstrapping Approaches to Improve Low Resource Abusive Language Detection for Indic Languages},
author={Das, Mithun and Banerjee, Somnath and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2204.12543},
year={2022}
}
~~~ |
AlexTaylor/distilbert-base-uncased-finetuned-emotion | 1ff60c79ed3f5dc8b645a988389d05f79d3451b7 | 2022-04-25T13:24:10.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | AlexTaylor | null | AlexTaylor/distilbert-base-uncased-finetuned-emotion | 12 | null | transformers | 10,744 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.926
- name: F1
type: f1
value: 0.9263429084864518
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2257
- Accuracy: 0.926
- F1: 0.9263
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8433 | 1.0 | 250 | 0.3243 | 0.9035 | 0.8996 |
| 0.2583 | 2.0 | 500 | 0.2257 | 0.926 | 0.9263 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
bdickson/electra-small-discriminator-finetuned-squad | b17c8792162fb86558192851a25757c17af5048b | 2022-04-28T03:39:47.000Z | [
"pytorch",
"tensorboard",
"electra",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | bdickson | null | bdickson/electra-small-discriminator-finetuned-squad | 12 | null | transformers | 10,745 | Entry not found |
vegetable/test | a427e05f8b3a5ad64c943635d1f4b2ff1ef22400 | 2022-04-30T02:48:07.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | vegetable | null | vegetable/test | 12 | null | transformers | 10,746 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: test
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.7696078431372549
- name: Recall
type: recall
value: 0.839572192513369
- name: F1
type: f1
value: 0.8030690537084398
- name: Accuracy
type: accuracy
value: 0.8847040737893928
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test
This model is a fine-tuned version of [hfl/chinese-bert-wwm-ext](https://huggingface.co/hfl/chinese-bert-wwm-ext) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7372
- Precision: 0.7696
- Recall: 0.8396
- F1: 0.8031
- Accuracy: 0.8847
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 2 | 1.9496 | 0.0 | 0.0 | 0.0 | 0.4889 |
| No log | 2.0 | 4 | 1.6137 | 0.0 | 0.0 | 0.0 | 0.4919 |
| No log | 3.0 | 6 | 1.3906 | 0.0 | 0.0 | 0.0 | 0.5650 |
| No log | 4.0 | 8 | 1.2273 | 0.0652 | 0.0481 | 0.0554 | 0.6856 |
| No log | 5.0 | 10 | 1.0565 | 0.2051 | 0.1711 | 0.1866 | 0.7125 |
| No log | 6.0 | 12 | 0.9150 | 0.5094 | 0.4332 | 0.4682 | 0.7540 |
| No log | 7.0 | 14 | 0.8051 | 0.5988 | 0.5187 | 0.5559 | 0.7679 |
| No log | 8.0 | 16 | 0.7151 | 0.6707 | 0.5989 | 0.6328 | 0.7763 |
| No log | 9.0 | 18 | 0.6334 | 0.6685 | 0.6364 | 0.6521 | 0.8086 |
| No log | 10.0 | 20 | 0.5693 | 0.6957 | 0.6845 | 0.6900 | 0.8201 |
| No log | 11.0 | 22 | 0.5192 | 0.7166 | 0.7166 | 0.7166 | 0.8363 |
| No log | 12.0 | 24 | 0.4736 | 0.7135 | 0.7326 | 0.7230 | 0.8524 |
| No log | 13.0 | 26 | 0.4448 | 0.6938 | 0.7754 | 0.7323 | 0.8555 |
| No log | 14.0 | 28 | 0.4280 | 0.7177 | 0.8021 | 0.7576 | 0.8586 |
| No log | 15.0 | 30 | 0.4179 | 0.7588 | 0.8075 | 0.7824 | 0.8663 |
| No log | 16.0 | 32 | 0.4214 | 0.7356 | 0.8182 | 0.7747 | 0.8593 |
| No log | 17.0 | 34 | 0.4070 | 0.7391 | 0.8182 | 0.7766 | 0.8616 |
| No log | 18.0 | 36 | 0.4112 | 0.7586 | 0.8235 | 0.7897 | 0.8724 |
| No log | 19.0 | 38 | 0.4530 | 0.7330 | 0.8075 | 0.7684 | 0.8693 |
| No log | 20.0 | 40 | 0.4719 | 0.7766 | 0.8182 | 0.7969 | 0.8732 |
| No log | 21.0 | 42 | 0.4886 | 0.7260 | 0.8075 | 0.7646 | 0.8632 |
| No log | 22.0 | 44 | 0.5007 | 0.7217 | 0.8182 | 0.7669 | 0.8701 |
| No log | 23.0 | 46 | 0.5169 | 0.7321 | 0.8182 | 0.7727 | 0.8762 |
| No log | 24.0 | 48 | 0.5531 | 0.7238 | 0.8128 | 0.7657 | 0.8724 |
| No log | 25.0 | 50 | 0.5895 | 0.7311 | 0.8289 | 0.7769 | 0.8655 |
| No log | 26.0 | 52 | 0.5482 | 0.7330 | 0.8075 | 0.7684 | 0.8778 |
| No log | 27.0 | 54 | 0.5361 | 0.7488 | 0.8128 | 0.7795 | 0.8832 |
| No log | 28.0 | 56 | 0.5378 | 0.7427 | 0.8182 | 0.7786 | 0.8847 |
| No log | 29.0 | 58 | 0.5543 | 0.7371 | 0.8396 | 0.7850 | 0.8824 |
| No log | 30.0 | 60 | 0.5564 | 0.7585 | 0.8396 | 0.7970 | 0.8839 |
| No log | 31.0 | 62 | 0.5829 | 0.7235 | 0.8396 | 0.7772 | 0.8724 |
| No log | 32.0 | 64 | 0.5974 | 0.7269 | 0.8396 | 0.7792 | 0.8716 |
| No log | 33.0 | 66 | 0.5750 | 0.7610 | 0.8342 | 0.7959 | 0.8839 |
| No log | 34.0 | 68 | 0.5887 | 0.7723 | 0.8342 | 0.8021 | 0.8878 |
| No log | 35.0 | 70 | 0.6219 | 0.7441 | 0.8396 | 0.7889 | 0.8747 |
| No log | 36.0 | 72 | 0.6676 | 0.7269 | 0.8396 | 0.7792 | 0.8632 |
| No log | 37.0 | 74 | 0.6517 | 0.7452 | 0.8289 | 0.7848 | 0.8693 |
| No log | 38.0 | 76 | 0.6346 | 0.7828 | 0.8289 | 0.8052 | 0.8862 |
| No log | 39.0 | 78 | 0.6239 | 0.7839 | 0.8342 | 0.8083 | 0.8855 |
| No log | 40.0 | 80 | 0.6360 | 0.7277 | 0.8289 | 0.775 | 0.8762 |
| No log | 41.0 | 82 | 0.6645 | 0.7336 | 0.8396 | 0.7830 | 0.8701 |
| No log | 42.0 | 84 | 0.6611 | 0.7406 | 0.8396 | 0.7870 | 0.8747 |
| No log | 43.0 | 86 | 0.6707 | 0.7488 | 0.8289 | 0.7868 | 0.8762 |
| No log | 44.0 | 88 | 0.6901 | 0.7277 | 0.8289 | 0.775 | 0.8709 |
| No log | 45.0 | 90 | 0.6911 | 0.7393 | 0.8342 | 0.7839 | 0.8709 |
| No log | 46.0 | 92 | 0.6540 | 0.7761 | 0.8342 | 0.8041 | 0.8878 |
| No log | 47.0 | 94 | 0.6381 | 0.7761 | 0.8342 | 0.8041 | 0.8916 |
| No log | 48.0 | 96 | 0.6285 | 0.7745 | 0.8449 | 0.8082 | 0.8885 |
| No log | 49.0 | 98 | 0.6449 | 0.7692 | 0.8556 | 0.8101 | 0.8862 |
| No log | 50.0 | 100 | 0.6809 | 0.7442 | 0.8556 | 0.7960 | 0.8732 |
| No log | 51.0 | 102 | 0.6898 | 0.7395 | 0.8503 | 0.7910 | 0.8716 |
| No log | 52.0 | 104 | 0.6897 | 0.75 | 0.8503 | 0.7970 | 0.8762 |
| No log | 53.0 | 106 | 0.6714 | 0.7656 | 0.8556 | 0.8081 | 0.8855 |
| No log | 54.0 | 108 | 0.6612 | 0.7692 | 0.8556 | 0.8101 | 0.8855 |
| No log | 55.0 | 110 | 0.6583 | 0.7692 | 0.8556 | 0.8101 | 0.8855 |
| No log | 56.0 | 112 | 0.6648 | 0.7692 | 0.8556 | 0.8101 | 0.8855 |
| No log | 57.0 | 114 | 0.6757 | 0.7656 | 0.8556 | 0.8081 | 0.8832 |
| No log | 58.0 | 116 | 0.6803 | 0.7656 | 0.8556 | 0.8081 | 0.8839 |
| No log | 59.0 | 118 | 0.6834 | 0.7692 | 0.8556 | 0.8101 | 0.8862 |
| No log | 60.0 | 120 | 0.6889 | 0.7833 | 0.8503 | 0.8154 | 0.8878 |
| No log | 61.0 | 122 | 0.6963 | 0.7772 | 0.8396 | 0.8072 | 0.8862 |
| No log | 62.0 | 124 | 0.7057 | 0.7772 | 0.8396 | 0.8072 | 0.8862 |
| No log | 63.0 | 126 | 0.7212 | 0.7910 | 0.8503 | 0.8196 | 0.8862 |
| No log | 64.0 | 128 | 0.7334 | 0.7833 | 0.8503 | 0.8154 | 0.8824 |
| No log | 65.0 | 130 | 0.7398 | 0.7833 | 0.8503 | 0.8154 | 0.8801 |
| No log | 66.0 | 132 | 0.7400 | 0.7833 | 0.8503 | 0.8154 | 0.8809 |
| No log | 67.0 | 134 | 0.7345 | 0.7783 | 0.8449 | 0.8103 | 0.8855 |
| No log | 68.0 | 136 | 0.7270 | 0.79 | 0.8449 | 0.8165 | 0.8870 |
| No log | 69.0 | 138 | 0.7245 | 0.7839 | 0.8342 | 0.8083 | 0.8862 |
| No log | 70.0 | 140 | 0.7260 | 0.7868 | 0.8289 | 0.8073 | 0.8847 |
| No log | 71.0 | 142 | 0.7275 | 0.7817 | 0.8235 | 0.8021 | 0.8839 |
| No log | 72.0 | 144 | 0.7283 | 0.7778 | 0.8235 | 0.8000 | 0.8832 |
| No log | 73.0 | 146 | 0.7296 | 0.78 | 0.8342 | 0.8062 | 0.8847 |
| No log | 74.0 | 148 | 0.7344 | 0.7734 | 0.8396 | 0.8051 | 0.8832 |
| No log | 75.0 | 150 | 0.7314 | 0.7745 | 0.8449 | 0.8082 | 0.8824 |
| No log | 76.0 | 152 | 0.7299 | 0.7794 | 0.8503 | 0.8133 | 0.8832 |
| No log | 77.0 | 154 | 0.7282 | 0.7794 | 0.8503 | 0.8133 | 0.8839 |
| No log | 78.0 | 156 | 0.7252 | 0.7783 | 0.8449 | 0.8103 | 0.8839 |
| No log | 79.0 | 158 | 0.7216 | 0.7756 | 0.8503 | 0.8112 | 0.8855 |
| No log | 80.0 | 160 | 0.7194 | 0.7756 | 0.8503 | 0.8112 | 0.8870 |
| No log | 81.0 | 162 | 0.7191 | 0.7756 | 0.8503 | 0.8112 | 0.8878 |
| No log | 82.0 | 164 | 0.7201 | 0.7696 | 0.8396 | 0.8031 | 0.8862 |
| No log | 83.0 | 166 | 0.7211 | 0.7696 | 0.8396 | 0.8031 | 0.8862 |
| No log | 84.0 | 168 | 0.7222 | 0.7696 | 0.8396 | 0.8031 | 0.8862 |
| No log | 85.0 | 170 | 0.7220 | 0.7696 | 0.8396 | 0.8031 | 0.8862 |
| No log | 86.0 | 172 | 0.7239 | 0.7734 | 0.8396 | 0.8051 | 0.8870 |
| No log | 87.0 | 174 | 0.7291 | 0.7772 | 0.8396 | 0.8072 | 0.8847 |
| No log | 88.0 | 176 | 0.7344 | 0.7745 | 0.8449 | 0.8082 | 0.8824 |
| No log | 89.0 | 178 | 0.7373 | 0.7745 | 0.8449 | 0.8082 | 0.8824 |
| No log | 90.0 | 180 | 0.7391 | 0.7707 | 0.8449 | 0.8061 | 0.8832 |
| No log | 91.0 | 182 | 0.7403 | 0.7745 | 0.8449 | 0.8082 | 0.8824 |
| No log | 92.0 | 184 | 0.7412 | 0.7745 | 0.8449 | 0.8082 | 0.8832 |
| No log | 93.0 | 186 | 0.7417 | 0.7707 | 0.8449 | 0.8061 | 0.8832 |
| No log | 94.0 | 188 | 0.7402 | 0.7745 | 0.8449 | 0.8082 | 0.8839 |
| No log | 95.0 | 190 | 0.7389 | 0.7745 | 0.8449 | 0.8082 | 0.8847 |
| No log | 96.0 | 192 | 0.7381 | 0.7696 | 0.8396 | 0.8031 | 0.8839 |
| No log | 97.0 | 194 | 0.7377 | 0.7696 | 0.8396 | 0.8031 | 0.8847 |
| No log | 98.0 | 196 | 0.7374 | 0.7696 | 0.8396 | 0.8031 | 0.8847 |
| No log | 99.0 | 198 | 0.7372 | 0.7696 | 0.8396 | 0.8031 | 0.8847 |
| No log | 100.0 | 200 | 0.7372 | 0.7696 | 0.8396 | 0.8031 | 0.8847 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
cfilt/HiNER-original-xlm-roberta-large | 94dac1de022fa75c441c2e898e85e6da270daf2a | 2022-05-02T10:19:28.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:cfilt/HiNER-original",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| token-classification | false | cfilt | null | cfilt/HiNER-original-xlm-roberta-large | 12 | null | transformers | 10,747 | ---
tags:
- generated_from_trainer
datasets:
- cfilt/HiNER-original
metrics:
- precision
- recall
- f1
model-index:
- name: HiNER-original-xlm-roberta-large
results:
- task:
name: Token Classification
type: token-classification
dataset:
type: cfilt/HiNER-original
name: HiNER Original
metrics:
- name: Precision
type: precision
value: 0.8968858782575971
- name: Recall
type: recall
value: 0.8871207891308394
- name: F1
type: f1
value: 0.8919766081871345
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# HiNER-original-xlm-roberta-large
This model was trained from scratch on the HiNER-original dataset.
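A minimal usage sketch, assuming the checkpoint exposes the standard token-classification head (the entity tag set comes from HiNER-original and is not listed on this card):

```python
from transformers import pipeline

# Hypothetical sketch: Hindi named-entity tagging with the fine-tuned XLM-R checkpoint.
ner = pipeline(
    "token-classification",
    model="cfilt/HiNER-original-xlm-roberta-large",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)

print(ner("मुंबई महाराष्ट्र की राजधानी है"))  # "Mumbai is the capital of Maharashtra"
```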
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Framework versions
- Transformers 4.14.0
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
ali2066/DistilBERTFINAL_ctxSentence_TRAIN_all_TEST_french_second_train_set_NULL_True | 4932fde06e2a5d1694dce821c5a2fd99ba53b3e5 | 2022-05-02T14:07:36.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | ali2066 | null | ali2066/DistilBERTFINAL_ctxSentence_TRAIN_all_TEST_french_second_train_set_NULL_True | 12 | null | transformers | 10,748 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: DistilBERTFINAL_ctxSentence_TRAIN_all_TEST_french_second_train_set_NULL_True
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DistilBERTFINAL_ctxSentence_TRAIN_all_TEST_french_second_train_set_NULL_True
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4024
- Precision: 0.8643
- Recall: 0.9769
- F1: 0.9171
- Accuracy: 0.8594
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 130 | 0.4920 | 0.7766 | 1.0 | 0.8742 | 0.7766 |
| No log | 2.0 | 260 | 0.4469 | 0.7885 | 1.0 | 0.8818 | 0.7918 |
| No log | 3.0 | 390 | 0.3860 | 0.8248 | 0.9860 | 0.8982 | 0.8265 |
| 0.462 | 4.0 | 520 | 0.3948 | 0.8441 | 0.9832 | 0.9084 | 0.8460 |
| 0.462 | 5.0 | 650 | 0.3694 | 0.8632 | 0.9693 | 0.9132 | 0.8568 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
atomsspawn/DialoGPT-small-shelbot | 07516eb879bcde2854f589f3d81599cfe48bd660 | 2022-05-17T20:31:47.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | atomsspawn | null | atomsspawn/DialoGPT-small-shelbot | 12 | null | transformers | 10,749 | ---
tags:
- conversational
---
# Sheldon Cooper DialoGPT Model |
ali2066/DistilBERT_FINAL_ctxSentence_TRAIN_essays_TEST_NULL_second_train_set_null_False | 4cc122ab0c7d4943984eff60cd119141ac2943d5 | 2022-05-02T18:23:52.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | ali2066 | null | ali2066/DistilBERT_FINAL_ctxSentence_TRAIN_essays_TEST_NULL_second_train_set_null_False | 12 | null | transformers | 10,750 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: DistilBERT_FINAL_ctxSentence_TRAIN_essays_TEST_NULL_second_train_set_null_False
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DistilBERT_FINAL_ctxSentence_TRAIN_essays_TEST_NULL_second_train_set_null_False
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7321
- Precision: 0.9795
- Recall: 0.7277
- F1: 0.835
- Accuracy: 0.7208
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 130 | 0.3755 | 0.8521 | 0.9910 | 0.9163 | 0.8529 |
| No log | 2.0 | 260 | 0.3352 | 0.8875 | 0.9638 | 0.9241 | 0.8713 |
| No log | 3.0 | 390 | 0.3370 | 0.8918 | 0.9321 | 0.9115 | 0.8529 |
| 0.4338 | 4.0 | 520 | 0.3415 | 0.8957 | 0.9321 | 0.9135 | 0.8566 |
| 0.4338 | 5.0 | 650 | 0.3416 | 0.8918 | 0.9321 | 0.9115 | 0.8529 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
masakhane/m2m100_418M-FR-NEWS | b49b945102620b0a54c8011ef50f1e292a6dcd71 | 2022-05-12T13:43:29.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
]
| text2text-generation | false | masakhane | null | masakhane/m2m100_418M-FR-NEWS | 12 | null | transformers | 10,751 | ---
license: afl-3.0
---
|
enimai/opus-mt-en-de-finetuned-en-to-de | d40c5249f29423d19c94f3bbcc5cc33ce63ea7f9 | 2022-05-03T15:57:16.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | enimai | null | enimai/opus-mt-en-de-finetuned-en-to-de | 12 | null | transformers | 10,752 | ---
license: apache-2.0
---
|
enimai/opus-mt-en-hi-finetuned-en-to-hi | f32133f8d0a0d90eafb60e45073ed843841a67ae | 2022-05-03T16:29:04.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | enimai | null | enimai/opus-mt-en-hi-finetuned-en-to-hi | 12 | null | transformers | 10,753 | ---
license: apache-2.0
---
|
ml4pubmed/biobert-v1.1_pub_section | 445f0a103a0817cc174f0681c8af9db0fd0c4792 | 2022-05-04T00:02:48.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:pubmed",
"transformers"
]
| text-classification | false | ml4pubmed | null | ml4pubmed/biobert-v1.1_pub_section | 12 | null | transformers | 10,754 | ---
language:
- en
datasets:
- pubmed
metrics:
- f1
pipeline_tag: text-classification
widget:
- text: "Many pathogenic processes and diseases are the result of an erroneous activation of the complement cascade and a number of inhibitors of complement have thus been examined for anti-inflammatory actions."
example_title: "BACKGROUND example"
- text: "A total of 192 MI patients and 140 control persons were included."
example_title: "METHODS example"
- text: "MI patients had 18 % higher plasma levels of MAp44 (IQR 11-25 %) as compared to the healthy control group (p < 0. 001.)"
example_title: "RESULTS example"
- text: "The finding that a brief CB group intervention delivered by real-world providers significantly reduced MDD onset relative to both brochure control and bibliotherapy is very encouraging, although effects on continuous outcome measures were small or nonsignificant and approximately half the magnitude of those found in efficacy research, potentially because the present sample reported lower initial depression."
example_title: "CONCLUSIONS example"
- text: "In order to understand and update the prevalence of myopia in Taiwan, a nationwide survey was performed in 1995."
example_title: "OBJECTIVE example"
---
# biobert-v1.1_pub_section
- original model file name: textclassifer_biobert-v1.1_pubmed_20k
- This is a fine-tuned checkpoint of `dmis-lab/biobert-v1.1` for document section text classification
- possible document section classes are: BACKGROUND, CONCLUSIONS, METHODS, OBJECTIVE, RESULTS
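## usage in python (sketch)
The card itself does not include an inference example; the snippet below simply mirrors the pipeline usage documented for the sibling `ml4pubmed/scibert-scivocab-uncased_pub_section` card and is a sketch, not an official example from the model authors (the example sentence is taken from the widget examples above).
```python
from transformers import pipeline

# Sketch mirroring the sibling scibert card's usage; not from this card's authors.
model_tag = "ml4pubmed/biobert-v1.1_pub_section"
classifier = pipeline("text-classification", model=model_tag)

prompt = "A total of 192 MI patients and 140 control persons were included."
print(classifier(prompt))  # expected to predict the METHODS section label
```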
## metadata
### training_metrics
- val_accuracy: 0.8522772192955017
- val_matthewscorrcoef: 0.8009328246116638
- val_f1score: 0.8517481088638306
- val_cross_entropy: 0.4344026446342468
- epoch: 12.0
- train_accuracy_step: 0.8203125
- train_matthewscorrcoef_step: 0.7453048229217529
- train_f1score_step: 0.8245896100997925
- train_cross_entropy_step: 0.480397492647171
- train_accuracy_epoch: 0.8297363519668579
- train_matthewscorrcoef_epoch: 0.7703952193260193
- train_f1score_epoch: 0.8274592757225037
- train_cross_entropy_epoch: 0.5001224875450134
- test_accuracy: 0.8441678881645203
- test_matthewscorrcoef: 0.7905130982398987
- test_f1score: 0.8435087203979492
- test_cross_entropy: 0.4557005763053894
- date_run: Apr-22-2022_t-14
- huggingface_tag: dmis-lab/biobert-v1.1
|
ml4pubmed/scibert-scivocab-uncased_pub_section | ba7656f774cdddca4bb441f903f7873afe25e9d6 | 2022-06-22T10:59:11.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:pubmed",
"transformers",
"document sections",
"sentence classification",
"document classification",
"medical",
"health",
"biomedical"
]
| text-classification | false | ml4pubmed | null | ml4pubmed/scibert-scivocab-uncased_pub_section | 12 | null | transformers | 10,755 | ---
language:
- en
datasets:
- pubmed
metrics:
- f1
pipeline_tag: text-classification
tags:
- text-classification
- document sections
- sentence classification
- document classification
- medical
- health
- biomedical
widget:
- text: "many pathogenic processes and diseases are the result of an erroneous activation of the complement cascade and a number of inhibitors of complement have thus been examined for anti-inflammatory actions."
example_title: "background example"
- text: "a total of 192 mi patients and 140 control persons were included."
example_title: "methods example"
- text: "mi patients had 18 % higher plasma levels of map44 (iqr 11-25 %) as compared to the healthy control group (p < 0. 001.)"
example_title: "results example"
- text: "the finding that a brief cb group intervention delivered by real-world providers significantly reduced mdd onset relative to both brochure control and bibliotherapy is very encouraging, although effects on continuous outcome measures were small or nonsignificant and approximately half the magnitude of those found in efficacy research, potentially because the present sample reported lower initial depression."
example_title: "conclusions example"
- text: "in order to understand and update the prevalence of myopia in taiwan, a nationwide survey was performed in 1995."
example_title: "objective example"
---
# scibert-scivocab-uncased_pub_section
- original model file name: textclassifer_scibert_scivocab_uncased_pubmed_full
- This is a fine-tuned checkpoint of `allenai/scibert_scivocab_uncased` for document section text classification
- possible document section classes are: BACKGROUND, CONCLUSIONS, METHODS, OBJECTIVE, RESULTS
## usage in python
install transformers as needed: `pip install -U transformers`
run the following, changing the example text to your use case:
```python
from transformers import pipeline
model_tag = "ml4pubmed/scibert-scivocab-uncased_pub_section"
classifier = pipeline(
'text-classification',
model=model_tag,
)
prompt = """
Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train.
"""
classifier(
prompt,
) # classify the sentence
```
## metadata
### training_metrics
- date_run: Apr-25-2022_t-03
- huggingface_tag: allenai/scibert_scivocab_uncased
### training_parameters
- date_run: Apr-25-2022_t-03
- huggingface_tag: allenai/scibert_scivocab_uncased
|
dkasti/distilbert-base-uncased-finetuned-emotion | 0f8b949ad83d90ff8cafb22a40a7fc79e458a763 | 2022-05-04T05:03:46.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | dkasti | null | dkasti/distilbert-base-uncased-finetuned-emotion | 12 | null | transformers | 10,756 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.9247463289719563
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2223
- Accuracy: 0.9245
- F1: 0.9247
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8296 | 1.0 | 250 | 0.3200 | 0.902 | 0.9002 |
| 0.2522 | 2.0 | 500 | 0.2223 | 0.9245 | 0.9247 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
cwkeam/mctct-large | c0fab5422e4bb621097c18bf96a1cd2bbc7048e0 | 2022-05-05T11:02:00.000Z | [
"pytorch",
"mctct",
"automatic-speech-recognition",
"en",
"dataset:librispeech_asr",
"dataset:common_voice",
"arxiv:2111.00161",
"transformers",
"speech",
"license:apache-2.0"
]
| automatic-speech-recognition | false | cwkeam | null | cwkeam/mctct-large | 12 | null | transformers | 10,757 | ---
language: en
datasets:
- librispeech_asr
- common_voice
tags:
- speech
license: apache-2.0
---
# M-CTC-T
Massively multilingual speech recognizer from Meta AI. The model is a 1B-param transformer encoder, with a CTC head over 8065 character labels and a language identification head over 60 language ID labels. It is trained on Common Voice (version 6.1, December 2020 release) and VoxPopuli. After training on Common Voice and VoxPopuli, the model is trained on Common Voice only. The labels are unnormalized character-level transcripts (punctuation and capitalization are not removed). The model takes as input Mel filterbank features from a 16Khz audio signal.

The original Flashlight code, model checkpoints, and Colab notebook can be found at https://github.com/flashlight/wav2letter/tree/main/recipes/mling_pl .
## Citation
[Paper](https://arxiv.org/abs/2111.00161)
Authors: Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, Ronan Collobert
```
@article{lugosch2021pseudo,
title={Pseudo-Labeling for Massively Multilingual Speech Recognition},
author={Lugosch, Loren and Likhomanenko, Tatiana and Synnaeve, Gabriel and Collobert, Ronan},
journal={ICASSP},
year={2022}
}
```
Additional thanks to [Chan Woo Kim](https://huggingface.co/cwkeam) and [Patrick von Platen](https://huggingface.co/patrickvonplaten) for porting the model from Flashlight to PyTorch.
# Training method
 TO-DO: replace with the training diagram from paper
For more information on how the model was trained, please take a look at the [official paper](https://arxiv.org/abs/2111.00161).
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import MCTCTForCTC, MCTCTProcessor
model = MCTCTForCTC.from_pretrained("speechbrain/mctct-large")
processor = MCTCTProcessor.from_pretrained("speechbrain/mctct-large")
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# tokenize
input_features = processor(ds[0]["audio"]["array"], return_tensors="pt").input_features
# retrieve logits
logits = model(input_features).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
Results for Common Voice, averaged over all languages:
*Character error rate (CER)*:
| Valid | Test |
|-------|------|
| 21.4 | 23.3 |
|
brjezierski/bert-finetuned-ner | 7f01546dbdc3df17a7febc2f69a89a3083aa5cc8 | 2022-05-06T21:10:12.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | brjezierski | null | brjezierski/bert-finetuned-ner | 12 | null | transformers | 10,758 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9340841338191455
- name: Recall
type: recall
value: 0.9491753618310333
- name: F1
type: f1
value: 0.9415692821368947
- name: Accuracy
type: accuracy
value: 0.9853858833225407
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0632
- Precision: 0.9341
- Recall: 0.9492
- F1: 0.9416
- Accuracy: 0.9854
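As a point of reference, a token-classification checkpoint like this one can usually be queried through the standard pipeline API; the snippet below is an untested sketch (the example sentence is arbitrary, and `aggregation_strategy` is an optional setting, not something specified by this card).
```python
from transformers import pipeline

# Untested sketch; aggregation_strategy="simple" merges sub-word tokens into entity spans.
ner = pipeline(
    "token-classification",
    model="brjezierski/bert-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```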
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0871 | 1.0 | 1756 | 0.0631 | 0.9221 | 0.9381 | 0.9300 | 0.9836 |
| 0.0406 | 2.0 | 3512 | 0.0619 | 0.9259 | 0.9490 | 0.9373 | 0.9849 |
| 0.0205 | 3.0 | 5268 | 0.0632 | 0.9341 | 0.9492 | 0.9416 | 0.9854 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
ChainYo/t5-base-sede-txt2sql | bf06838fc7182603f0a8609fe63abd60a9d478e6 | 2022-05-07T18:50:12.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:sede",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | ChainYo | null | ChainYo/t5-base-sede-txt2sql | 12 | null | transformers | 10,759 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- sede
model-index:
- name: t5-base-sede-txt2sql
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-sede-txt2sql
This model is a fine-tuned version of [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) on the sede dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1577
- Bleu Score: 0.5923
- Parsable Queries Accuracy: 0.0
- Partial Match F1: 0.0
- Partial Match F1 No Values: 0.0
- Partial Match Em: 0.0
- Partial Match No Values Em: 0.0
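The card does not document how the model expects its input to be formatted; the snippet below only shows a generic text2text-generation call for a T5 checkpoint — the natural-language question and the absence of any task prefix are assumptions, not details taken from the SEDE training setup.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Generic seq2seq inference sketch; the input formatting is an assumption.
model_id = "ChainYo/t5-base-sede-txt2sql"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

question = "How many users posted more than 100 answers?"
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```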
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu Score | Parsable Queries Accuracy | Partial Match F1 | Partial Match F1 No Values | Partial Match Em | Partial Match No Values Em |
|:-------------:|:-----:|:----:|:---------------:|:----------:|:-------------------------:|:----------------:|:--------------------------:|:----------------:|:--------------------------:|
| No log | 1.0 | 95 | 13.2410 | 0.0069 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 2.0 | 190 | 7.6317 | 0.0134 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 3.0 | 285 | 6.0919 | 0.0058 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 4.0 | 380 | 5.4922 | 0.0021 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 5.0 | 475 | 4.7151 | 0.0009 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 12.0698 | 6.0 | 570 | 4.1412 | 0.0003 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 12.0698 | 7.0 | 665 | 3.6398 | 0.0003 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 12.0698 | 8.0 | 760 | 3.2643 | 0.0009 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 12.0698 | 9.0 | 855 | 3.0544 | 0.0013 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 12.0698 | 10.0 | 950 | 2.8015 | 0.0043 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 4.696 | 11.0 | 1045 | 2.5552 | 0.0789 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 4.696 | 12.0 | 1140 | 2.3535 | 0.1036 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 4.696 | 13.0 | 1235 | 2.2132 | 0.0050 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 4.696 | 14.0 | 1330 | 2.1084 | 0.1333 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 4.696 | 15.0 | 1425 | 2.0117 | 0.2972 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 3.1348 | 16.0 | 1520 | 1.9333 | 0.2481 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 3.1348 | 17.0 | 1615 | 1.8395 | 0.4149 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 3.1348 | 18.0 | 1710 | 1.7661 | 0.5439 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 3.1348 | 19.0 | 1805 | 1.7101 | 0.6001 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 3.1348 | 20.0 | 1900 | 1.6562 | 0.6219 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 3.1348 | 21.0 | 1995 | 1.6073 | 0.5865 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.4276 | 22.0 | 2090 | 1.5773 | 0.5683 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.4276 | 23.0 | 2185 | 1.5478 | 0.5408 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.4276 | 24.0 | 2280 | 1.5190 | 0.5749 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.4276 | 25.0 | 2375 | 1.4927 | 0.5818 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.4276 | 26.0 | 2470 | 1.4671 | 0.5673 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.076 | 27.0 | 2565 | 1.4499 | 0.5616 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.076 | 28.0 | 2660 | 1.4275 | 0.6041 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.076 | 29.0 | 2755 | 1.4096 | 0.5764 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.076 | 30.0 | 2850 | 1.3983 | 0.5862 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.076 | 31.0 | 2945 | 1.3812 | 0.5982 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.8828 | 32.0 | 3040 | 1.3679 | 0.5927 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.8828 | 33.0 | 3135 | 1.3548 | 0.5916 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.8828 | 34.0 | 3230 | 1.3461 | 0.5769 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.8828 | 35.0 | 3325 | 1.3353 | 0.5871 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.8828 | 36.0 | 3420 | 1.3293 | 0.5687 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.7602 | 37.0 | 3515 | 1.3195 | 0.5689 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.7602 | 38.0 | 3610 | 1.3109 | 0.5949 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.7602 | 39.0 | 3705 | 1.3049 | 0.5619 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.7602 | 40.0 | 3800 | 1.2953 | 0.5872 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.7602 | 41.0 | 3895 | 1.2907 | 0.6014 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.7602 | 42.0 | 3990 | 1.2831 | 0.5917 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.6652 | 43.0 | 4085 | 1.2757 | 0.5718 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.6652 | 44.0 | 4180 | 1.2692 | 0.5707 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.6652 | 45.0 | 4275 | 1.2642 | 0.5758 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.6652 | 46.0 | 4370 | 1.2619 | 0.6012 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.6652 | 47.0 | 4465 | 1.2527 | 0.5749 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.6009 | 48.0 | 4560 | 1.2496 | 0.5722 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.6009 | 49.0 | 4655 | 1.2447 | 0.5633 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.6009 | 50.0 | 4750 | 1.2411 | 0.5615 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.6009 | 51.0 | 4845 | 1.2356 | 0.5691 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.6009 | 52.0 | 4940 | 1.2322 | 0.5636 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.5481 | 53.0 | 5035 | 1.2285 | 0.5724 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.5481 | 54.0 | 5130 | 1.2255 | 0.5771 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.5481 | 55.0 | 5225 | 1.2201 | 0.5827 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.5481 | 56.0 | 5320 | 1.2181 | 0.5928 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.5481 | 57.0 | 5415 | 1.2152 | 0.5599 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.5082 | 58.0 | 5510 | 1.2123 | 0.5779 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.5082 | 59.0 | 5605 | 1.2083 | 0.5609 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.5082 | 60.0 | 5700 | 1.2070 | 0.5654 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.5082 | 61.0 | 5795 | 1.2036 | 0.5566 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.5082 | 62.0 | 5890 | 1.2011 | 0.5569 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.5082 | 63.0 | 5985 | 1.1993 | 0.5567 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4799 | 64.0 | 6080 | 1.1958 | 0.5619 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4799 | 65.0 | 6175 | 1.1950 | 0.5691 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4799 | 66.0 | 6270 | 1.1914 | 0.5572 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4799 | 67.0 | 6365 | 1.1879 | 0.5635 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4799 | 68.0 | 6460 | 1.1866 | 0.5654 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4475 | 69.0 | 6555 | 1.1850 | 0.5575 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4475 | 70.0 | 6650 | 1.1833 | 0.5507 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4475 | 71.0 | 6745 | 1.1820 | 0.5493 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4475 | 72.0 | 6840 | 1.1786 | 0.5525 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4475 | 73.0 | 6935 | 1.1789 | 0.5615 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4233 | 74.0 | 7030 | 1.1770 | 0.5603 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4233 | 75.0 | 7125 | 1.1749 | 0.5699 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4233 | 76.0 | 7220 | 1.1754 | 0.5730 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4233 | 77.0 | 7315 | 1.1735 | 0.5798 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4233 | 78.0 | 7410 | 1.1716 | 0.5771 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4101 | 79.0 | 7505 | 1.1699 | 0.5800 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4101 | 80.0 | 7600 | 1.1675 | 0.5736 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4101 | 81.0 | 7695 | 1.1661 | 0.5845 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4101 | 82.0 | 7790 | 1.1659 | 0.5974 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4101 | 83.0 | 7885 | 1.1664 | 0.5825 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4101 | 84.0 | 7980 | 1.1647 | 0.5871 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3965 | 85.0 | 8075 | 1.1639 | 0.5772 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3965 | 86.0 | 8170 | 1.1628 | 0.5826 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3965 | 87.0 | 8265 | 1.1615 | 0.5960 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3965 | 88.0 | 8360 | 1.1616 | 0.5908 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3965 | 89.0 | 8455 | 1.1613 | 0.5775 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3835 | 90.0 | 8550 | 1.1604 | 0.5917 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3835 | 91.0 | 8645 | 1.1597 | 0.5732 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3835 | 92.0 | 8740 | 1.1594 | 0.5767 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3835 | 93.0 | 8835 | 1.1584 | 0.5719 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3835 | 94.0 | 8930 | 1.1581 | 0.5700 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3766 | 95.0 | 9025 | 1.1583 | 0.5845 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3766 | 96.0 | 9120 | 1.1578 | 0.5808 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3766 | 97.0 | 9215 | 1.1578 | 0.5889 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3766 | 98.0 | 9310 | 1.1577 | 0.5851 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3766 | 99.0 | 9405 | 1.1578 | 0.5923 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3726 | 100.0 | 9500 | 1.1577 | 0.5923 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Jeevesh8/bert_ft_qqp-0 | 17645bf0b6f171d517ac3e9a13f50eb1908b5b4d | 2022-05-07T12:10:57.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-0 | 12 | null | transformers | 10,760 | Entry not found |
theojolliffe/distilbart-cnn-arxiv-pubmed | ee16b09c909770f31a2a53f0eb5e150d839db3e4 | 2022-05-07T19:16:46.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"dataset:scientific_papers",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | theojolliffe | null | theojolliffe/distilbart-cnn-arxiv-pubmed | 12 | null | transformers | 10,761 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- scientific_papers
metrics:
- rouge
model-index:
- name: distilbart-cnn-12-6-finetuned-arxiv-pubmed
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: scientific_papers
type: scientific_papers
args: pubmed
metrics:
- name: Rouge1
type: rouge
value: 35.9398
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-cnn-12-6-finetuned-arxiv-pubmed
This model is a fine-tuned version of [theojolliffe/distilbart-cnn-12-6-finetuned-arxiv](https://huggingface.co/theojolliffe/distilbart-cnn-12-6-finetuned-arxiv) on the scientific_papers dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2214
- Rouge1: 35.9398
- Rouge2: 14.8037
- Rougel: 22.4263
- Rougelsum: 32.4106
- Gen Len: 135.5783
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 2.4342 | 1.0 | 7496 | 2.2214 | 35.9398 | 14.8037 | 22.4263 | 32.4106 | 135.5783 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
shibing624/bert4ner-base-uncased | a0011f0880da6a53d90fa1380b7ab45a7ee6944d | 2022-05-09T09:05:56.000Z | [
"pytorch",
"bert",
"token-classification",
"en",
"transformers",
"ner",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | false | shibing624 | null | shibing624/bert4ner-base-uncased | 12 | 1 | transformers | 10,762 | ---
language:
- en
tags:
- bert
- pytorch
- en
- ner
license: "apache-2.0"
---
# BERT for English Named Entity Recognition (bert4ner) Model
English named entity recognition model.
`bert4ner-base-uncased` evaluated on the CoNLL-2003 test data:
The overall performance of BERT on CoNLL-2003 **test**:
| | Accuracy | Recall | F1 |
| ------------ | ------------------ | ------------------ | ------------------ |
| BertSoftmax | 0.8956 | 0.9132 | 0.9043 |
This reaches near state-of-the-art performance on the CoNLL-2003 test set.
BertSoftmax uses the vanilla BERT network architecture.
This model is released as part of the open-source NER project [nerpy](https://github.com/shibing624/nerpy), which supports bert4ner models and can be invoked as follows:
#### English named entity recognition:
```python
>>> from nerpy import NERModel
>>> model = NERModel("bert", "shibing624/bert4ner-base-uncased")
>>> predictions, raw_outputs, entities = model.predict(["AL-AIN, United Arab Emirates 1996-12-06"], split_on_space=True)
entities: [('AL-AIN,', 'LOC'), ('United Arab Emirates', 'LOC')]
```
Model files:
```
bert4ner-base-uncased
├── config.json
├── model_args.json
├── pytorch_model.bin
├── special_tokens_map.json
├── tokenizer_config.json
└── vocab.txt
```
## Usage (HuggingFace Transformers)
Without [nerpy](https://github.com/shibing624/nerpy), you can use the model like this:
First, you pass your input through the transformer model, then you have to apply the BIO tags to get the entity words.
Install package:
```
pip install transformers seqeval
```
```python
import os
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification
from seqeval.metrics.sequence_labeling import get_entities
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("shibing624/bert4ner-base-uncased")
model = AutoModelForTokenClassification.from_pretrained("shibing624/bert4ner-base-uncased")
label_list = ["E-ORG", "E-LOC", "S-MISC", "I-MISC", "S-PER", "E-PER", "B-MISC", "O", "S-LOC",
"E-MISC", "B-ORG", "S-ORG", "I-ORG", "B-LOC", "I-LOC", "B-PER", "I-PER"]
sentence = "AL-AIN, United Arab Emirates 1996-12-06"
def get_entity(sentence):
tokens = tokenizer.tokenize(sentence)
inputs = tokenizer.encode(sentence, return_tensors="pt")
with torch.no_grad():
outputs = model(inputs).logits
predictions = torch.argmax(outputs, dim=2)
word_tags = [(token, label_list[prediction]) for token, prediction in zip(tokens, predictions[0].numpy()[1:-1])]
print(sentence)
print(word_tags)
pred_labels = [i[1] for i in word_tags]
entities = []
line_entities = get_entities(pred_labels)
for i in line_entities:
word = tokens[i[1]: i[2] + 1]
entity_type = i[0]
entities.append((word, entity_type))
print("Sentence entity:")
print(entities)
get_entity(sentence)
```
### Datasets
#### Named entity recognition datasets
| Dataset | Corpus | Download link | File size |
| :------- | :--------- | :---------: | :---------: |
| **`CNER Chinese NER dataset`** | CNER (120k characters) | [CNER github](https://github.com/shibing624/nerpy/tree/main/examples/data/cner)| 1.1MB |
| **`PEOPLE Chinese NER dataset`** | People's Daily dataset (2 million characters) | [PEOPLE github](https://github.com/shibing624/nerpy/tree/main/examples/data/people)| 12.8MB |
| **`CoNLL03 English NER dataset`** | CoNLL-2003 dataset (220k characters) | [CoNLL03 github](https://github.com/shibing624/nerpy/tree/main/examples/data/conll03)| 1.7MB |
### input format
Input format (BIOES tag scheme preferred): one token and its label per line. Sentences are separated by a blank line.
```text
EU S-ORG
rejects O
German S-MISC
call O
to O
boycott O
British S-MISC
lamb O
. O
Peter B-PER
Blackburn E-PER
```
To train bert4ner yourself, see [https://github.com/shibing624/nerpy/tree/main/examples](https://github.com/shibing624/nerpy/tree/main/examples)
## Citation
```latex
@software{nerpy,
author = {Xu Ming},
title = {nerpy: Named Entity Recognition toolkit},
year = {2022},
url = {https://github.com/shibing624/nerpy},
}
```
|
allermat/distilbert-base-uncased-finetuned-emotion | eec3d837edc52d4b2b7baeab3e3992df013286f4 | 2022-07-13T15:20:51.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | allermat | null | allermat/distilbert-base-uncased-finetuned-emotion | 12 | null | transformers | 10,763 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.9233300539962602
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2244
- Accuracy: 0.923
- F1: 0.9233
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8412 | 1.0 | 250 | 0.3186 | 0.904 | 0.9022 |
| 0.2501 | 2.0 | 500 | 0.2244 | 0.923 | 0.9233 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
JoanTirant/roberta-base-bne-finetuned-amazon_reviews_multi | 1a8e16e597c1b152bc8236ee10b420207ea21f26 | 2022-05-10T08:40:55.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"dataset:amazon_reviews_multi",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | JoanTirant | null | JoanTirant/roberta-base-bne-finetuned-amazon_reviews_multi | 12 | null | transformers | 10,764 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: roberta-base-bne-finetuned-amazon_reviews_multi
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.93425
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-amazon_reviews_multi
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2291
- Accuracy: 0.9343
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1909 | 1.0 | 1250 | 0.1784 | 0.9295 |
| 0.1013 | 2.0 | 2500 | 0.2291 | 0.9343 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
CEBaB/bert-base-uncased.CEBaB.sa.3-class.exclusive.seed_42 | 70ce805d2148f60f46aaa6fa6dc93146905741a2 | 2022-05-10T23:38:33.000Z | [
"pytorch",
"bert",
"transformers"
]
| null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.sa.3-class.exclusive.seed_42 | 12 | null | transformers | 10,765 | Entry not found |
CEBaB/gpt2.CEBaB.sa.5-class.exclusive.seed_42 | 9bb2e23db865006ea01e4e840de07e8c3f0e7bb4 | 2022-05-11T00:07:04.000Z | [
"pytorch",
"gpt2",
"transformers"
]
| null | false | CEBaB | null | CEBaB/gpt2.CEBaB.sa.5-class.exclusive.seed_42 | 12 | null | transformers | 10,766 | Entry not found |
CEBaB/bert-base-uncased.CEBaB.sa.2-class.exclusive.seed_66 | 6e3451c4e138d40221f290988582cf397eb3ab92 | 2022-05-11T00:13:38.000Z | [
"pytorch",
"bert",
"transformers"
]
| null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.sa.2-class.exclusive.seed_66 | 12 | null | transformers | 10,767 | Entry not found |
CEBaB/gpt2.CEBaB.sa.5-class.exclusive.seed_66 | d7f6b0eedff1e03a9f7f3b52652ef63f6c5d9d27 | 2022-05-11T00:58:51.000Z | [
"pytorch",
"gpt2",
"transformers"
]
| null | false | CEBaB | null | CEBaB/gpt2.CEBaB.sa.5-class.exclusive.seed_66 | 12 | null | transformers | 10,768 | Entry not found |
CEBaB/bert-base-uncased.CEBaB.sa.2-class.exclusive.seed_77 | 46655e6b2d35a744f50f618f191edfbe66cd6f5b | 2022-05-11T01:05:26.000Z | [
"pytorch",
"bert",
"transformers"
]
| null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.sa.2-class.exclusive.seed_77 | 12 | null | transformers | 10,769 | Entry not found |
CEBaB/gpt2.CEBaB.sa.5-class.exclusive.seed_77 | 2321dbbc4e4e8090ead9957138d46991da9299a9 | 2022-05-11T01:51:08.000Z | [
"pytorch",
"gpt2",
"transformers"
]
| null | false | CEBaB | null | CEBaB/gpt2.CEBaB.sa.5-class.exclusive.seed_77 | 12 | null | transformers | 10,770 | Entry not found |
CEBaB/bert-base-uncased.CEBaB.sa.2-class.exclusive.seed_88 | 00c99284efde48e92111b40026b7f51278f76323 | 2022-05-11T01:57:55.000Z | [
"pytorch",
"bert",
"transformers"
]
| null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.sa.2-class.exclusive.seed_88 | 12 | null | transformers | 10,771 | Entry not found |
CEBaB/bert-base-uncased.CEBaB.sa.2-class.exclusive.seed_99 | 960e21617611136cb71cc76ac148043ac82bff04 | 2022-05-11T02:49:21.000Z | [
"pytorch",
"bert",
"transformers"
]
| null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.sa.2-class.exclusive.seed_99 | 12 | null | transformers | 10,772 | Entry not found |
SalamaThanks/SalamaThanksTransformer_en2fil_v2 | 99452bf272a6ea72f0787db5a373984376419175 | 2022-05-11T05:58:25.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
]
| text2text-generation | false | SalamaThanks | null | SalamaThanks/SalamaThanksTransformer_en2fil_v2 | 12 | null | transformers | 10,773 | ---
license: afl-3.0
---
SalamaThanks Transformer for English-to-Filipino text translation, version 2.
A fine-tuned model based on the Helsinki-NLP/opus-mt-en-tl transformer.
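No usage example is provided; since the tags indicate a Marian architecture, a plain translation pipeline call along the following lines should work — this is an untested sketch, and the input sentence is arbitrary.
```python
from transformers import pipeline

# Untested sketch for English-to-Filipino translation with this checkpoint.
translator = pipeline(
    "translation",
    model="SalamaThanks/SalamaThanksTransformer_en2fil_v2",
)
print(translator("Good morning, how are you today?"))
```
|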
idsedykh/model1 | 1f2906fb6270afa48fb73afb00e1202def80040f | 2022-05-11T19:03:10.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | idsedykh | null | idsedykh/model1 | 12 | null | transformers | 10,774 | Entry not found |
eslamxm/mt5-base-finetuned-english-finetuned-english-arabic | c3a3fb4f6afac0be24667ddf4100e01b7294f5f0 | 2022-05-13T19:39:26.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"dataset:xlsum",
"transformers",
"summarization",
"arabic",
"ar",
"en",
"Abstractive Summarization",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| summarization | false | eslamxm | null | eslamxm/mt5-base-finetuned-english-finetuned-english-arabic | 12 | null | transformers | 10,775 | ---
license: apache-2.0
tags:
- summarization
- arabic
- ar
- en
- mt5
- Abstractive Summarization
- generated_from_trainer
datasets:
- xlsum
model-index:
- name: mt5-base-finetuned-english-finetuned-english-arabic
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-english-finetuned-english-arabic
This model is a fine-tuned version of [eslamxm/mt5-base-finetuned-english](https://huggingface.co/eslamxm/mt5-base-finetuned-english) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4788
- Rouge-1: 22.55
- Rouge-2: 9.84
- Rouge-l: 20.5
- Gen Len: 19.0
- Bertscore: 71.39
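No inference snippet is included in the card; the following is an illustrative sketch of abstractive summarization with this checkpoint — the input text, truncation length, and generation settings are placeholders, not values from the training run.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Illustrative sketch; input text and generation parameters are placeholders.
model_id = "eslamxm/mt5-base-finetuned-english-finetuned-english-arabic"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "..."  # Arabic news article to summarize
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```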
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-l | Gen Len | Bertscore |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:|
| 4.999 | 1.0 | 1172 | 3.9343 | 17.67 | 5.93 | 15.86 | 19.0 | 69.69 |
| 4.008 | 2.0 | 2344 | 3.6655 | 19.48 | 7.67 | 17.67 | 19.0 | 70.49 |
| 3.7463 | 3.0 | 3516 | 3.5503 | 20.47 | 8.24 | 18.6 | 19.0 | 70.86 |
| 3.5924 | 4.0 | 4688 | 3.4942 | 20.95 | 8.45 | 19.05 | 19.0 | 71.0 |
| 3.4979 | 5.0 | 5860 | 3.4788 | 21.34 | 8.75 | 19.39 | 19.0 | 71.11 |
### Framework versions
- Transformers 4.19.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
Leizhang/distilbert-base-uncased-finetuned-emotion | 3ed7f1d85960ea53ccfb1ea904c9e21f34630690 | 2022-05-14T20:55:21.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Leizhang | null | Leizhang/distilbert-base-uncased-finetuned-emotion | 12 | null | transformers | 10,776 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.12.1
|
importsmart/bert-to-distilbert-NER | 6c03e95e50b1ebc826685e8b6b949ae641d8755c | 2022-05-16T18:02:27.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| token-classification | false | importsmart | null | importsmart/bert-to-distilbert-NER | 12 | null | transformers | 10,777 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-to-distilbert-NER
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.014488935721812434
- name: Recall
type: recall
value: 0.018512285425782565
- name: F1
type: f1
value: 0.016255356878971478
- name: Accuracy
type: accuracy
value: 0.7597280273150055
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-to-distilbert-NER
This model is a fine-tuned version of [dslim/bert-base-NER](https://huggingface.co/dslim/bert-base-NER) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 44.0386
- Precision: 0.0145
- Recall: 0.0185
- F1: 0.0163
- Accuracy: 0.7597
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 201.4012 | 1.0 | 110 | 133.7231 | 0.0153 | 0.0106 | 0.0125 | 0.7539 |
| 106.9317 | 2.0 | 220 | 99.3629 | 0.0266 | 0.0305 | 0.0284 | 0.7593 |
| 81.3601 | 3.0 | 330 | 80.3763 | 0.0159 | 0.0214 | 0.0183 | 0.7604 |
| 63.8325 | 4.0 | 440 | 67.7620 | 0.0179 | 0.0244 | 0.0207 | 0.7599 |
| 52.0271 | 5.0 | 550 | 59.0806 | 0.0203 | 0.0268 | 0.0231 | 0.7598 |
| 44.4419 | 6.0 | 660 | 55.3208 | 0.0211 | 0.0278 | 0.0240 | 0.7603 |
| 39.2351 | 7.0 | 770 | 52.4510 | 0.0170 | 0.0222 | 0.0193 | 0.7598 |
| 35.3438 | 8.0 | 880 | 50.4576 | 0.0205 | 0.0268 | 0.0232 | 0.7604 |
| 32.7385 | 9.0 | 990 | 48.3418 | 0.0173 | 0.0227 | 0.0197 | 0.7595 |
| 30.6531 | 10.0 | 1100 | 46.7304 | 0.0147 | 0.0188 | 0.0165 | 0.7600 |
| 29.0811 | 11.0 | 1210 | 46.3386 | 0.0151 | 0.0190 | 0.0168 | 0.7599 |
| 27.9501 | 12.0 | 1320 | 45.4516 | 0.0163 | 0.0204 | 0.0181 | 0.7604 |
| 26.7452 | 13.0 | 1430 | 44.3425 | 0.0154 | 0.0199 | 0.0173 | 0.7592 |
| 25.5367 | 14.0 | 1540 | 44.0415 | 0.0146 | 0.0190 | 0.0165 | 0.7594 |
| 24.5507 | 15.0 | 1650 | 44.0386 | 0.0145 | 0.0185 | 0.0163 | 0.7597 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
huggingtweets/cryptanime | 02fdbdeffcf7bb1c1b501111f13c8cac2360b86a | 2022-05-17T06:54:30.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/cryptanime | 12 | null | transformers | 10,778 | ---
language: en
thumbnail: http://www.huggingtweets.com/cryptanime/1652770465803/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1525172827644743680/8mskmqwq_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">CryptanimeNFT | Minting Now</div>
<div style="text-align: center; font-size: 14px;">@cryptanime</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from CryptanimeNFT | Minting Now.
| Data | CryptanimeNFT \| Minting Now |
| --- | --- |
| Tweets downloaded | 491 |
| Retweets | 96 |
| Short tweets | 15 |
| Tweets kept | 380 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2066dfxu/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cryptanime's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2byq9c2t) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2byq9c2t/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/cryptanime')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
CEBaB/bert-base-uncased.CEBaB.absa.exclusive.seed_88 | c79f59439e487f91d658df8885f5acf662292048 | 2022-05-17T18:57:57.000Z | [
"pytorch",
"bert",
"transformers"
]
| null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.absa.exclusive.seed_88 | 12 | null | transformers | 10,779 | Entry not found |
CEBaB/bert-base-uncased.CEBaB.absa.exclusive.seed_99 | 64b6fb17e67b411cff4fceea3276b71aa68f5cbd | 2022-05-17T19:02:40.000Z | [
"pytorch",
"bert",
"transformers"
]
| null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.absa.exclusive.seed_99 | 12 | null | transformers | 10,780 | Entry not found |
NFflow/healthcare_27.03.2021-27.03.2022_redditflow | e3bafb55f51bc5e44eb63b548524d83244f803d4 | 2022-05-21T06:41:02.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
]
| sentence-similarity | false | NFflow | null | NFflow/healthcare_27.03.2021-27.03.2022_redditflow | 12 | null | sentence-transformers | 10,781 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
inference: false
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.losses.ContrastiveTensionLoss.ContrastiveTensionDataLoader` of length 542 with parameters:
```
{'batch_size': 16}
```
**Loss**:
`sentence_transformers.losses.ContrastiveTensionLoss.ContrastiveTensionLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 100000,
"warmup_steps": 55,
"weight_decay": 0
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
huggingtweets/vgdunkey | a9998cf6d149d16c71bc5d7947868b467f79c2e3 | 2022-07-23T05:14:07.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/vgdunkey | 12 | null | transformers | 10,782 | ---
language: en
thumbnail: http://www.huggingtweets.com/vgdunkey/1658553242358/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/676614171849453568/AZd1Bh-s_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">dunkey</div>
<div style="text-align: center; font-size: 14px;">@vgdunkey</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from dunkey.
| Data | dunkey |
| --- | --- |
| Tweets downloaded | 1283 |
| Retweets | 147 |
| Short tweets | 327 |
| Tweets kept | 809 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/bri0i7s5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @vgdunkey's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/o4oh6dvl) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/o4oh6dvl/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/vgdunkey')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ericklerouge123/distilbert-base-uncased-finetuned-emotion | f451c519a6c91b43ac7977bec79013c614e18eeb | 2022-05-20T20:35:39.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | ericklerouge123 | null | ericklerouge123/distilbert-base-uncased-finetuned-emotion | 12 | null | transformers | 10,783 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
stplgg/distilbert-base-uncased-finetuned-emotion | 8cc0bf41d29423710a59428c18cf27089850dbdf | 2022-05-20T15:12:48.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | stplgg | null | stplgg/distilbert-base-uncased-finetuned-emotion | 12 | null | transformers | 10,784 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.9230160877762784
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2229
- Accuracy: 0.923
- F1: 0.9230
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8655 | 1.0 | 250 | 0.3228 | 0.907 | 0.9038 |
| 0.2625 | 2.0 | 500 | 0.2229 | 0.923 | 0.9230 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
connectivity/bert_ft_qqp-22 | 1392e2961a6df2515d11065880ef420f163f48ae | 2022-05-21T16:32:43.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | connectivity | null | connectivity/bert_ft_qqp-22 | 12 | null | transformers | 10,785 | Entry not found |
connectivity/bert_ft_qqp-98 | c77f676310d9736076867ba6c4472055be9224ef | 2022-05-21T16:38:29.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | connectivity | null | connectivity/bert_ft_qqp-98 | 12 | null | transformers | 10,786 | Entry not found |
RaphaelReinauer/mbart50-finetuned-multi30-en-to-de | 8f79a72f046575790c31ce33c2bd00070fccc4b1 | 2022-05-23T22:42:15.000Z | [
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"transformers",
"translation",
"model-index",
"autotrain_compatible"
]
| translation | false | RaphaelReinauer | null | RaphaelReinauer/mbart50-finetuned-multi30-en-to-de | 12 | null | transformers | 10,787 | ---
tags:
- translation
metrics:
- bleu
model-index:
- name: mbart50-finetuned-multi30-en-to-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart50-finetuned-multi30-en-to-de
This model is a fine-tuned version of [facebook/mbart-large-50-one-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-one-to-many-mmt) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5946
- Bleu: 48.2650
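The card does not show how to run the checkpoint. A minimal sketch follows; it assumes the repository is public and that the tokenizer keeps the usual mBART-50 language codes (`en_XX` source, `de_DE` target), which is an assumption since no usage section is provided:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "RaphaelReinauer/mbart50-finetuned-multi30-en-to-de"  # assumed to be public
tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang="en_XX")
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("A man is riding a bike down the street.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["de_DE"],  # force German output
    max_length=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```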
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu113
- Datasets 2.2.2
- Tokenizers 0.10.3
|
krotima1/mbart-ht2a-cs | 8b761742bd3b2346e5198e444e3665f2fd5c6c66 | 2022-05-26T12:59:01.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"cs",
"dataset:private Czech News Center dataset news-based",
"dataset:SumeCzech dataset news-based",
"transformers",
"Summarization",
"abstractive summarization",
"mbart-cc25",
"Czech",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | krotima1 | null | krotima1/mbart-ht2a-cs | 12 | null | transformers | 10,788 | ---
language:
- cs
- cs
tags:
- Summarization
- abstractive summarization
- mbart-cc25
- Czech
license: apache-2.0
datasets:
- private Czech News Center dataset news-based
- SumeCzech dataset news-based
metrics:
- rouge
- rougeraw
---
# mBART fine-tuned model for Czech abstractive summarization (HT2A-CS)
This model is a fine-tuned checkpoint of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on the Czech news dataset to produce Czech abstractive summaries.
## Task
The model addresses the task ``Headline + Text to Abstract`` (HT2A), which consists of generating a multi-sentence summary (an abstract) from the headline and full text of a Czech news article.
## Dataset
The model has been trained on a large Czech news dataset created by concatenating two datasets: the private CNC dataset provided by Czech News Center and the [SumeCzech](https://ufal.mff.cuni.cz/sumeczech) dataset. The combined dataset includes around 1.75M Czech news documents, each consisting of Headline, Abstract, and Full-text sections. Truncation and padding were set to 512 tokens for the encoder and 128 tokens for the decoder.
## Training
The model has been trained on 1x NVIDIA Tesla A100 40GB for 60 hours and 4x NVIDIA Tesla A100 40GB for 40 hours. During training, the model has seen 12896K documents corresponding to roughly 8.4 epochs.
# Use
Assuming that you are using the provided Summarizer.ipynb file.
```python
def summ_config():
cfg = OrderedDict([
# summarization model - checkpoint from website
("model_name", "krotima1/mbart-ht2a-cs"),
("inference_cfg", OrderedDict([
("num_beams", 4),
("top_k", 40),
("top_p", 0.92),
("do_sample", True),
("temperature", 0.89),
("repetition_penalty", 1.2),
("no_repeat_ngram_size", None),
("early_stopping", True),
("max_length", 128),
("min_length", 10),
])),
#texts to summarize
("text",
[
"Input your Czech text",
]
),
])
return cfg
cfg = summ_config()
#load model
model = AutoModelForSeq2SeqLM.from_pretrained(cfg["model_name"])
tokenizer = AutoTokenizer.from_pretrained(cfg["model_name"])
# init summarizer
summarize = Summarizer(model, tokenizer, cfg["inference_cfg"])
summarize(cfg["text"])
``` |
fabraz/distilbert-base-uncased-finetunned-emotion | 84b2dd3b38d87acf34730acefe4999985021c7ec | 2022-05-23T18:39:31.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | fabraz | null | fabraz/distilbert-base-uncased-finetunned-emotion | 12 | null | transformers | 10,789 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetunned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9285
- name: F1
type: f1
value: 0.9284132954244212
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetunned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2102
- Accuracy: 0.9285
- F1: 0.9284
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8258 | 1.0 | 250 | 0.3023 | 0.9065 | 0.9037 |
| 0.2414 | 2.0 | 500 | 0.2102 | 0.9285 | 0.9284 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Rgl73/distilbert-base-uncased-finetuned-emotion | 423b86c06f6c5ea1b3e4055219aae26b49eca19a | 2022-06-05T10:40:49.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Rgl73 | null | Rgl73/distilbert-base-uncased-finetuned-emotion | 12 | null | transformers | 10,790 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9215
- name: F1
type: f1
value: 0.9216592887159751
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2256
- Accuracy: 0.9215
- F1: 0.9217
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8556 | 1.0 | 250 | 0.3246 | 0.9075 | 0.9044 |
| 0.2562 | 2.0 | 500 | 0.2256 | 0.9215 | 0.9217 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
sexomq/DialoGPT-medium-TeoBot | e4e710e758eadf51f6eeb62f8f5777195ba28efe | 2022-05-23T20:26:34.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | sexomq | null | sexomq/DialoGPT-medium-TeoBot | 12 | 1 | transformers | 10,791 | ---
tags:
- conversational
--- |
pkumc/distilbert-base-uncased-finetuned-cola | 7ccea95d005eeb78d71d2c95c54927e5e5d97925 | 2022-05-24T11:43:04.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | pkumc | null | pkumc/distilbert-base-uncased-finetuned-cola | 12 | null | transformers | 10,792 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
model-index:
- name: distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.5175
- eval_matthews_correlation: 0.4847
- eval_runtime: 31.1926
- eval_samples_per_second: 33.437
- eval_steps_per_second: 2.116
- epoch: 2.01
- step: 1073
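No inference example is given in the card. A minimal sketch (assuming the checkpoint is public under this ID and follows the usual CoLA convention where one label means "unacceptable" and the other "acceptable" — the label names are not documented here) could look like:

```python
from transformers import pipeline

# CoLA-style acceptability check; label naming is an assumption, not documented in the card.
classifier = pipeline(
    "text-classification",
    model="pkumc/distilbert-base-uncased-finetuned-cola",
)
print(classifier("The book was written by the author."))   # likely acceptable
print(classifier("The book was wrote by author the."))     # likely unacceptable
```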
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
joaobarroca/distilbert-base-uncased-finetuned-massive-intent-detection-english | 3b68440a34957a9ccdef5aa07f9f9becb6485b20 | 2022-05-24T17:12:14.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:massive",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | joaobarroca | null | joaobarroca/distilbert-base-uncased-finetuned-massive-intent-detection-english | 12 | null | transformers | 10,793 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- massive
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-massive-intent-detection-english
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: massive
type: massive
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.886684599865501
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-massive-intent-detection-english
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the massive dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4873
- Accuracy: 0.8867
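As with the other auto-generated cards in this dump, no usage snippet is provided. A rough sketch (assuming the checkpoint is public and the mapping from label IDs to MASSIVE intent names is stored in the model config):

```python
from transformers import pipeline

# Intent detection on an English utterance (MASSIVE en-US split).
intent_classifier = pipeline(
    "text-classification",
    model="joaobarroca/distilbert-base-uncased-finetuned-massive-intent-detection-english",
)
print(intent_classifier("wake me up at seven tomorrow morning"))
```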
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.5849 | 1.0 | 360 | 1.3826 | 0.7359 |
| 1.0662 | 2.0 | 720 | 0.7454 | 0.8357 |
| 0.5947 | 3.0 | 1080 | 0.5668 | 0.8642 |
| 0.3824 | 4.0 | 1440 | 0.5007 | 0.8770 |
| 0.2649 | 5.0 | 1800 | 0.4829 | 0.8824 |
| 0.1877 | 6.0 | 2160 | 0.4843 | 0.8824 |
| 0.1377 | 7.0 | 2520 | 0.4858 | 0.8834 |
| 0.1067 | 8.0 | 2880 | 0.4924 | 0.8864 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
usama98/arabic_poem_gen | 5de15a71dc14fd4436aafaedf74953c4617b030d | 2022-05-31T16:55:59.000Z | [
"pytorch",
"gpt2",
"text-generation",
"ar",
"dataset:Arabic Poem Comprehensive Dataset (APCD)",
"transformers",
"license:apache-2.0"
]
| text-generation | false | usama98 | null | usama98/arabic_poem_gen | 12 | null | transformers | 10,794 |
---
language:
- ar
tags:
- text-generation
license: apache-2.0
datasets:
- Arabic Poem Comprehensive Dataset (APCD)
widget:
- text: "عمرو بنِ قُمَيئَة: خَليلَيَّ لا تَستَعجِلا أَن"
---
# GPTPoet: Pre-training GPT2 for Arabic Poetry Language Understanding
<img src="https://huggingface.co/usama98/arabic_poem_gen/resolve/main/6C76C5D6-A4F2-4443-AB2A-278E87B8E33C.png" width="100" align="left"/>
**GPTPoet** is an Arabic pretrained language model based on the [OpenAI GPT2 architecture](https://github.com/openai/gpt-2). We use the same GPT2-Base config. More details are available in the [Google Colab notebook](https://colab.research.google.com/drive/1kByhyhvA0JUZRKL-XCG0ZEDyAg45w8AW?usp=sharing).
To save computation time, the model was initialized with pretrained weights from another [model](https://huggingface.co/elgeish/gpt2-medium-arabic-poetry). This allowed us to fine-tune on our specific dataset, which to our knowledge had never been used in an NLP task before.
This is a poem generator that creates poems in the style of a targeted poet. The model was trained on different poets and their respective poems; its input is the poet's name together with an opening prompt, and it strives to generate verse that imitates the style of that specific poet.
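A short generation sketch based on the description above; the prompt format (poet name, a colon, then the opening words) follows the card's widget example, and the sampling parameters are illustrative rather than taken from the card:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="usama98/arabic_poem_gen")

# "<poet name>: <opening of the verse>" — format taken from the widget in the frontmatter
prompt = "عمرو بنِ قُمَيئَة: خَليلَيَّ لا تَستَعجِلا أَن"
result = generator(prompt, max_length=64, do_sample=True, top_p=0.92, temperature=0.9)
print(result[0]["generated_text"])
```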
#
## What's New!
All models are available on the `HuggingFace` model hub under the [usama98](https://huggingface.co/usama98/) name. Checkpoints are available in PyTorch.
Our model adds a newly attempted capability for NLP models: rather than simply generating text, it generates text that imitates a specific style. Our dataset contains poetry gathered from different poets; the data was fed to the model during training with the aim of teaching it how to structure Arabic poetry. The additional step here was to prepend the poet's name to each training example. This training strategy lets the model learn not only how to write poetry, but also how the written poetry relates to a specific poet and their style.
# Dataset
The dataset consists of content scraped mainly from الموسوعة الشعرية and الديوان. After merging both, the total comes to 1,831,770 poetic verses. Each verse is labeled with its meter, the poet who wrote it, and the age in which it was written. There are 22 meters, 3701 poets and 11 ages: Pre-Islamic, Islamic, Umayyad, Mamluk, Abbasid, Ayyubid, Ottoman, Andalusian, the era between Umayyad and Abbasid, Fatimid, and finally the modern age. We are only interested in the 16 classic meters attributed to Al-Farahidi, which comprise the majority of the dataset with a total of around 1.7M verses. It is important to note that the diacritic state of the verses is not consistent: a verse can carry full diacritics, partial diacritics, or none at all.
- [APCD](https://hci-lab.github.io/LearningMetersPoems/#PCD)
# Preprocessing
It is recommended to apply our preprocessing tokenizer before training/testing on any dataset.
# Contacts
**Usama Zidan**: [Linkedin](https://huggingface.co/elgeish/gpt2-medium-arabic-poetry) | [Github](https://github.com/usama13o) | <[email protected]> | <[email protected]>
|
arcAman07/distilbert-base-uncased-finetuned-emotion | 9f262d260a97df09580f1a20425a410e1510c1ab | 2022-05-25T17:08:01.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | arcAman07 | null | arcAman07/distilbert-base-uncased-finetuned-emotion | 12 | null | transformers | 10,795 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.9240598378254522
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2222
- Accuracy: 0.924
- F1: 0.9241
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8294 | 1.0 | 250 | 0.3209 | 0.9025 | 0.9001 |
| 0.2536 | 2.0 | 500 | 0.2222 | 0.924 | 0.9241 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
tbosse/bert-base-german-cased-finetuned-subj_v6_7Epoch_v2 | 2c3bc64c72fe2d0f98cc3a7c910cdde0bae5a68b | 2022-05-25T17:48:28.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| token-classification | false | tbosse | null | tbosse/bert-base-german-cased-finetuned-subj_v6_7Epoch_v2 | 12 | null | transformers | 10,796 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-german-cased-finetuned-subj_v6_7Epoch_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-german-cased-finetuned-subj_v6_7Epoch_v2
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2860
- Precision: 0.7623
- Recall: 0.7514
- F1: 0.7568
- Accuracy: 0.9061
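The description sections of this card are empty. As a rough sketch (the label set of this subjectivity tagger is not documented, so the output labels are whatever the checkpoint's config defines):

```python
from transformers import pipeline

# Token-level tagging; "simple" aggregation merges word pieces back into word spans.
tagger = pipeline(
    "token-classification",
    model="tbosse/bert-base-german-cased-finetuned-subj_v6_7Epoch_v2",
    aggregation_strategy="simple",
)
print(tagger("Der Film war meiner Meinung nach absolut großartig."))
```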
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 33 | 0.3344 | 0.6846 | 0.5829 | 0.6296 | 0.8635 |
| No log | 2.0 | 66 | 0.2659 | 0.7335 | 0.7 | 0.7164 | 0.8929 |
| No log | 3.0 | 99 | 0.2490 | 0.7493 | 0.7514 | 0.7504 | 0.9090 |
| No log | 4.0 | 132 | 0.2470 | 0.7676 | 0.7457 | 0.7565 | 0.9067 |
| No log | 5.0 | 165 | 0.2669 | 0.7514 | 0.7514 | 0.7514 | 0.9044 |
| No log | 6.0 | 198 | 0.2792 | 0.7564 | 0.7543 | 0.7554 | 0.9067 |
| No log | 7.0 | 231 | 0.2860 | 0.7623 | 0.7514 | 0.7568 | 0.9061 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
tbosse/bert-base-german-cased-finetuned-subj_v6_7Epoch_v3 | 73b1febd1e8d1c7b1cabd6a445e8100c0553daaf | 2022-05-25T19:01:02.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| token-classification | false | tbosse | null | tbosse/bert-base-german-cased-finetuned-subj_v6_7Epoch_v3 | 12 | null | transformers | 10,797 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-german-cased-finetuned-subj_v6_7Epoch_v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-german-cased-finetuned-subj_v6_7Epoch_v3
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2732
- Precision: 0.7654
- Recall: 0.7829
- F1: 0.7740
- Accuracy: 0.9119
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 33 | 0.3281 | 0.6656 | 0.5914 | 0.6263 | 0.8623 |
| No log | 2.0 | 66 | 0.2623 | 0.7440 | 0.7057 | 0.7243 | 0.8940 |
| No log | 3.0 | 99 | 0.2460 | 0.7536 | 0.7514 | 0.7525 | 0.9067 |
| No log | 4.0 | 132 | 0.2440 | 0.7778 | 0.76 | 0.7688 | 0.9124 |
| No log | 5.0 | 165 | 0.2582 | 0.7723 | 0.7657 | 0.7690 | 0.9107 |
| No log | 6.0 | 198 | 0.2681 | 0.7690 | 0.78 | 0.7745 | 0.9119 |
| No log | 7.0 | 231 | 0.2732 | 0.7654 | 0.7829 | 0.7740 | 0.9119 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
sayanmandal/t5-small_6_3-hi_en-to-en | 75cb22e308720d322134d6e89959a45a56220262 | 2022-05-26T11:32:32.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:cmu_hinglish_dog",
"transformers",
"translation",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| translation | false | sayanmandal | null | sayanmandal/t5-small_6_3-hi_en-to-en | 12 | 0 | transformers | 10,798 | ---
tags:
- translation
- generated_from_trainer
datasets:
- cmu_hinglish_dog
metrics:
- bleu
model-index:
- name: t5-small_6_3-hi_en-to-en
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cmu_hinglish_dog
type: cmu_hinglish_dog
args: hi_en-en
metrics:
- name: Bleu
type: bleu
value: 18.0863
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small_6_3-hi_en-to-en
This model was trained from scratch on the cmu_hinglish_dog dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3662
- Bleu: 18.0863
- Gen Len: 15.2708
## Model description
Model generated using:<br />
```python make_student.py t5-small t5_small_6_3 6 3```<br />
Check this [link](https://discuss.huggingface.co/t/questions-on-distilling-from-t5/1193/9) for more information.
## Intended uses & limitations
More information needed
## Training and evaluation data
We used the cmu_hinglish_dog dataset. Please check this [link](https://huggingface.co/datasets/cmu_hinglish_dog) for the dataset description.
## Translation:
* Source: hi_en: The text in Hinglish
* Target: en: The text in English
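Putting the pieces above together, a minimal inference sketch follows. It is not documented whether the training script prepended a task prefix, so the raw Hinglish sentence is fed directly — treat that, and the example sentence itself, as assumptions:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "sayanmandal/t5-small_6_3-hi_en-to-en"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical Hinglish input, not taken from the CMU Hinglish DoG data
text = "mujhe yeh movie bahut pasand aayi"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, num_beams=4, max_length=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```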
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 126 | 3.0601 | 4.7146 | 11.9904 |
| No log | 2.0 | 252 | 2.8885 | 5.9584 | 12.3418 |
| No log | 3.0 | 378 | 2.7914 | 6.649 | 12.3758 |
| 3.4671 | 4.0 | 504 | 2.7347 | 7.3305 | 12.3854 |
| 3.4671 | 5.0 | 630 | 2.6832 | 8.3132 | 12.4268 |
| 3.4671 | 6.0 | 756 | 2.6485 | 8.339 | 12.3641 |
| 3.4671 | 7.0 | 882 | 2.6096 | 8.7269 | 12.414 |
| 3.0208 | 8.0 | 1008 | 2.5814 | 9.2163 | 12.2675 |
| 3.0208 | 9.0 | 1134 | 2.5542 | 9.448 | 12.3875 |
| 3.0208 | 10.0 | 1260 | 2.5339 | 9.9011 | 12.4321 |
| 3.0208 | 11.0 | 1386 | 2.5043 | 9.7529 | 12.5149 |
| 2.834 | 12.0 | 1512 | 2.4848 | 9.9606 | 12.4193 |
| 2.834 | 13.0 | 1638 | 2.4737 | 9.9368 | 12.3673 |
| 2.834 | 14.0 | 1764 | 2.4458 | 10.3182 | 12.4352 |
| 2.834 | 15.0 | 1890 | 2.4332 | 10.486 | 12.4671 |
| 2.7065 | 16.0 | 2016 | 2.4239 | 10.6921 | 12.414 |
| 2.7065 | 17.0 | 2142 | 2.4064 | 10.7426 | 12.4607 |
| 2.7065 | 18.0 | 2268 | 2.3941 | 11.0509 | 12.4087 |
| 2.7065 | 19.0 | 2394 | 2.3826 | 11.2407 | 12.3386 |
| 2.603 | 20.0 | 2520 | 2.3658 | 11.3711 | 12.3992 |
| 2.603 | 21.0 | 2646 | 2.3537 | 11.42 | 12.5032 |
| 2.603 | 22.0 | 2772 | 2.3475 | 12.0665 | 12.5074 |
| 2.603 | 23.0 | 2898 | 2.3398 | 12.0343 | 12.4342 |
| 2.5192 | 24.0 | 3024 | 2.3298 | 12.1011 | 12.5096 |
| 2.5192 | 25.0 | 3150 | 2.3216 | 12.2562 | 12.4809 |
| 2.5192 | 26.0 | 3276 | 2.3131 | 12.4585 | 12.4427 |
| 2.5192 | 27.0 | 3402 | 2.3052 | 12.7094 | 12.534 |
| 2.4445 | 28.0 | 3528 | 2.2984 | 12.7432 | 12.5053 |
| 2.4445 | 29.0 | 3654 | 2.2920 | 12.8409 | 12.4501 |
| 2.4445 | 30.0 | 3780 | 2.2869 | 12.6365 | 12.4936 |
| 2.4445 | 31.0 | 3906 | 2.2777 | 12.8523 | 12.5234 |
| 2.3844 | 32.0 | 4032 | 2.2788 | 12.9216 | 12.4204 |
| 2.3844 | 33.0 | 4158 | 2.2710 | 12.9568 | 12.5064 |
| 2.3844 | 34.0 | 4284 | 2.2643 | 12.9641 | 12.4299 |
| 2.3844 | 35.0 | 4410 | 2.2621 | 12.9787 | 12.448 |
| 2.3282 | 36.0 | 4536 | 2.2554 | 13.1264 | 12.4374 |
| 2.3282 | 37.0 | 4662 | 2.2481 | 13.1853 | 12.4416 |
| 2.3282 | 38.0 | 4788 | 2.2477 | 13.3259 | 12.4119 |
| 2.3282 | 39.0 | 4914 | 2.2448 | 13.2017 | 12.4278 |
| 2.2842 | 40.0 | 5040 | 2.2402 | 13.3772 | 12.4437 |
| 2.2842 | 41.0 | 5166 | 2.2373 | 13.2184 | 12.414 |
| 2.2842 | 42.0 | 5292 | 2.2357 | 13.5267 | 12.4342 |
| 2.2842 | 43.0 | 5418 | 2.2310 | 13.5754 | 12.4087 |
| 2.2388 | 44.0 | 5544 | 2.2244 | 13.653 | 12.4427 |
| 2.2388 | 45.0 | 5670 | 2.2243 | 13.6028 | 12.431 |
| 2.2388 | 46.0 | 5796 | 2.2216 | 13.7128 | 12.4151 |
| 2.2388 | 47.0 | 5922 | 2.2231 | 13.749 | 12.4172 |
| 2.2067 | 48.0 | 6048 | 2.2196 | 13.7256 | 12.4034 |
| 2.2067 | 49.0 | 6174 | 2.2125 | 13.8237 | 12.396 |
| 2.2067 | 50.0 | 6300 | 2.2131 | 13.6642 | 12.4416 |
| 2.2067 | 51.0 | 6426 | 2.2115 | 13.8876 | 12.4119 |
| 2.1688 | 52.0 | 6552 | 2.2091 | 14.0323 | 12.4639 |
| 2.1688 | 53.0 | 6678 | 2.2082 | 13.916 | 12.3843 |
| 2.1688 | 54.0 | 6804 | 2.2071 | 13.924 | 12.3758 |
| 2.1688 | 55.0 | 6930 | 2.2046 | 13.9563 | 12.4416 |
| 2.1401 | 56.0 | 7056 | 2.2020 | 14.0592 | 12.483 |
| 2.1401 | 57.0 | 7182 | 2.2047 | 13.8879 | 12.4076 |
| 2.1401 | 58.0 | 7308 | 2.2018 | 13.9267 | 12.3949 |
| 2.1401 | 59.0 | 7434 | 2.1964 | 14.0518 | 12.4363 |
| 2.1092 | 60.0 | 7560 | 2.1926 | 14.1518 | 12.4883 |
| 2.1092 | 61.0 | 7686 | 2.1972 | 14.132 | 12.4034 |
| 2.1092 | 62.0 | 7812 | 2.1939 | 14.2066 | 12.4151 |
| 2.1092 | 63.0 | 7938 | 2.1905 | 14.2923 | 12.4459 |
| 2.0932 | 64.0 | 8064 | 2.1932 | 14.2476 | 12.3418 |
| 2.0932 | 65.0 | 8190 | 2.1925 | 14.2057 | 12.3907 |
| 2.0932 | 66.0 | 8316 | 2.1906 | 14.2978 | 12.4055 |
| 2.0932 | 67.0 | 8442 | 2.1903 | 14.3276 | 12.4427 |
| 2.0706 | 68.0 | 8568 | 2.1918 | 14.4681 | 12.4034 |
| 2.0706 | 69.0 | 8694 | 2.1882 | 14.3751 | 12.4225 |
| 2.0706 | 70.0 | 8820 | 2.1870 | 14.5904 | 12.4204 |
| 2.0706 | 71.0 | 8946 | 2.1865 | 14.6409 | 12.4512 |
| 2.0517 | 72.0 | 9072 | 2.1831 | 14.6505 | 12.4352 |
| 2.0517 | 73.0 | 9198 | 2.1835 | 14.7485 | 12.4363 |
| 2.0517 | 74.0 | 9324 | 2.1824 | 14.7344 | 12.4586 |
| 2.0517 | 75.0 | 9450 | 2.1829 | 14.8097 | 12.4575 |
| 2.0388 | 76.0 | 9576 | 2.1822 | 14.6681 | 12.4108 |
| 2.0388 | 77.0 | 9702 | 2.1823 | 14.6421 | 12.4342 |
| 2.0388 | 78.0 | 9828 | 2.1816 | 14.7014 | 12.4459 |
| 2.0388 | 79.0 | 9954 | 2.1810 | 14.744 | 12.4565 |
| 2.0224 | 80.0 | 10080 | 2.1839 | 14.7889 | 12.4437 |
| 2.0224 | 81.0 | 10206 | 2.1793 | 14.802 | 12.4565 |
| 2.0224 | 82.0 | 10332 | 2.1776 | 14.7702 | 12.4214 |
| 2.0224 | 83.0 | 10458 | 2.1809 | 14.6772 | 12.4236 |
| 2.0115 | 84.0 | 10584 | 2.1786 | 14.709 | 12.4214 |
| 2.0115 | 85.0 | 10710 | 2.1805 | 14.7693 | 12.3981 |
| 2.0115 | 86.0 | 10836 | 2.1790 | 14.7628 | 12.4172 |
| 2.0115 | 87.0 | 10962 | 2.1785 | 14.7538 | 12.3992 |
| 2.0007 | 88.0 | 11088 | 2.1788 | 14.7493 | 12.3726 |
| 2.0007 | 89.0 | 11214 | 2.1788 | 14.8793 | 12.4045 |
| 2.0007 | 90.0 | 11340 | 2.1786 | 14.8318 | 12.3747 |
| 2.0007 | 91.0 | 11466 | 2.1769 | 14.8061 | 12.4013 |
| 1.9967 | 92.0 | 11592 | 2.1757 | 14.8108 | 12.3843 |
| 1.9967 | 93.0 | 11718 | 2.1747 | 14.8036 | 12.379 |
| 1.9967 | 94.0 | 11844 | 2.1764 | 14.7447 | 12.3737 |
| 1.9967 | 95.0 | 11970 | 2.1759 | 14.7759 | 12.3875 |
| 1.9924 | 96.0 | 12096 | 2.1760 | 14.7695 | 12.3875 |
| 1.9924 | 97.0 | 12222 | 2.1762 | 14.8022 | 12.3769 |
| 1.9924 | 98.0 | 12348 | 2.1763 | 14.7519 | 12.3822 |
| 1.9924 | 99.0 | 12474 | 2.1760 | 14.7756 | 12.3832 |
| 1.9903 | 100.0 | 12600 | 2.1761 | 14.7713 | 12.3822 |
### Evaluation results
| Data Split | Bleu |
|:----------:|:-------:|
| Validation | 17.8061 |
| Test | 18.0863 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Yah216/Poem_Qafiyah_Detection | 0a6f758cf92894b97c86af3e7cce2e9ec747aaab | 2022-05-28T07:56:56.000Z | [
"pytorch",
"bert",
"text-classification",
"ar",
"dataset:Yah216/Poem_Rawiy_detection",
"transformers",
"co2_eq_emissions"
]
| text-classification | false | Yah216 | null | Yah216/Poem_Qafiyah_Detection | 12 | null | transformers | 10,799 | ---
language: ar
datasets:
- Yah216/Poem_Rawiy_detection
co2_eq_emissions: 1.8046766441629636
widget:
- "سَلو قَلبي غَداةَ سَلا وَثابا لَعَلَّ عَلى الجَمالِ لَهُ عِتاب"
---
# Model
- Problem type: Multi-class Classification
- CO2 Emissions (in grams): 1.8046766441629636
## Dataset
We used the APCD dataset cited hereafter for pretraining the model. The dataset has been cleaned and only the main text and the Qafiyah column were kept:
```
@Article{Yousef2019LearningMetersArabicEnglish-arxiv,
author = {Yousef, Waleed A. and Ibrahime, Omar M. and Madbouly, Taha M. and Mahmoud,
Moustafa A.},
title = {Learning Meters of Arabic and English Poems With Recurrent Neural Networks: a Step
Forward for Language Understanding and Synthesis},
journal = {arXiv preprint arXiv:1905.05700},
year = 2019,
url = {https://github.com/hci-lab/LearningMetersPoems}
}
```
## Validation Metrics
- Loss: 0.398613303899765
- Accuracy: 0.912351981006084
- Macro F1: 0.717311758991278
- Micro F1: 0.912351981006084
- Weighted F1: 0.9110094798809955
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Yah216/Poem_Qafiyah_Detection
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Yah216/Poem_Qafiyah_Detection", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Yah216/Poem_Qafiyah_Detection", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |