| repo_id (string, 4-110) | author (string, 2-27, ⌀) | model_type (string, 2-29, ⌀) | files_per_repo (int64, 2-15.4k) | downloads_30d (int64, 0-19.9M) | library (string, 2-37, ⌀) | likes (int64, 0-4.34k) | pipeline (string, 5-30, ⌀) | pytorch (bool, 2 classes) | tensorflow (bool, 2 classes) | jax (bool, 2 classes) | license (string, 2-30) | languages (string, 4-1.63k, ⌀) | datasets (string, 2-2.58k, ⌀) | co2 (string, 29 classes) | prs_count (int64, 0-125) | prs_open (int64, 0-120) | prs_merged (int64, 0-15) | prs_closed (int64, 0-28) | discussions_count (int64, 0-218) | discussions_open (int64, 0-148) | discussions_closed (int64, 0-70) | tags (string, 2-513) | has_model_index (bool, 2 classes) | has_metadata (bool, 1 class) | has_text (bool, 1 class) | text_length (int64, 401-598k) | is_nc (bool, 1 class) | readme (string, 0-598k) | hash (string, 32) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Prem11100/donut-base-Label-studio
|
Prem11100
|
vision-encoder-decoder
| 14 | 3 |
transformers
| 0 | null | true | false | false |
mit
| null |
['imagefolder']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 989 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-Label-studio
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
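A minimal loading sketch (assuming the repository ships the Donut processor files; if not, the processor can be loaded from the base checkpoint `naver-clova-ix/donut-base`):
```python
# Minimal sketch: load the fine-tuned Donut checkpoint with Transformers.
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("Prem11100/donut-base-Label-studio")
model = VisionEncoderDecoderModel.from_pretrained("Prem11100/donut-base-Label-studio")
```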
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
1350e397a200a50f60b8fe5b4768ddba
|
GItaf/gpt2-finetuned-mbti-0901
|
GItaf
|
gpt2
| 12 | 2 |
transformers
| 0 |
text-generation
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,326 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-finetuned-mbti-0901
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9470
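A minimal generation sketch (the prompt below is illustrative):
```python
# Minimal sketch: generate text with the fine-tuned GPT-2 checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="GItaf/gpt2-finetuned-mbti-0901")
print(generator("I think that", max_new_tokens=30)[0]["generated_text"])
```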
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 4.1073 | 1.0 | 9906 | 4.0111 |
| 4.0302 | 2.0 | 19812 | 3.9761 |
| 3.9757 | 3.0 | 29718 | 3.9578 |
| 3.9471 | 4.0 | 39624 | 3.9495 |
| 3.9187 | 5.0 | 49530 | 3.9470 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
d51b4f6e17cd94e3c54e2ed029adf1ad
|
paola-md/distilr-lr1e05-wd0.02-bs32
|
paola-md
|
roberta
| 6 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,674 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilr-lr1e05-wd0.02-bs32
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2742
- Rmse: 0.5236
- Mse: 0.2742
- Mae: 0.4137
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2775 | 1.0 | 623 | 0.2735 | 0.5230 | 0.2735 | 0.4187 |
| 0.2738 | 2.0 | 1246 | 0.2727 | 0.5222 | 0.2727 | 0.4120 |
| 0.2722 | 3.0 | 1869 | 0.2728 | 0.5223 | 0.2728 | 0.4168 |
| 0.2701 | 4.0 | 2492 | 0.2751 | 0.5245 | 0.2751 | 0.4000 |
| 0.2684 | 5.0 | 3115 | 0.2770 | 0.5263 | 0.2770 | 0.4236 |
| 0.2668 | 6.0 | 3738 | 0.2742 | 0.5236 | 0.2742 | 0.4137 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
ea1f36b266a8411351c41bba1fbf7fb4
|
google/multiberts-seed_4-step_1500k
|
google
|
bert
| 8 | 12 |
transformers
| 0 | null | true | true | false |
apache-2.0
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['multiberts', 'multiberts-seed_4', 'multiberts-seed_4-step_1500k']
| false | true | true | 3,527 | false |
# MultiBERTs, Intermediate Checkpoint - Seed 4, Step 1500k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #4, captured at step 1500k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
TensorFlow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_4-step_1500k')
model = TFBertModel.from_pretrained("google/multiberts-seed_4-step_1500k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_4-step_1500k')
model = BertModel.from_pretrained("google/multiberts-seed_4-step_1500k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
9121bc1eacac65b5fb8e8750749743af
|
ken11/bert-japanese-ner
|
ken11
|
bert
| 4 | 2,529 |
transformers
| 3 |
token-classification
| true | false | false |
mit
|
['ja']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['ner', 'token-classification', 'japanese', 'bert']
| false | true | true | 1,827 | false |
## bert-japanese-ner
This model is intended for Japanese named entity recognition (NER). It was fine-tuned on the [ner-wikipedia-dataset released by Stockmark Inc.](https://github.com/stockmarkteam/ner-wikipedia-dataset), starting from the [Japanese BERT pretrained model released by the Kurohashi-Chu-Murawaki Laboratory at Kyoto University](https://nlp.ist.i.kyoto-u.ac.jp/?ku_bert_japanese).
## How to use
This model uses the tokenizer of the Kyoto University Japanese BERT pretrained model mentioned above.
The tokenizer is not included in this repository; please download it separately before use.
In addition to the tokenizer, [Juman++](https://nlp.ist.i.kyoto-u.ac.jp/?JUMAN%2B%2B) and [pyknp](https://nlp.ist.i.kyoto-u.ac.jp/?PyKNP) are required, so please install them in advance.
```py
import numpy as np
from transformers import (
    BertForTokenClassification, BertTokenizer
)
from pyknp import Juman

jumanpp = Juman()
# Use the tokenizer from the Kyoto University BERT model (downloaded separately)
tokenizer = BertTokenizer.from_pretrained("path/to/the/downloaded/kyoto-u/tokenizer")
model = BertForTokenClassification.from_pretrained("ken11/bert-japanese-ner")

text = "なにか文章"  # any Japanese text
# Split the text into morphemes with Juman++ before tokenization
juman_result = jumanpp.analysis(text)
tokenized_text = [mrph.midasi for mrph in juman_result.mrph_list()]
inputs = tokenizer(tokenized_text, return_tensors="pt", padding='max_length', truncation=True, max_length=64, is_split_into_words=True)
pred = model(**inputs).logits[0]
pred = np.argmax(pred.detach().numpy(), axis=-1)
labels = []
for i, label in enumerate(pred):
    if i + 1 > len(tokenized_text):
        continue
    labels.append(model.config.id2label[label])
    print(f"{tokenized_text[i]}: {model.config.id2label[label]}")
print(tokenized_text)
print(labels)
```
## Training Data
Training used the [ner-wikipedia-dataset released by Stockmark Inc.](https://github.com/stockmarkteam/ner-wikipedia-dataset).
Thank you to the authors for releasing such a useful dataset.
## Note
The named-entity labels are those of the training dataset, converted to the BILUO scheme.
For details on the labels, see the [ner-wikipedia-dataset overview](https://github.com/stockmarkteam/ner-wikipedia-dataset#%E6%A6%82%E8%A6%81).
## License
[The MIT license](https://opensource.org/licenses/MIT)
|
0deb65adbd7de32bc944bd97ae5823d9
|
muhtasham/tiny-mlm-glue-rte-target-glue-qnli
|
muhtasham
|
bert
| 10 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,803 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-glue-rte-target-glue-qnli
This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-rte](https://huggingface.co/muhtasham/tiny-mlm-glue-rte) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4684
- Accuracy: 0.7809
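A minimal sentence-pair inference sketch (the question/sentence pair below is illustrative):
```python
# Minimal sketch: QNLI-style sentence-pair classification with the fine-tuned checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "muhtasham/tiny-mlm-glue-rte-target-glue-qnli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("Where is the Eiffel Tower?",
                   "The Eiffel Tower is located in Paris.",
                   return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```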
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6041 | 0.15 | 500 | 0.5365 | 0.7382 |
| 0.5388 | 0.31 | 1000 | 0.5298 | 0.7435 |
| 0.5182 | 0.46 | 1500 | 0.4987 | 0.7640 |
| 0.5123 | 0.61 | 2000 | 0.5240 | 0.7494 |
| 0.5096 | 0.76 | 2500 | 0.4802 | 0.7761 |
| 0.5026 | 0.92 | 3000 | 0.4708 | 0.7847 |
| 0.4897 | 1.07 | 3500 | 0.4503 | 0.7921 |
| 0.4798 | 1.22 | 4000 | 0.4681 | 0.7825 |
| 0.4659 | 1.37 | 4500 | 0.4770 | 0.7754 |
| 0.4743 | 1.53 | 5000 | 0.4684 | 0.7809 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
|
8dd90e63737d4ba3da598513ae616986
|
stjiris/bert-large-portuguese-cased-legal-tsdae-gpl-nli-sts-v1
|
stjiris
|
bert
| 16 | 3 |
sentence-transformers
| 1 |
sentence-similarity
| true | false | false |
mit
|
['pt']
|
['stjiris/portuguese-legal-sentences-v0', 'assin', 'assin2', 'stsb_multi_mt', 'stjiris/IRIS_sts']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['sentence-transformers', 'transformers', 'bert', 'pytorch', 'sentence-similarity']
| false | true | true | 5,493 | false |
[](https://www.inesc-id.pt/projects/PR07005/)
[](https://rufimelo99.github.io/SemanticSearchSystemForSTJ/)
Work developed as part of [Project IRIS](https://www.inesc-id.pt/projects/PR07005/).
Thesis: [A Semantic Search System for Supremo Tribunal de Justiça](https://rufimelo99.github.io/SemanticSearchSystemForSTJ/)
# stjiris/bert-large-portuguese-cased-legal-tsdae-gpl-nli-sts-v1 (Legal BERTimbau)
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
stjiris/bert-large-portuguese-cased-legal-tsdae derives from [BERTimbau](https://huggingface.co/neuralmind/bert-large-portuguese-cased) large.
It was trained with the TSDAE technique (learning rate 1e-5) on [legal sentences from roughly 30,000 documents](https://huggingface.co/datasets/stjiris/portuguese-legal-sentences-v1.0) for 212k training steps (the checkpoint with the best performance for our semantic search system implementation).
It was then trained with Generative Pseudo Labeling (GPL).
Next, the model was trained on NLI data (batch size 16, learning rate 2e-5).
Finally, it was fine-tuned for Semantic Textual Similarity on the [assin](https://huggingface.co/datasets/assin), [assin2](https://huggingface.co/datasets/assin2), [stsb_multi_mt pt](https://huggingface.co/datasets/stsb_multi_mt) and [IRIS STS](https://huggingface.co/datasets/stjiris/IRIS_sts) datasets (learning rate 1e-5).
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Isto é um exemplo", "Isto é um outro exemplo"]
model = SentenceTransformer('stjiris/bert-large-portuguese-cased-legal-tsdae-gpl-nli-sts-v1')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('stjiris/bert-large-portuguese-cased-legal-tsdae-gpl-nli-sts-v1')
model = AutoModel.from_pretrained('stjiris/bert-large-portuguese-cased-legal-tsdae-gpl-nli-sts-v1')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 514, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1028, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
### Contributions
[@rufimelo99](https://github.com/rufimelo99)
If you use this work, please cite:
```bibtex
@inproceedings{MeloSemantic,
author = {Melo, Rui and Santos, Professor Pedro Alexandre and Dias, Professor Jo{\~ a}o},
title = {A {Semantic} {Search} {System} for {Supremo} {Tribunal} de {Justi}{\c c}a},
}
@inproceedings{souza2020bertimbau,
author = {F{\'a}bio Souza and
Rodrigo Nogueira and
Roberto Lotufo},
title = {{BERT}imbau: pretrained {BERT} models for {B}razilian {P}ortuguese},
booktitle = {9th Brazilian Conference on Intelligent Systems, {BRACIS}, Rio Grande do Sul, Brazil, October 20-23 (to appear)},
year = {2020}
}
@inproceedings{fonseca2016assin,
title={ASSIN: Avaliacao de similaridade semantica e inferencia textual},
author={Fonseca, E and Santos, L and Criscuolo, Marcelo and Aluisio, S},
booktitle={Computational Processing of the Portuguese Language-12th International Conference, Tomar, Portugal},
pages={13--15},
year={2016}
}
@inproceedings{real2020assin,
title={The assin 2 shared task: a quick overview},
author={Real, Livy and Fonseca, Erick and Oliveira, Hugo Goncalo},
booktitle={International Conference on Computational Processing of the Portuguese Language},
pages={406--412},
year={2020},
organization={Springer}
}
@InProceedings{huggingface:dataset:stsb_multi_mt,
title = {Machine translated multilingual STS benchmark dataset.},
author={Philip May},
year={2021},
url={https://github.com/PhilipMay/stsb-multi-mt}
}
```
|
2f8de519b3d21834e9ae6b42b207511b
|
huggingnft/hedgies
|
huggingnft
| null | 5 | 4 |
transformers
| 0 |
unconditional-image-generation
| false | false | false |
mit
| null |
['huggingnft/hedgies']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['huggingnft', 'nft', 'huggan', 'gan', 'image', 'images', 'unconditional-image-generation']
| false | true | true | 2,170 | false |
# Hugging NFT: hedgies
## Disclaimer
All rights belong to their owners. Models and datasets can be removed from the site at the request of the copyright
holder.
## Model description
LightWeight GAN model for unconditional generation.
NFT collection available [here](https://opensea.io/collection/hedgies).
Dataset is available [here](https://huggingface.co/datasets/huggingnft/hedgies).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
Project repository: [link](https://github.com/AlekseyKorshuk/huggingnft).
[](https://github.com/AlekseyKorshuk/huggingnft)
## Intended uses & limitations
#### How to use
Check project repository: [link](https://github.com/AlekseyKorshuk/huggingnft).
#### Limitations and bias
Check project repository: [link](https://github.com/AlekseyKorshuk/huggingnft).
## Training data
Dataset is available [here](https://huggingface.co/datasets/huggingnft/hedgies).
## Training procedure
Training script is available [here](https://github.com/AlekseyKorshuk/huggingnft).
## Generated Images
Check results with Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
### BibTeX entry and citation info
```bibtex
@InProceedings{huggingnft,
author={Aleksey Korshuk}
year=2022
}
```
|
72c6c3ca7179c37a6351530b7f288cb8
|
IDEA-CCNL/Taiyi-Diffusion-532M-Cyberpunk-Chinese
|
IDEA-CCNL
| null | 6 | 4 |
transformers
| 4 |
feature-extraction
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-to-image', 'chinese', 'diffusion']
| false | true | true | 2,422 | false |
# Taiyi-Diffusion-532M-Cyberpunk-Chinese
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
- Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/)
## 简介 Brief Introduction
该模型由[Katherine Crowson's](https://github.com/openai/guided-diffusion)的无条件扩散模型在1k+张收集的赛博朋克风的图上微调而来。结合[IDEA-CCNL/Taiyi-CLIP-Roberta-large-326M-Chinese](https://huggingface.co/IDEA-CCNL/Taiyi-CLIP-Roberta-large-326M-Chinese)可以实现中文Guided Diffusion的生成方式。
This model is fine-tuned from Katherine Crowson's 512x512 unconditional diffusion model (https://github.com/openai/guided-diffusion), using 1k+ cyberpunk-style images crawled from the Internet. Combined with [IDEA-CCNL/Taiyi-CLIP-Roberta-large-326M-Chinese](https://huggingface.co/IDEA-CCNL/Taiyi-CLIP-Roberta-large-326M-Chinese), it can generate images via guided diffusion from Chinese prompts.
## 模型分类 Model Taxonomy
| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 特殊 Special | 多模态 Multimodal | 太乙 Taiyi | Diffusion Model | 532M | Cyberpunk |
## 使用 Usage
使用示例见:https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/main/fengshen/examples/disco_project
For usage examples, see: https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/main/fengshen/examples/disco_project
## 生成示例 Example
| 城市,赛博朋克 | 城市,赛博朋克 |
| ---- | ---- |
|  |  |
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970):
If you use this resource in your work, please cite our [paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen and Ruyi Gan and Jiaxing Zhang},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
|
d57ec8b41d722ee0a48138c3b3c88a66
|
anuragshas/whisper-large-v2-ka
|
anuragshas
|
whisper
| 23 | 3 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['ka']
|
['mozilla-foundation/common_voice_11_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['whisper-event', 'generated_from_trainer']
| true | true | true | 1,628 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large-v2 Georgian
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the mozilla-foundation/common_voice_11_0 ka dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1187
- Wer: 31.8548
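A minimal transcription sketch (`sample.wav` is a placeholder audio path):
```python
# Minimal sketch: transcribe Georgian speech with the fine-tuned checkpoint.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="anuragshas/whisper-large-v2-ka")
print(asr("sample.wav")["text"])
```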
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0413 | 2.06 | 200 | 0.0712 | 36.6296 |
| 0.006 | 5.04 | 400 | 0.0899 | 33.7467 |
| 0.0008 | 8.02 | 600 | 0.1039 | 32.2311 |
| 0.0002 | 11.01 | 800 | 0.1141 | 31.9290 |
| 0.0001 | 13.06 | 1000 | 0.1187 | 31.8548 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
ae7f36c25f88c3564b8eacd5eeac665d
|
KedirAhmed/finetuning-sentiment-model-3000-samples
|
KedirAhmed
|
distilbert
| 13 | 9 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['imdb']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,055 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3069
- Accuracy: 0.8667
- F1: 0.8675
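A minimal inference sketch (the example review is illustrative):
```python
# Minimal sketch: run sentiment classification with the fine-tuned checkpoint.
from transformers import pipeline

classifier = pipeline("text-classification", model="KedirAhmed/finetuning-sentiment-model-3000-samples")
print(classifier("This movie was a pleasant surprise."))
```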
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
c47c7cbedea3e4110824f81a7c923e80
|
sd-dreambooth-library/homelander
|
sd-dreambooth-library
| null | 28 | 2 |
diffusers
| 3 | null | false | false | false |
mit
| null | null | null | 2 | 2 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,562 | false |
### Homelander on Stable Diffusion via Dreambooth
#### model by Abdifatah
This is the Stable Diffusion model fine-tuned on the Homelander concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **a photo of homelander guy**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
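A minimal `diffusers` inference sketch (the fp16/CUDA settings and output filename are illustrative):
```python
# Minimal sketch: generate an image of the learned concept with diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sd-dreambooth-library/homelander", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of homelander guy").images[0]
image.save("homelander.png")
```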
Here are the images used for training this concept:










|
79e4a8d480204105c6add77510685e20
|
blackopsmaniac1/gwnt
|
blackopsmaniac1
| null | 16 | 2 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 641 | false |
### gwnt Dreambooth model trained by blackopsmaniac1 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
MAGIC WORDS: "gwnt style"
Sample pictures of this concept:
|
a3e85fb33fdd955954b06fb6119e9f61
|
espnet/GunnarThor_talromur_b_tacotron2
|
espnet
| null | 18 | 3 |
espnet
| 0 |
text-to-speech
| false | false | false |
cc-by-4.0
|
['en']
|
['talromur']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['espnet', 'audio', 'text-to-speech']
| false | true | true | 6,091 | false |
## ESPnet2 TTS model
### `espnet/GunnarThor_talromur_b_tacotron2`
This model was trained by Gunnar Thor using the talromur recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 49a284e69308d81c142b89795de255b4ce290c54
pip install -e .
cd egs2/talromur/tts1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/GunnarThor_talromur_b_tacotron2
```
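A minimal Python inference sketch (assuming the `espnet` and `espnet_model_zoo` packages are installed; note that this recipe uses phoneme tokens with no built-in g2p, so raw text may need to be phonemized first):
```python
# Minimal sketch: load the checkpoint through the ESPnet2 Python API.
from espnet2.bin.tts_inference import Text2Speech

text2speech = Text2Speech.from_pretrained("espnet/GunnarThor_talromur_b_tacotron2")
# token_type is phn and g2p is null, so the input should be a phonemized string.
wav = text2speech("placeholder phonemized input")["wav"]
```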
## TTS config
<details><summary>expand</summary>
```
config: ./conf/tuning/train_tacotron2.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/b/tts_train_tacotron2_raw_phn_none
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 2
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 55403
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 100
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss
- min
- - train
- loss
- min
keep_nbest_models: 5
nbest_averaging_interval: 0
grad_clip: 1.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: 500
batch_size: 20
valid_batch_size: null
batch_bins: 2560000
valid_batch_bins: null
train_shape_file:
- exp/b/tts_stats_raw_phn_none/train/text_shape.phn
- exp/b/tts_stats_raw_phn_none/train/speech_shape
valid_shape_file:
- exp/b/tts_stats_raw_phn_none/valid/text_shape.phn
- exp/b/tts_stats_raw_phn_none/valid/speech_shape
batch_type: numel
valid_batch_type: null
fold_length:
- 150
- 204800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_b_phn/text
- text
- text
- - dump/raw/train_b_phn/wav.scp
- speech
- sound
valid_data_path_and_name_and_type:
- - dump/raw/dev_b_phn/text
- text
- text
- - dump/raw/dev_b_phn/wav.scp
- speech
- sound
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.001
eps: 1.0e-06
weight_decay: 0.0
scheduler: null
scheduler_conf: {}
token_list:
- <blank>
- <unk>
- ','
- .
- r
- t
- n
- a0
- s
- I0
- D
- l
- Y0
- m
- v
- h
- E1
- k
- a:1
- j
- E:1
- f
- T
- G
- a1
- p
- c
- i:1
- au:1
- O:1
- E0
- I:1
- r_0
- t_h
- I1
- k_h
- Y1
- i0
- ei1
- u:1
- ou:1
- ei:1
- O1
- N
- l_0
- '91'
- n_0
- ou0
- ai0
- au1
- ou1
- O0
- '9:1'
- ai:1
- ei0
- ai1
- i1
- au0
- c_h
- p_h
- '90'
- C
- x
- u0
- 9i:1
- Y:1
- u1
- 9i1
- J
- N_0
- m_0
- 9i0
- J_0
- Oi1
- Yi0
- Yi1
- Oi0
- au:0
- '9:0'
- <sos/eos>
odim: null
model_conf: {}
use_preprocessor: true
token_type: phn
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
feats_extract: fbank
feats_extract_conf:
n_fft: 1024
hop_length: 256
win_length: null
fs: 22050
fmin: 80
fmax: 7600
n_mels: 80
normalize: global_mvn
normalize_conf:
stats_file: exp/b/tts_stats_raw_phn_none/train/feats_stats.npz
tts: tacotron2
tts_conf:
embed_dim: 512
elayers: 1
eunits: 512
econv_layers: 3
econv_chans: 512
econv_filts: 5
atype: location
adim: 512
aconv_chans: 32
aconv_filts: 15
cumulate_att_w: true
dlayers: 2
dunits: 1024
prenet_layers: 2
prenet_units: 256
postnet_layers: 5
postnet_chans: 512
postnet_filts: 5
output_activation: null
use_batch_norm: true
use_concate: true
use_residual: false
dropout_rate: 0.5
zoneout_rate: 0.1
reduction_factor: 1
spk_embed_dim: null
use_masking: true
bce_pos_weight: 5.0
use_guided_attn_loss: true
guided_attn_loss_sigma: 0.4
guided_attn_loss_lambda: 1.0
pitch_extract: null
pitch_extract_conf: {}
pitch_normalize: null
pitch_normalize_conf: {}
energy_extract: null
energy_extract_conf: {}
energy_normalize: null
energy_normalize_conf: {}
required:
- output_dir
- token_list
version: 0.10.7a1
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
c665656ba4b8da536cb48f22567fbe71
|
SIKU-BERT/sikubert
|
SIKU-BERT
|
bert
| 7 | 1,071 |
transformers
| 3 |
fill-mask
| true | false | false |
apache-2.0
|
['zh']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['chinese', 'classical chinese', 'literary chinese', 'ancient chinese', 'bert', 'roberta', 'pytorch']
| false | true | true | 1,318 | false |
# SikuBERT
## Model description

Digital humanities research needs the support of large-scale corpora and high-performance natural language processing tools for ancient Chinese. Pre-trained language models have greatly improved the accuracy of text mining for English and modern Chinese texts, and there is now an urgent need for a pre-trained model dedicated to the automatic processing of ancient Chinese texts. Using the verified, high-quality full-text corpus of the “Siku Quanshu” as the training set and the BERT deep language model architecture as the base, we constructed the SikuBERT and SikuRoBERTa pre-trained language models for intelligent processing tasks on ancient Chinese.
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("SIKU-BERT/sikubert")
model = AutoModel.from_pretrained("SIKU-BERT/sikubert")
```
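For masked-token prediction, the checkpoint can also be used through the fill-mask pipeline (the example sentence below is illustrative):
```python
# Minimal sketch: masked-token prediction with the fill-mask pipeline.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="SIKU-BERT/sikubert")
print(fill_mask("四庫[MASK]書"))
```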
## About Us
We are from Nanjing Agricultural University.
> Created by SIKU-BERT [](https://github.com/SIKU-BERT/SikuBERT-for-digital-humanities-and-classical-Chinese-information-processing)
|
0accdfe0c315b9ff4b8ab430a5d81e59
|
michiyasunaga/LinkBERT-base
|
michiyasunaga
|
bert
| 8 | 252 |
transformers
| 4 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['wikipedia', 'bookcorpus']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['bert', 'exbert', 'linkbert', 'feature-extraction', 'fill-mask', 'question-answering', 'text-classification', 'token-classification']
| false | true | true | 3,320 | false |
## LinkBERT-base
LinkBERT-base model pretrained on English Wikipedia articles along with hyperlink information. It is introduced in the paper [LinkBERT: Pretraining Language Models with Document Links (ACL 2022)](https://arxiv.org/abs/2203.15827). The code and data are available in [this repository](https://github.com/michiyasunaga/LinkBERT).
## Model description
LinkBERT is a transformer encoder (BERT-like) model pretrained on a large corpus of documents. It is an improvement of BERT that newly captures **document links** such as hyperlinks and citation links to include knowledge that spans across multiple documents. Specifically, it was pretrained by feeding linked documents into the same language model context, besides a single document.
LinkBERT can be used as a drop-in replacement for BERT. It achieves better performance for general language understanding tasks (e.g. text classification), and is also particularly effective for **knowledge-intensive** tasks (e.g. question answering) and **cross-document** tasks (e.g. reading comprehension, document retrieval).
## Intended uses & limitations
The model can be used by fine-tuning on a downstream task, such as question answering, sequence classification, and token classification.
You can also use the raw model for feature extraction (i.e. obtaining embeddings for input text).
### How to use
To use the model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('michiyasunaga/LinkBERT-base')
model = AutoModel.from_pretrained('michiyasunaga/LinkBERT-base')
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
For fine-tuning, you can use [this repository](https://github.com/michiyasunaga/LinkBERT) or follow any other BERT fine-tuning codebases.
## Evaluation results
When fine-tuned on downstream tasks, LinkBERT achieves the following results.
**General benchmarks ([MRQA](https://github.com/mrqa/MRQA-Shared-Task-2019) and [GLUE](https://gluebenchmark.com/)):**
| | HotpotQA | TriviaQA | SearchQA | NaturalQ | NewsQA | SQuAD | GLUE |
| ---------------------- | -------- | -------- | -------- | -------- | ------ | ----- | -------- |
| | F1 | F1 | F1 | F1 | F1 | F1 | Avg score |
| BERT-base | 76.0 | 70.3 | 74.2 | 76.5 | 65.7 | 88.7 | 79.2 |
| **LinkBERT-base** | **78.2** | **73.9** | **76.8** | **78.3** | **69.3** | **90.1** | **79.6** |
| BERT-large | 78.1 | 73.7 | 78.3 | 79.0 | 70.9 | 91.1 | 80.7 |
| **LinkBERT-large** | **80.8** | **78.2** | **80.5** | **81.0** | **72.6** | **92.7** | **81.1** |
## Citation
If you find LinkBERT useful in your project, please cite the following:
```bibtex
@InProceedings{yasunaga2022linkbert,
author = {Michihiro Yasunaga and Jure Leskovec and Percy Liang},
title = {LinkBERT: Pretraining Language Models with Document Links},
year = {2022},
booktitle = {Association for Computational Linguistics (ACL)},
}
```
|
0ee24190ca0bbd9c56be4bf1094009e4
|
fathyshalab/massive_play-roberta-large-v1-5-71
|
fathyshalab
|
roberta
| 14 | 2 |
sentence-transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['setfit', 'sentence-transformers', 'text-classification']
| false | true | true | 1,458 | false |
# fathyshalab/massive_play-roberta-large-v1-5-71
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/massive_play-roberta-large-v1-5-71")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
a35fa2c96ce75a755d40110391633ae3
|
Qilex/VirtualPetDiffusion2
|
Qilex
| null | 30 | 4 |
diffusers
| 0 | null | false | false | false |
apache-2.0
|
['en']
|
['Qilex/private_guys']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,363 | false |
# VirtualPetDiffusion2
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on a dataset of roughly 8,000 virtual pet thumbnail images.
## Intended uses & limitations
This model can be used to generate small (128x128) virtual pet-like thumbnails.
The pets are generally somewhat abstract.
#### How to use
```python
from diffusers import DiffusionPipeline
pipeline = DiffusionPipeline.from_pretrained("Qilex/VirtualPetDiffusion2")
image = pipeline()["sample"][0]
#this line only works in jupyter
display(image)
```
## Training data
This model was trained on roughly 8,000 virtual pet thumbnail images (80x80px).
The data was randomly flipped, rotated, and perspective-transformed using torchvision transforms to prevent some of the issues seen in the first VirtualPetDiffusion.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_inv_gamma: None
- ema_inv_gamma: None
- mixed_precision: no
### Training results
📈 [TensorBoard logs](https://huggingface.co/Qilex/VirtualPetDiffusion2/tensorboard?#scalars)
|
e52d7d2ff7ceb55269cb5482c3e1a4f6
|
heyyai/mikao00
|
heyyai
| null | 32 | 5 |
diffusers
| 0 |
text-to-image
| false | false | false |
mit
| null | null | null | 2 | 2 | 0 | 0 | 0 | 0 | 0 |
['text-to-image']
| false | true | true | 2,982 | false |
### mikao00 on Stable Diffusion via Dreambooth trained on the [fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
#### model by cormacncheese
This is the Stable Diffusion model fine-tuned on the mikao00 concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt(s)`: **mikao000(1).JPG, mikao000(2).JPG, mikao000(3).JPG, mikao000(4).JPG, mikao000(5).png, mikao000(6).png, mikao000(7).JPG, mikao000(8).JPG, mikao000(9).JPG, mikao000(10).JPG, mikao000(12).JPG, mikao000(13).JPG, mikao000(14).jpeg**
You can also train your own concepts and upload them to the library by using [the fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
Here are the images used for training this concept:
mikao000(14).jpeg
mikao000(13).JPG
mikao000(12).JPG
mikao000(10).JPG
mikao000(9).JPG
mikao000(8).JPG
mikao000(7).JPG
mikao000(6).png
mikao000(5).png
mikao000(4).JPG
mikao000(3).JPG
mikao000(2).JPG
mikao000(1).JPG
|
42a2fc174cd7f2bd915cd975382e88b8
|
IDEA-CCNL/Erlangshen-MegatronBert-1.3B-Similarity
|
IDEA-CCNL
|
megatron-bert
| 5 | 108 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['zh']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['bert', 'NLU', 'Similarity']
| false | true | true | 3,561 | false |
# Erlangshen-MegatronBert-1.3B-Similarity
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
- Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/)
## 简介 Brief Introduction
2021年登顶FewCLUE和ZeroCLUE的中文BERT,在数个相似度任务上微调后的版本
This is the Chinese BERT model that topped the FewCLUE and ZeroCLUE benchmarks in 2021, fine-tuned on several similarity datasets.
## 模型分类 Model Taxonomy
| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 通用 General | 自然语言理解 NLU | 二郎神 Erlangshen | MegatronBert | 1.3B | 相似度 Similarity |
## 模型信息 Model Information
基于[Erlangshen-MegatronBert-1.3B](https://huggingface.co/IDEA-CCNL/Erlangshen-MegatronBert-1.3B),我们在收集的20个中文领域的改写数据集,总计2773880个样本上微调了一个Similarity版本。
Based on [Erlangshen-MegatronBert-1.3B](https://huggingface.co/IDEA-CCNL/Erlangshen-MegatronBert-1.3B), we fine-tuned a similarity version on 20 Chinese paraphrase datasets totaling 2,773,880 samples.
### 下游效果 Performance
| Model | BQ | BUSTM | AFQMC |
| :--------: | :-----: | :----: | :-----: |
| Erlangshen-Roberta-110M-Similarity | 85.41 | 95.18 | 81.72 |
| Erlangshen-Roberta-330M-Similarity | 86.21 | 99.29 | 93.89 |
| Erlangshen-MegatronBert-1.3B-Similarity | 86.31 | - | - |
## 使用 Usage
``` python
from transformers import AutoModelForSequenceClassification
from transformers import BertTokenizer
import torch
tokenizer=BertTokenizer.from_pretrained('IDEA-CCNL/Erlangshen-MegatronBert-1.3B-Similarity')
model=AutoModelForSequenceClassification.from_pretrained('IDEA-CCNL/Erlangshen-MegatronBert-1.3B-Similarity')
texta='今天的饭不好吃'
textb='今天心情不好'
output=model(torch.tensor([tokenizer.encode(texta,textb)]))
print(torch.nn.functional.softmax(output.logits,dim=-1))
```
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的对该模型的论文:
If you use this resource in your work, please cite our paper for this model:
```text
@article{fengshenbang/erlangshen-megatronbert-sim,
author = {Junjie Wang and
Yuxiang Zhang and
Ping Yang and
Ruyi Gan},
title = {Towards No.1 in {CLUE} Semantic Matching Challenge: Pre-trained Language
Model Erlangshen with Propensity-Corrected Loss},
journal = {CoRR},
volume = {abs/2208.02959},
year = {2022}
}
```
如果您在您的工作中使用了我们的模型,也可以引用我们的[总论文](https://arxiv.org/abs/2209.02970):
If you use this resource in your work, please cite our [overview paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen and Ruyi Gan and Jiaxing Zhang},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
|
b40022fa74ed5c29ed2758fa84af62d1
|
Thilinameths/thilinamethsahann
|
Thilinameths
| null | 18 | 5 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 431 | false |
### thilinamethsahann Dreambooth model trained by Thilinameths with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
2e94520c0a21d139eb49c7add795f3a6
|
ALM/whisper-it-small-augmented
|
ALM
|
whisper
| 21 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['it']
|
['mozilla-foundation/common_voice_11_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['whisper-event', 'generated_from_trainer']
| true | true | true | 2,217 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Italian - Robust
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 it dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1980
- Wer: 8.7457
**IMPORTANT** The model has been trained using *data augmentation* to improve its generalization capabilities and robustness. The evaluation results reported during training are biased because data augmentation was also applied to the evaluation data.
**Results on eval set**
- Mozilla CV 11.0 - Italian: 8.00 wer (using official script)
- TODO
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 25000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.1927 | 1.0 | 2500 | 0.2506 | 14.9991 |
| 0.0736 | 2.01 | 5000 | 0.2258 | 12.7864 |
| 0.0413 | 3.01 | 7500 | 0.2144 | 11.4508 |
| 0.0201 | 4.02 | 10000 | 0.2146 | 10.8774 |
| 0.0129 | 5.02 | 12500 | 0.2127 | 10.6920 |
| 0.0091 | 6.03 | 15000 | 0.2117 | 10.2867 |
| 0.0043 | 7.03 | 17500 | 0.2076 | 9.6860 |
| 0.0018 | 8.04 | 20000 | 0.2065 | 9.4235 |
| 0.0013 | 9.04 | 22500 | 0.2003 | 8.9105 |
| 0.0009 | 10.05 | 25000 | 0.1978 | 8.7497 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
|
84ff3aac62e9b1bd81fa6e5272a56341
|
sgangireddy/whisper-small-fi-full
|
sgangireddy
|
whisper
| 22 | 7 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['fi']
|
['mozilla-foundation/common_voice_11_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['whisper-event', 'generated_from_trainer']
| true | true | true | 1,783 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Finnish all
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 fi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5334
- Wer: 25.4333
## Model description
The model is fine-tuned for 5000 steps/updates on CV11 Finnish train+validation data.
- Zero-shot WER: 30.5 (CV9 test data; on CV11 the WER is slightly higher than this)
- Fine-tuned WER: 25.43 (CV11 test data)
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0025 | 19.0 | 1000 | 0.4265 | 24.8493 |
| 0.0005 | 38.0 | 2000 | 0.4785 | 25.3203 |
| 0.0003 | 57.01 | 3000 | 0.5073 | 25.3956 |
| 0.0002 | 76.01 | 4000 | 0.5253 | 25.4333 |
| 0.0002 | 96.0 | 5000 | 0.5334 | 25.4333 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
a587461a7f8be5bcd23428827aa44cc9
|
Helsinki-NLP/opus-mt-kqn-fr
|
Helsinki-NLP
|
marian
| 10 | 8 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 | false |
### opus-mt-kqn-fr
* source languages: kqn
* target languages: fr
* OPUS readme: [kqn-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/kqn-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/kqn-fr/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/kqn-fr/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/kqn-fr/opus-2020-01-09.eval.txt)
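A minimal translation sketch with the Transformers Marian classes (the source sentence is a placeholder):
```python
# Minimal sketch: translate kqn source text to French with the Marian checkpoint.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-kqn-fr"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["replace this with a kqn sentence"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```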
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.kqn.fr | 23.2 | 0.400 |
|
eadf4600f9938e4586a6cb94c4008926
|
yhavinga/ul2-base-nl36-dutch
|
yhavinga
|
t5
| 33 | 35 |
transformers
| 0 |
text2text-generation
| true | false | true |
apache-2.0
|
['nl']
|
['yhavinga/mc4_nl_cleaned', 'yhavinga/nedd_wiki_news']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['dutch', 't5', 't5x', 'ul2', 'seq2seq']
| false | true | true | 11,053 | false |
# ul2-base-nl36-dutch for Dutch
Pretrained T5 model on Dutch using a UL2 (Mixture-of-Denoisers) objective.
The T5 model was introduced in
[this paper](https://arxiv.org/abs/1910.10683)
and first released at [this page](https://github.com/google-research/text-to-text-transfer-transformer).
The UL2 objective was introduced in
[this paper](https://arxiv.org/abs/2205.05131)
and first released at [this page](https://github.com/google-research/google-research/tree/master/ul2).
**Note:** The Hugging Face inference widget is deactivated because this model needs a text-to-text fine-tuning on
a specific downstream task to be useful in practice.
## Model description
T5 is an encoder-decoder model and treats all NLP problems in a text-to-text format.
`ul2-base-nl36-dutch` T5 is a transformers model pretrained on a very large corpus of
Dutch data in a self-supervised fashion.
This means it was pretrained on the raw texts only, with no humans labelling them in any way
(which is why it can use lots of publicly available data) with an automatic process to generate
inputs and outputs from those texts.
This model used the [T5 v1.1](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) improvements compared to the original T5 model during the pretraining:
- GEGLU activation in the feed-forward hidden layer, rather than ReLU - see [here](https://arxiv.org/abs/2002.05202)
- Dropout was turned off during pre-training. Dropout should be re-enabled during fine-tuning
- Pre-trained on self-supervised objective only without mixing in the downstream tasks
- No parameter sharing between embedding and classifier layer
The "efficient" T5 architecture findings presented in [this paper](https://arxiv.org/abs/2109.10686) were also applied,
which suggests that a Deep-Narrow model architecture is favorable for downstream performance compared to other model
architectures of similar parameter count. Specifically, the model depth is defined as the number of transformer blocks
that are stacked sequentially.
This model uses the [t5-efficient-base-nl36](https://huggingface.co/google/t5-efficient-base-nl36) architecture's
layer depth, which means both the encoder and the decoder have 36 transformer layers compared to the original T5 "base"
model's architecture of 12 transformer layers.
### UL2 pretraining objective
This model was pretrained with the UL2's Mixture-of-Denoisers (MoD) objective, that combines diverse pre-training
paradigms together. UL2 frames different objective functions for training language models as denoising tasks, where
the model has to recover missing sub-sequences of a given input. During pre-training it uses a novel mixture-of-denoisers
that samples from a varied set of such objectives, each with different configurations. UL2 is trained using a mixture of
three denoising tasks:
1. R-denoising (or regular span corruption), which emulates the standard T5 span corruption objective;
2. X-denoising (or extreme span corruption); and
3. S-denoising (or sequential PrefixLM).
During pre-training, we sample from the available denoising tasks based on user-specified ratios.
UL2 introduces a notion of mode switching, wherein downstream fine-tuning is associated with a specific pre-training
denoising task. During pre-training, a paradigm token is inserted into the input
(`[NLU]` for R-denoising, `[NLG]` for X-denoising, or `[S2S]` for S-denoising) indicating the denoising task at hand.
Then, during fine-tuning the same input token should be inserted to get the best performance for different downstream
fine-tuning tasks.
## Intended uses & limitations
This model was only pretrained in a self-supervised way excluding any supervised training.
Therefore, this model has to be fine-tuned before it is usable on a downstream task,
like text classification, unlike Google's original T5 model.
**Note:** You most likely need to fine-tune these T5/UL2 models without mixed precision
so fine-tune them with full fp32 precision. Fine-tuning with Flax in bf16 - `model.to_bf16()` - is possible
if you set the mask correctly to exclude layernorm and embedding layers. Also note that the T5x pre-training
and fine-tuning configs set `z_loss` to 1e-4, which is used to keep the loss scale from underflowing.
You can also find more fine-tuning tips from [here](https://discuss.huggingface.co/t/t5-finetuning-tips), for example.
**Note**: For fine-tuning, most likely you can get better results if you insert a prefix token
of `[NLU]`, `[NLG]`, or `[S2S]` to your input texts.
For general language understanding fine-tuning tasks, you could use the `[NLU]` token.
For GPT-style causal language generation, you could use the `[S2S]` token.
The token `[NLG]` of the X-denoising pre-training task is somewhat of a mix between language understanding and causal
language generation, so `[NLG]` could perhaps be used for language generation fine-tuning as well.
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("yhavinga/ul2-base-nl36-dutch", use_fast=False)
model = T5ForConditionalGeneration.from_pretrained("yhavinga/ul2-base-nl36-dutch")
```
and in Flax:
```python
from transformers import T5Tokenizer, FlaxT5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("yhavinga/ul2-base-nl36-dutch", use_fast=False)
model = FlaxT5ForConditionalGeneration.from_pretrained("yhavinga/ul2-base-nl36-dutch")
```
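Building on the note about mode tokens above, the sketch below shows one way to prepend the `[NLU]` prefix when preparing fine-tuning examples. The Dutch source sentence and the target text are placeholders for illustration only, not part of any real downstream dataset:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("yhavinga/ul2-base-nl36-dutch", use_fast=False)
model = T5ForConditionalGeneration.from_pretrained("yhavinga/ul2-base-nl36-dutch")

# Prepend the paradigm token to the source text (placeholder example).
source = "[NLU] Vandaag schijnt de zon in Amsterdam."
target = "positief"  # placeholder target for an imaginary classification-style task

inputs = tokenizer(source, return_tensors="pt")
labels = tokenizer(target, return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss  # standard seq2seq fine-tuning loss
```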
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral.
Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
## Training data
The `ul2-base-nl36-dutch` T5 model was pre-trained simultaneously on a combination of several datasets,
including the full version of the "mc4_nl_cleaned" dataset, which is a cleaned version of Common Crawl's web
crawl corpus, Dutch books, the Dutch subset of Wikipedia (2022-03-20), and a subset of "mc4_nl_cleaned"
containing only texts from Dutch and Belgian newspapers. This last dataset is oversampled to bias the model
towards descriptions of events in the Netherlands and Belgium.
## Training procedure
### Preprocessing
The ul2-base-nl36-dutch T5 model uses a SentencePiece unigram tokenizer with a vocabulary of 32,000 tokens.
The tokenizer includes the special tokens `<pad>`, `</s>`, `<unk>`, known from the original T5 paper,
`[NLU]`, `[NLG]` and `[S2S]` for the MoD pre-training, and `<n>` for newline.
During pre-training with the UL2 objective, input and output sequences consist of 512 consecutive tokens.
The tokenizer does not lowercase texts and is therefore case-sensitive; it distinguishes
between `dutch` and `Dutch`.
Additionally, 100+28 extra tokens were added for pre-training tasks, resulting in a total of 32,128 tokens.
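As a quick, non-authoritative sanity check of the tokenizer description above (the printed ids depend on the released vocabulary):
```python
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("yhavinga/ul2-base-nl36-dutch", use_fast=False)

# The tokenizer is case-sensitive, so these two inputs tokenize differently.
print(tokenizer.tokenize("dutch"))
print(tokenizer.tokenize("Dutch"))

# The MoD paradigm tokens and the newline token are part of the vocabulary.
print(tokenizer.convert_tokens_to_ids(["[NLU]", "[NLG]", "[S2S]", "<n>"]))
```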
### Pretraining
The model was trained on TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/),
for 2,000,000 steps with a batch size of 64
(65B tokens in total).
The optimizer used was AdaFactor, with a learning-rate warmup of 10K steps at a constant learning rate of 1e-2,
followed by an inverse square root (exponential) decay of the learning rate.
The model was trained with Google's Jax/Flax based [t5x framework](https://github.com/google-research/t5x) with help
from [Stephenn Fernandes](https://huggingface.co/StephennFernandes) to get started writing task definitions that wrap
HF datasets.
The UL2 training objective code used with the [t5x framework](https://github.com/google-research/t5x) was copied and
slightly modified from the [UL2 paper](https://arxiv.org/pdf/2205.05131.pdf) appendix chapter 9.2 by the authors
of the Finnish ul2 models. The UL2 objective code used is available in the repository
[Finnish-NLP/ul2-base-nl36-finnish](https://huggingface.co/Finnish-NLP/ul2-base-nl36-finnish) in the files `ul2_objective.py` and `tasks.py`.
UL2's mixture-of-denoisers configuration was otherwise equal to that of the UL2 paper, except for the denoiser mixing rates:
20% was used for S-denoising (as suggested in chapter 4.5 of the paper)
and the rest was divided equally between R-denoising and X-denoising (i.e. 40% each).
### Model list
Models in this series:
| | ul2-base-dutch | ul2-base-nl36-dutch | ul2-large-dutch | ul2-small-dutch |
|:---------------------|:---------------------|:----------------------|:---------------------|:---------------------|
| model_type | t5 | t5 | t5 | t5 |
| _pipeline_tag | text2text-generation | text2text-generation | text2text-generation | text2text-generation |
| d_model | 768 | 768 | 1024 | 512 |
| d_ff | 2048 | 3072 | 2816 | 1024 |
| num_heads | 12 | 12 | 16 | 6 |
| d_kv | 64 | 64 | 64 | 64 |
| num_layers | 12 | 36 | 24 | 8 |
| num_decoder_layers | 12 | 36 | 24 | 8 |
| feed_forward_proj | gated-gelu | gated-gelu | gated-gelu | gated-gelu |
| dense_act_fn | gelu_new | gelu_new | gelu_new | gelu_new |
| vocab_size | 32128 | 32128 | 32128 | 32128 |
| tie_word_embeddings | 0 | 0 | 0 | 0 |
| torch_dtype | float32 | float32 | float32 | float32 |
| _gin_batch_size | 128 | 64 | 64 | 128 |
| _gin_z_loss | 0.0001 | 0.0001 | 0.0001 | 0.0001 |
| _gin_t5_config_dtype | 'bfloat16' | 'bfloat16' | 'bfloat16' | 'bfloat16' |
## Evaluation results
See the evaluation section in the interactive [Pre-training Dutch T5 Models](https://huggingface.co/spaces/yhavinga/pre-training-dutch-t5-models) blog.
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
Thanks to the [Finnish-NLP](https://huggingface.co/Finnish-NLP) authors for releasing their code for the UL2 objective and associated task definitions.
Thanks to [Stephenn Fernandes](https://huggingface.co/StephennFernandes) for helping me get started with the t5x framework.
Created by [Yeb Havinga](https://www.linkedin.com/in/yeb-havinga-86530825/)
|
2125838d98e564cc33d140df82429210
|
Helsinki-NLP/opus-mt-hil-en
|
Helsinki-NLP
|
marian
| 10 | 14 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 | false |
### opus-mt-hil-en
* source languages: hil
* target languages: en
* OPUS readme: [hil-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/hil-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/hil-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/hil-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/hil-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.hil.en | 49.2 | 0.638 |
|
b9980ea841b5fbed7276c00a89f535a8
|
jhaochenz/finetuned_gpt2-large_sst2_negation0.001_pretrainedTrue_epochs2
|
jhaochenz
|
gpt2
| 14 | 0 |
transformers
| 0 |
text-generation
| true | false | false |
mit
| null |
['sst2']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,215 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_gpt2-large_sst2_negation0.001_pretrainedTrue_epochs2
This model is a fine-tuned version of [gpt2-large](https://huggingface.co/gpt2-large) on the sst2 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0769
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0066 | 1.0 | 1322 | 2.9427 |
| 1.5196 | 2.0 | 2644 | 3.0769 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.7.0
- Datasets 2.8.0
- Tokenizers 0.13.2
|
62546fec7f2a6c03371d335c9e5359c8
|
KoichiYasuoka/roberta-base-thai-char
|
KoichiYasuoka
|
roberta
| 8 | 604 |
transformers
| 0 |
fill-mask
| true | false | false |
apache-2.0
|
['th']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['thai', 'masked-lm', 'wikipedia']
| false | true | true | 676 | false |
# roberta-base-thai-char
## Model Description
This is a RoBERTa model pre-trained on Thai Wikipedia texts with character-wise embeddings, which makes it usable with BertTokenizerFast. You can fine-tune `roberta-base-thai-char` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-base-thai-char-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/roberta-base-thai-char-ud-goeswith), and so on.
## How to Use
```py
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-thai-char")
model = AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-thai-char")
```
|
4fdd4b9a172e9de4073b62633553562e
|
rossanez/t5-small-finetuned-de-en-256-nofp16
|
rossanez
|
t5
| 12 | 4 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null |
['wmt14']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,129 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-de-en-256-nofp16
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt14 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 188 | 2.1234 | 7.7305 | 17.4033 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
37f8f47504f935212b6d5f62101f753d
|
stevhliu/my_awesome_eli5_mlm_model
|
stevhliu
|
roberta
| 14 | 25 |
transformers
| 0 |
fill-mask
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,254 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_mlm_model
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1706
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 20 | 2.2325 |
| No log | 2.0 | 40 | 2.1603 |
| No log | 3.0 | 60 | 2.2368 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.13.1
|
c2f439f69c776ef86a2a5546168a44a1
|
Squirz/phase2
|
Squirz
| null | 19 | 0 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 414 | false |
### Phase2 Dreambooth model trained by Squirz with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
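If the repository contains the exported Stable Diffusion weights in diffusers format (which the fast-DreamBooth notebook normally produces), a minimal sketch along these lines should work; the prompt is only an illustration:
```python
import torch
from diffusers import StableDiffusionPipeline

# Assumes the full pipeline weights are stored in this repo in diffusers format.
pipe = StableDiffusionPipeline.from_pretrained("Squirz/phase2", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "a portrait photo in the phase2 style"  # illustrative prompt
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("phase2_sample.png")
```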
Sample pictures of this concept:
|
2ebf678f0bb865a88057b1796567d5cf
|
jfarmerphd/bert-finetuned-ner
|
jfarmerphd
|
bert
| 12 | 1 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null |
['conll2003']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,518 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0628
- Precision: 0.9338
- Recall: 0.9490
- F1: 0.9413
- Accuracy: 0.9862
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0893 | 1.0 | 1756 | 0.0765 | 0.9136 | 0.9275 | 0.9205 | 0.9796 |
| 0.0334 | 2.0 | 3512 | 0.0627 | 0.9297 | 0.9480 | 0.9388 | 0.9858 |
| 0.0167 | 3.0 | 5268 | 0.0628 | 0.9338 | 0.9490 | 0.9413 | 0.9862 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
4694fe66eb1d200c7eed896ae4979744
|
sd-concepts-library/type
|
sd-concepts-library
| null | 10 | 0 | null | 1 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,090 | false |
### type on Stable Diffusion
This is the `<typeface>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
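Outside of the notebooks, recent versions of diffusers can also load the embedding directly. A minimal sketch, assuming the concept was trained for a Stable Diffusion v1.x base model (CompVis/stable-diffusion-v1-4 is used here only as an example base):
```python
import torch
from diffusers import StableDiffusionPipeline

# Example base model; the concept was trained for Stable Diffusion v1.x.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Load the <typeface> embedding from this concept repository.
pipe.load_textual_inversion("sd-concepts-library/type")

image = pipe("a poster designed with <typeface> lettering").images[0]  # illustrative prompt
image.save("typeface_sample.png")
```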
Here is the new concept you will be able to use as an `object`:





|
84e5d4d526f1ed08e9e87e619970ddf3
|
yanaiela/roberta-base-epoch_19
|
yanaiela
|
roberta
| 9 | 2 |
transformers
| 0 |
fill-mask
| true | false | false |
mit
|
['en']
|
['wikipedia', 'bookcorpus']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['roberta-base', 'roberta-base-epoch_19']
| false | true | true | 2,102 | false |
# RoBERTa, Intermediate Checkpoint - Epoch 19
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable studying the training dynamics of such models, as well as other possible use-cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_19.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
e9f13db04477ad4cf60c208ef0df28d1
|
LeKazuha/distilbert-base-uncased-finetuned-squad
|
LeKazuha
|
distilbert
| 27 | 3 |
transformers
| 0 |
question-answering
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,874 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# LeKazuha/distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.1946
- Train End Logits Accuracy: 0.6778
- Train Start Logits Accuracy: 0.6365
- Validation Loss: 1.1272
- Validation End Logits Accuracy: 0.6948
- Validation Start Logits Accuracy: 0.6569
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 11064, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.1946 | 0.6778 | 0.6365 | 1.1272 | 0.6948 | 0.6569 | 0 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.7.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
fe3d0c1f926f59e7a83a879bb118ecef
|
carblacac/xlm-roberta-base-finetuned-panx-de
|
carblacac
|
xlm-roberta
| 11 | 8 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null |
['xtreme']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 934 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
5c70fa1ca9b51081388293bfeb9bb084
|
jonatasgrosman/exp_w2v2r_es_vp-100k_accent_surpeninsular-2_nortepeninsular-8_s578
|
jonatasgrosman
|
wav2vec2
| 10 | 3 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['es']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'es']
| false | true | true | 516 | false |
# exp_w2v2r_es_vp-100k_accent_surpeninsular-2_nortepeninsular-8_s578
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
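A minimal transcription sketch with HuggingSound (the audio paths below are placeholders):
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2r_es_vp-100k_accent_surpeninsular-2_nortepeninsular-8_s578")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]  # placeholder paths

transcriptions = model.transcribe(audio_paths)
```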
|
f608e8e108a587274def6ebe72823248
|
google/t5-efficient-base-nl16
|
google
|
t5
| 12 | 7 |
transformers
| 0 |
text2text-generation
| true | true | true |
apache-2.0
|
['en']
|
['c4']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['deep-narrow']
| false | true | true | 6,253 | false |
# T5-Efficient-BASE-NL16 (Deep-Narrow version)
T5-Efficient-BASE-NL16 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details model architecture
This model checkpoint - **t5-efficient-base-nl16** - is of model type **Base** with the following variations:
- **nl** is **16**
It has **289.02** million parameters and thus requires *ca.* **1156.07 MB** of memory in full precision (*fp32*)
or **578.03 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
whereas the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model (a minimal loading sketch follows these lists):
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
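As a starting point for any of the examples above, the checkpoint can be loaded like any other T5 model. A minimal, non-authoritative sketch (assuming the repository ships the standard T5 tokenizer files):
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("google/t5-efficient-base-nl16")
model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-base-nl16")

# The checkpoint is pretrained-only, so outputs are not meaningful until fine-tuned.
inputs = tokenizer("summarize: This is a placeholder input.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```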
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future.
|
76070771c94c3aae7f866ee2677ef229
|
mathew/layoutlmv2-finetuned-funsd-1024
|
mathew
|
layoutlmv2
| 8 | 5 |
transformers
| 0 |
token-classification
| true | false | false |
cc-by-sa-4.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,047 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-finetuned-funsd-1024
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.0+cu101
- Datasets 1.14.0
- Tokenizers 0.10.3
|
73d2c5a06f1bcc230bf0db5563d2660b
|
jordyvl/udpos28-sm-all-POS
|
jordyvl
|
bert
| 13 | 5 |
transformers
| 1 |
token-classification
| true | false | false |
apache-2.0
| null |
['udpos28']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,554 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# udpos28-sm-all-POS
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the udpos28 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1479
- Precision: 0.9587
- Recall: 0.9589
- F1: 0.9588
- Accuracy: 0.9648
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1261 | 1.0 | 4978 | 0.1358 | 0.9513 | 0.9510 | 0.9512 | 0.9581 |
| 0.0788 | 2.0 | 9956 | 0.1326 | 0.9578 | 0.9578 | 0.9578 | 0.9642 |
| 0.0424 | 3.0 | 14934 | 0.1479 | 0.9587 | 0.9589 | 0.9588 | 0.9648 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu102
- Datasets 2.2.2
- Tokenizers 0.12.1
|
1d314479acef019c24a168a4208f80b0
|
jamie613/distilbert-base-uncased-distilled-clinc
|
jamie613
|
distilbert
| 10 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,772 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1291
- Accuracy: 0.9429
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2296 | 1.0 | 318 | 0.8290 | 0.7571 |
| 0.6433 | 2.0 | 636 | 0.4200 | 0.8961 |
| 0.3495 | 3.0 | 954 | 0.2493 | 0.9206 |
| 0.2254 | 4.0 | 1272 | 0.1835 | 0.9335 |
| 0.1726 | 5.0 | 1590 | 0.1576 | 0.9371 |
| 0.1467 | 6.0 | 1908 | 0.1442 | 0.9423 |
| 0.1318 | 7.0 | 2226 | 0.1360 | 0.9426 |
| 0.1229 | 8.0 | 2544 | 0.1323 | 0.9435 |
| 0.1185 | 9.0 | 2862 | 0.1299 | 0.9426 |
| 0.1151 | 10.0 | 3180 | 0.1291 | 0.9429 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Tokenizers 0.12.1
|
ecb5623a037fdb494179e5570fee5ef8
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-msa
|
CAMeL-Lab
|
bert
| 9 | 7 |
transformers
| 0 |
token-classification
| true | true | false |
apache-2.0
|
['ar']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 3,778 | false |
# CAMeLBERT-MSA POS-MSA Model
## Model description
**CAMeLBERT-MSA POS-MSA Model** is a Modern Standard Arabic (MSA) POS tagging model that was built by fine-tuning the [CAMeLBERT-MSA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa/) model.
For the fine-tuning, we used the [PATB](https://dl.acm.org/doi/pdf/10.5555/1621804.1621808) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-MSA POS-MSA model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-msa')
>>> text = 'إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع'
>>> pos(text)
[{'entity': 'noun', 'score': 0.9999764, 'index': 1, 'word': 'إمارة', 'start': 0, 'end': 5}, {'entity': 'noun_prop', 'score': 0.99991846, 'index': 2, 'word': 'أبوظبي', 'start': 6, 'end': 12}, {'entity': 'pron', 'score': 0.9998356, 'index': 3, 'word': 'هي', 'start': 13, 'end': 15}, {'entity': 'noun', 'score': 0.99368894, 'index': 4, 'word': 'إحدى', 'start': 16, 'end': 20}, {'entity': 'noun', 'score': 0.9999426, 'index': 5, 'word': 'إما', 'start': 21, 'end': 24}, {'entity': 'noun', 'score': 0.9999339, 'index': 6, 'word': '##رات', 'start': 24, 'end': 27}, {'entity': 'noun', 'score': 0.99996775, 'index': 7, 'word': 'دولة', 'start': 28, 'end': 32}, {'entity': 'noun', 'score': 0.99996895, 'index': 8, 'word': 'الإمارات', 'start': 33, 'end': 41}, {'entity': 'adj', 'score': 0.99990183, 'index': 9, 'word': 'العربية', 'start': 42, 'end': 49}, {'entity': 'adj', 'score': 0.9999347, 'index': 10, 'word': 'المتحدة', 'start': 50, 'end': 57}, {'entity': 'noun_num', 'score': 0.99931145, 'index': 11, 'word': 'السبع', 'start': 58, 'end': 63}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
```
|
7767d2d7f8cec67bd5fc3dc55e318c2a
|
pjcordero04/distilbert-base-uncased-finetuned-cola
|
pjcordero04
|
distilbert
| 14 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,571 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8348
- Matthews Correlation: 0.5443
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5236 | 1.0 | 535 | 0.5495 | 0.4205 |
| 0.3505 | 2.0 | 1070 | 0.5176 | 0.4977 |
| 0.2401 | 3.0 | 1605 | 0.5498 | 0.5354 |
| 0.1751 | 4.0 | 2140 | 0.7975 | 0.5270 |
| 0.1229 | 5.0 | 2675 | 0.8348 | 0.5443 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
cd8d60e94db8d9b83b766c60d38c0e79
|
Helsinki-NLP/opus-mt-bzs-es
|
Helsinki-NLP
|
marian
| 10 | 9 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 | false |
### opus-mt-bzs-es
* source languages: bzs
* target languages: es
* OPUS readme: [bzs-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bzs-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/bzs-es/opus-2020-01-15.zip)
* test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-es/opus-2020-01-15.test.txt)
* test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-es/opus-2020-01-15.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bzs.es | 28.1 | 0.464 |
|
262c88179aa6ecd7cd7f872351e2fa60
|
joeddav/distilbert-base-uncased-go-emotions-student
|
joeddav
|
distilbert
| 8 | 294,122 |
transformers
| 20 |
text-classification
| true | true | false |
mit
|
['en']
|
['go_emotions']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-classification', 'pytorch', 'tensorflow']
| false | true | true | 903 | false |
# distilbert-base-uncased-go-emotions-student
## Model Description
This model is distilled from the zero-shot classification pipeline on the unlabeled GoEmotions dataset using [this
script](https://github.com/huggingface/transformers/tree/master/examples/research_projects/zero-shot-distillation).
It was trained with mixed precision for 10 epochs and otherwise used the default script arguments.
## Intended Usage
The model can be used like any other model trained on GoEmotions, but will likely not perform as well as a model
trained with full supervision. It is primarily intended as a demo of how an expensive NLI-based zero-shot model
can be distilled to a more efficient student, allowing a classifier to be trained with only unlabeled data. Note
that although the GoEmotions dataset allows multiple labels per instance, the teacher used single-label
classification to create pseudo-labels.
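A minimal usage sketch with the transformers pipeline (the example sentence is only an illustration; older transformers versions may need `return_all_scores=True` instead of `top_k`):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="joeddav/distilbert-base-uncased-go-emotions-student",
    top_k=None,  # return scores for all emotion labels
)

print(classifier("Thanks for putting this model card together, it really helps!"))
```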
|
0147b379c458d39b091005e5d74b5323
|
gokuls/mobilebert_sa_GLUE_Experiment_qnli_256
|
gokuls
|
mobilebert
| 17 | 5 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,645 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_qnli_256
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6510
- Accuracy: 0.6083
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6764 | 1.0 | 819 | 0.6516 | 0.6112 |
| 0.6368 | 2.0 | 1638 | 0.6510 | 0.6083 |
| 0.6131 | 3.0 | 2457 | 0.6546 | 0.6158 |
| 0.5957 | 4.0 | 3276 | 0.6592 | 0.6101 |
| 0.5825 | 5.0 | 4095 | 0.6751 | 0.5993 |
| 0.5719 | 6.0 | 4914 | 0.6890 | 0.5993 |
| 0.5618 | 7.0 | 5733 | 0.7025 | 0.5907 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
|
343cc37c796a4f92eba7d81092724112
|
jmassot/bert-base-uncased-issues-128
|
jmassot
|
bert
| 10 | 2 |
transformers
| 0 |
fill-mask
| true | false | false |
apache-2.0
| null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,919 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-issues-128
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2512
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1019 | 1.0 | 291 | 1.7019 |
| 1.6412 | 2.0 | 582 | 1.4273 |
| 1.4844 | 3.0 | 873 | 1.3947 |
| 1.4006 | 4.0 | 1164 | 1.3698 |
| 1.3382 | 5.0 | 1455 | 1.1941 |
| 1.2822 | 6.0 | 1746 | 1.2781 |
| 1.2393 | 7.0 | 2037 | 1.2650 |
| 1.2009 | 8.0 | 2328 | 1.2082 |
| 1.1657 | 9.0 | 2619 | 1.1776 |
| 1.1394 | 10.0 | 2910 | 1.2050 |
| 1.1276 | 11.0 | 3201 | 1.2067 |
| 1.1051 | 12.0 | 3492 | 1.1630 |
| 1.0814 | 13.0 | 3783 | 1.2529 |
| 1.0757 | 14.0 | 4074 | 1.1699 |
| 1.063 | 15.0 | 4365 | 1.1113 |
| 1.0637 | 16.0 | 4656 | 1.2512 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.1
|
04bf077318dbcd59ae025138bbadcb5b
|
benjamin/roberta-base-wechsel-french
|
benjamin
|
roberta
| 21 | 13 |
transformers
| 1 |
fill-mask
| true | false | false |
mit
|
['fr']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 3,652 | false |
# roberta-base-wechsel-french
Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
See the code here: https://github.com/CPJKU/wechsel
And the paper here: https://aclanthology.org/2022.naacl-main.293/
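A minimal fill-mask sketch, assuming the standard RoBERTa `<mask>` token (the French example sentence is only an illustration):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="benjamin/roberta-base-wechsel-french")
print(unmasker("Paris est la <mask> de la France."))
```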
## Performance
### RoBERTa
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-french` | **82.43** | **90.88** | **86.65** |
| `camembert-base` | 80.88 | 90.26 | 85.57 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-german` | **81.79** | **89.72** | **85.76** |
| `deepset/gbert-base` | 78.64 | 89.46 | 84.05 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-chinese` | **78.32** | 80.55 | **79.44** |
| `bert-base-chinese` | 76.55 | **82.05** | 79.30 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-swahili` | **75.05** | **87.39** | **81.22** |
| `xlm-roberta-base` | 69.18 | 87.37 | 78.28 |
### GPT2
| Model | PPL |
|---|---|
| `gpt2-wechsel-french` | **19.71** |
| `gpt2` (retrained from scratch) | 20.47 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-german` | **26.8** |
| `gpt2` (retrained from scratch) | 27.63 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-chinese` | **51.97** |
| `gpt2` (retrained from scratch) | 52.98 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-swahili` | **10.14** |
| `gpt2` (retrained from scratch) | 10.58 |
See our paper for details.
## Citation
Please cite WECHSEL as
```
@inproceedings{minixhofer-etal-2022-wechsel,
title = "{WECHSEL}: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models",
author = "Minixhofer, Benjamin and
Paischer, Fabian and
Rekabsaz, Navid",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.293",
pages = "3992--4006",
abstract = "Large pretrained language models (LMs) have become the central building block of many NLP applications. Training these models requires ever more computational resources and most of the existing models are trained on English text only. It is exceedingly expensive to train these models in other languages. To alleviate this problem, we introduce a novel method {--} called WECHSEL {--} to efficiently and effectively transfer pretrained LMs to new languages. WECHSEL can be applied to any model which uses subword-based tokenization and learns an embedding for each subword. The tokenizer of the source model (in English) is replaced with a tokenizer in the target language and token embeddings are initialized such that they are semantically similar to the English tokens by utilizing multilingual static word embeddings covering English and the target language. We use WECHSEL to transfer the English RoBERTa and GPT-2 models to four languages (French, German, Chinese and Swahili). We also study the benefits of our method on very low-resource languages. WECHSEL improves over proposed methods for cross-lingual parameter transfer and outperforms models of comparable size trained from scratch with up to 64x less training effort. Our method makes training large language models for new languages more accessible and less damaging to the environment. We make our code and models publicly available.",
}
```
|
3979a09736ffacad494aab242b842571
|
irenepap/t5-base-asqa-ob
|
irenepap
|
t5
| 10 | 2 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,151 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-asqa-ob
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the [ASQA](https://huggingface.co/datasets/din0s/asqa) dataset with context (open book).
It achieves the following results on the evaluation set:
- Loss: 1.7256
- Rougelsum: 13.6463
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|
| 2.5404 | 1.0 | 710 | 1.8160 | 13.0967 |
| 2.0048 | 2.0 | 1420 | 1.7752 | 13.2823 |
| 1.9116 | 3.0 | 2130 | 1.7574 | 13.3068 |
| 1.8722 | 4.0 | 2840 | 1.7469 | 13.3896 |
| 1.8298 | 5.0 | 3550 | 1.7395 | 13.4231 |
| 1.8397 | 6.0 | 4260 | 1.7347 | 13.5553 |
| 1.7575 | 7.0 | 4970 | 1.7303 | 13.5613 |
| 1.7433 | 8.0 | 5680 | 1.7266 | 13.5253 |
| 1.7502 | 9.0 | 6390 | 1.7254 | 13.5391 |
| 1.731 | 10.0 | 7100 | 1.7233 | 13.4958 |
| 1.6788 | 11.0 | 7810 | 1.7250 | 13.5977 |
| 1.6793 | 12.0 | 8520 | 1.7243 | 13.5956 |
| 1.6531 | 13.0 | 9230 | 1.7255 | 13.6186 |
| 1.683 | 14.0 | 9940 | 1.7259 | 13.6567 |
| 1.6348 | 15.0 | 10650 | 1.7256 | 13.6463 |
### Framework versions
- Transformers 4.23.0.dev0
- Pytorch 1.12.1+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1
|
56a446b84e6f5401bc4a5b3686bb140d
|
lmqg/mt5-small-frquad-qg-ae
|
lmqg
|
mt5
| 40 | 202 |
transformers
| 0 |
text2text-generation
| true | false | false |
cc-by-4.0
|
['fr']
|
['lmqg/qg_frquad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['question generation', 'answer extraction']
| true | true | true | 7,578 | false |
# Model Card of `lmqg/mt5-small-frquad-qg-ae`
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) for question generation and answer extraction jointly on the [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [google/mt5-small](https://huggingface.co/google/mt5-small)
- **Language:** fr
- **Training data:** [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="fr", model="lmqg/mt5-small-frquad-qg-ae")
# model prediction
question_answer_pairs = model.generate_qa("Créateur » (Maker), lui aussi au singulier, « le Suprême Berger » (The Great Shepherd) ; de l'autre, des réminiscences de la théologie de l'Antiquité : le tonnerre, voix de Jupiter, « Et souvent ta voix gronde en un tonnerre terrifiant », etc.")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "lmqg/mt5-small-frquad-qg-ae")
# question generation
question = pipe("generate question: Créateur » (Maker), lui aussi au singulier, « <hl> le Suprême Berger <hl> » (The Great Shepherd) ; de l'autre, des réminiscences de la théologie de l'Antiquité : le tonnerre, voix de Jupiter, « Et souvent ta voix gronde en un tonnerre terrifiant », etc.")
# answer extraction
answer = pipe("extract answers: Pourtant, la strophe spensérienne, utilisée cinq fois avant que ne commence le chœur, constitue en soi un vecteur dont les répétitions structurelles, selon Ricks, relèvent du pur lyrisme tout en constituant une menace potentielle. Après les huit sages pentamètres iambiques, l'alexandrin final <hl> permet une pause <hl>, « véritable illusion d'optique » qu'accentuent les nombreuses expressions archaïsantes telles que did swoon, did seem, did go, did receive, did make, qui doublent le prétérit en un temps composé et paraissent à la fois « très précautionneuses et très peu pressées ».")
```
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-small-frquad-qg-ae/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_frquad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:-----------------------------------------------------------------|
| BERTScore | 79.9 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| Bleu_1 | 27.6 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| Bleu_2 | 16.31 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| Bleu_3 | 11 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| Bleu_4 | 7.75 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| METEOR | 17.62 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| MoverScore | 56.44 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| ROUGE_L | 28.06 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
- ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-small-frquad-qg-ae/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_frquad.default.json)
| | Score | Type | Dataset |
|:--------------------------------|--------:|:--------|:-----------------------------------------------------------------|
| QAAlignedF1Score (BERTScore) | 79.7 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| QAAlignedF1Score (MoverScore) | 54.22 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| QAAlignedPrecision (BERTScore) | 77.29 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| QAAlignedPrecision (MoverScore) | 52.84 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| QAAlignedRecall (BERTScore) | 82.36 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| QAAlignedRecall (MoverScore) | 55.76 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
- ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/mt5-small-frquad-qg-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_frquad.default.json)
| | Score | Type | Dataset |
|:-----------------|--------:|:--------|:-----------------------------------------------------------------|
| AnswerExactMatch | 46.96 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| AnswerF1Score | 67.44 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| BERTScore | 87.84 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| Bleu_1 | 40.67 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| Bleu_2 | 35.92 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| Bleu_3 | 32.1 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| Bleu_4 | 28.71 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| METEOR | 37.9 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| MoverScore | 76.45 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| ROUGE_L | 43.93 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_frquad
- dataset_name: default
- input_types: ['paragraph_answer', 'paragraph_sentence']
- output_types: ['question', 'answer']
- prefix_types: ['qg', 'ae']
- model: google/mt5-small
- max_length: 512
- max_length_output: 32
- epoch: 18
- batch: 64
- lr: 0.0005
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 1
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-small-frquad-qg-ae/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
23a94029f022bb49e0fee1d968e99b30
|
IDEA-CCNL/Erlangshen-TCBert-330M-Sentence-Embedding-Chinese
|
IDEA-CCNL
|
bert
| 5 | 38 |
transformers
| 0 |
fill-mask
| true | false | false |
apache-2.0
|
['zh']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['classification']
| false | true | true | 8,753 | false |
# IDEA-CCNL/Erlangshen-TCBert-330M-Sentence-Embedding-Chinese
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
- Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/)
## 简介 Brief Introduction
330M参数的句子表征Topic Classification BERT (TCBert)。
The TCBert with 330M parameters is pre-trained for sentence representation for Chinese topic classification tasks.
## 模型分类 Model Taxonomy
| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 通用 General | 句子表征 | 二郎神 Erlangshen | TCBert (sentence representation) | 330M | Chinese |
## 模型信息 Model Information
为了提高模型在话题分类上句子表征效果,我们收集了大量话题分类数据进行基于prompts的对比学习预训练。
To improve the model performance on sentence representation for the topic classification task, we collected numerous topic classification datasets for contrastive pre-training based on general prompts.
### 下游效果 Performance
我们为每个数据集设计了两个prompt模板。
We customize two prompt templates for each dataset.
第一个prompt模板:
For ***prompt template 1***:
| Dataset | Prompt template 1 |
|---------|:------------------------:|
| TNEWS | 下面是一则关于__的新闻: |
| CSLDCP | 这一句描述__的内容如下: |
| IFLYTEK | 这一句描述__的内容如下: |
第一个prompt模板的微调实验结果:
The **fine-tuning** results for prompt template 1:
| Model | TNEWS | CSLDCP | IFLYTEK |
|-----------------|:------:|:------:|:-------:|
| Macbert-base | 55.02 | 57.37 | 51.34 |
| Macbert-large | 55.77 | 58.99 | 50.31 |
| Erlangshen-1.3B | 57.36 | 62.35 | 53.23 |
| TCBert-base<sub>110M-Classification-Chinese | 55.57 | 58.60 | 49.63 |
| TCBert-large<sub>330M-Classification-Chinese | 56.17 | 60.06 | 51.34 |
| TCBert-1.3B<sub>1.3B-Classification-Chinese | 57.41 | 65.10 | 53.75 |
| TCBert-base<sub>110M-Sentence-Embedding-Chinese | 54.68 | 59.78 | 49.40 |
| TCBert-large<sub>330M-Sentence-Embedding-Chinese | 55.32 | 62.07 | 51.11 |
| TCBert-1.3B<sub>1.3B-Sentence-Embedding-Chinese | 57.46 | 65.04 | 53.06 |
第一个prompt模板的句子相似度结果:
The **sentence similarity** results for prompt template 1:
| | TNEWS | | CSLDCP | | IFLYTEK | |
|-----------------|:--------:|:---------:|:---------:|:---------:|:---------:|:---------:|
| Model | reference | whitening | reference | whitening | reference | whitening |
| Macbert-base | 43.53 | 47.16 | 33.50 | 36.53 | 28.99 | 33.85 |
| Macbert-large | 46.17 | 49.35 | 37.65 | 39.38 | 32.36 | 35.33 |
| Erlangshen-1.3B | 45.72 | 49.60 | 40.56 | 44.26 | 29.33 | 36.48 |
| TCBert-base<sub>110M-Classification-Chinese | 48.61 | 51.99 | 43.31 | 45.15 | 33.45 | 37.28 |
| TCBert-large<sub>330M-Classification-Chinese | 50.50 | 52.79 | 52.89 | 53.89 | 34.93 | 38.31 |
| TCBert-1.3B<sub>1.3B-Classification-Chinese | 50.80 | 51.59 | 51.93 | 54.12 | 33.96 | 38.08 |
| TCBert-base<sub>110M-Sentence-Embedding-Chinese | 45.82 | 47.06 | 42.91 | 43.87 | 33.28 | 34.76 |
| TCBert-large<sub>330M-Sentence-Embedding-Chinese | 50.10 | 50.90 | 53.78 | 53.33 | 37.62 | 36.94 |
| TCBert-1.3B<sub>1.3B-Sentence-Embedding-Chinese | 50.70 | 53.48 | 52.66 | 54.40 | 36.88 | 38.48 |
第二个prompt模板:
For ***prompt template 2***:
| Dataset | Prompt template 2 |
|---------|:------------------------:|
| TNEWS | 接下来的新闻,是跟__相关的内容: |
| CSLDCP | 接下来的学科,是跟__相关: |
| IFLYTEK | 接下来的生活内容,是跟__相关: |
第二个prompt模板的微调结果:
The **fine-tuning** results for prompt template 2:
| Model | TNEWS | CSLDCP | IFLYTEK |
|-----------------|:------:|:------:|:-------:|
| Macbert-base | 54.78 | 58.38 | 50.83 |
| Macbert-large | 56.77 | 60.22 | 51.63 |
| Erlangshen-1.3B | 57.81 | 62.80 | 52.77 |
| TCBert-base<sub>110M-Classification-Chinese | 54.58 | 59.16 | 49.80 |
| TCBert-large<sub>330M-Classification-Chinese | 56.22 | 61.23 | 50.77 |
| TCBert-1.3B<sub>1.3B-Classification-Chinese | 57.41 | 64.82 | 53.34 |
| TCBert-base<sub>110M-Sentence-Embedding-Chinese | 54.68 | 59.78 | 49.40 |
| TCBert-large<sub>330M-Sentence-Embedding-Chinese | 55.32 | 62.07 | 51.11 |
| TCBert-1.3B<sub>1.3B-Sentence-Embedding-Chinese | 56.87 | 65.83 | 52.94 |
第二个prompt模板的句子相似度结果:
The **sentence similarity** results for prompt template 2:
| | TNEWS | | CSLDCP | | IFLYTEK | |
|-----------------|:--------:|:---------:|:---------:|:---------:|:---------:|:---------:|
| Model | reference | whitening | reference | whitening | reference | whitening |
| Macbert-base | 42.29 | 45.22 | 34.23 | 37.48 | 29.62 | 34.13 |
| Macbert-large | 46.22 | 49.60 | 40.11 | 44.26 | 32.36 | 35.16 |
| Erlangshen-1.3B | 46.17 | 49.10 | 40.45 | 45.88 | 30.36 | 36.88 |
| TCBert-base<sub>110M-Classification-Chinese | 48.31 | 51.34 | 43.42 | 45.27 | 33.10 | 36.19 |
| TCBert-large<sub>330M-Classification-Chinese | 51.19 | 51.69 | 52.55 | 53.28 | 34.31 | 37.45 |
| TCBert-1.3B<sub>1.3B-Classification-Chinese | 52.14 | 52.39 | 51.71 | 53.89 | 33.62 | 38.14 |
| TCBert-base<sub>110M-Sentence-Embedding-Chinese | 46.72 | 48.86 | 43.19 | 43.53 | 34.08 | 35.79 |
| TCBert-large<sub>330M-Sentence-Embedding-Chinese | 50.65 | 51.94 | 53.84 | 53.67 | 37.74 | 36.65 |
| TCBert-1.3B<sub>1.3B-Sentence-Embedding-Chinese | 50.75 | 54.78 | 51.43 | 54.34 | 36.48 | 38.36 |
更多关于TCBERTs的细节,请参考我们的技术报告。基于新的数据,我们会更新TCBERTs,请留意我们仓库的更新。
For more details about TCBERTs, please refer to our paper. We may update TCBERTs regularly as new data comes in, so please keep an eye on the repo!
## 使用 Usage
### 使用示例 Usage Examples
```python
# Prompt-based MLM fine-tuning
from transformers import BertForMaskedLM, BertTokenizer
import torch
# Loading models
tokenizer=BertTokenizer.from_pretrained("IDEA-CCNL/Erlangshen-TCBert-330M-Sentence-Embedding-Chinese")
model=BertForMaskedLM.from_pretrained("IDEA-CCNL/Erlangshen-TCBert-330M-Sentence-Embedding-Chinese")
# Prepare the data
inputs = tokenizer("下面是一则关于[MASK][MASK]的新闻:怎样的房子才算户型方正?", return_tensors="pt")
labels = tokenizer("下面是一则关于房产的新闻:怎样的房子才算户型方正?", return_tensors="pt")["input_ids"]
labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
# Output the loss
outputs = model(**inputs, labels=labels)
loss = outputs.loss
```
```python
# Prompt-based Sentence Similarity
# To extract sentence representations.
from transformers import BertForMaskedLM, BertTokenizer
import torch
# Loading models
tokenizer=BertTokenizer.from_pretrained("IDEA-CCNL/Erlangshen-TCBert-330M-Sentence-Embedding-Chinese")
model=BertForMaskedLM.from_pretrained("IDEA-CCNL/Erlangshen-TCBert-330M-Sentence-Embedding-Chinese")
# Cosine similarity function
cos = torch.nn.CosineSimilarity(dim=0, eps=1e-8)
with torch.no_grad():
    # Extract the sentence representation for the first (training) example
    training_input = tokenizer("怎样的房子才算户型方正?", return_tensors="pt")
    training_output = model(**training_input, output_hidden_states=True)
    training_representation = torch.mean(training_output.hidden_states[-1].squeeze(), dim=0)
    # Extract the sentence representation for the second (test) example
    test_input = tokenizer("下面是一则关于[MASK][MASK]的新闻:股票放量下趺,大资金出逃谁在接盘?", return_tensors="pt")
    test_output = model(**test_input, output_hidden_states=True)
    test_representation = torch.mean(test_output.hidden_states[-1].squeeze(), dim=0)
# Calculate the similarity score between the two sentence representations
similarity_score = cos(training_representation, test_representation)
```
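The "whitening" columns in the tables above refer to post-processing the pooled sentence embeddings before computing cosine similarity. The snippet below is a minimal sketch of the commonly used whitening transform (mean removal plus an SVD-based decorrelation); it is our assumption of the procedure, not code from the TCBert repository.
```python
import torch

def whiten(embeddings: torch.Tensor, out_dim: int = None) -> torch.Tensor:
    """Whiten a (num_sentences, hidden_size) matrix of sentence embeddings."""
    mu = embeddings.mean(dim=0, keepdim=True)
    cov = torch.cov((embeddings - mu).T)           # (hidden, hidden) covariance
    u, s, _ = torch.linalg.svd(cov)                # SVD of the covariance matrix
    w = u @ torch.diag(1.0 / torch.sqrt(s))        # whitening matrix
    if out_dim is not None:
        w = w[:, :out_dim]                         # optional dimensionality reduction
    return (embeddings - mu) @ w

# Example: whiten a batch of mean-pooled representations before cosine similarity.
reps = torch.randn(128, 1024)                      # hidden size 1024 for the 330M model
whitened = whiten(reps)
```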
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的[技术报告](https://arxiv.org/abs/2211.11304):
If you use our model in your work, please cite the following paper:
```
@article{han2022tcbert,
title={TCBERT: A Technical Report for Chinese Topic Classification BERT},
author={Han, Ting and Pan, Kunhao and Chen, Xinyu and Song, Dingjie and Fan, Yuchen and Gao, Xinyu and Gan, Ruyi and Zhang, Jiaxing},
journal={arXiv preprint arXiv:2211.11304},
year={2022}
}
```
如果您在您的工作中使用了我们的模型,可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
|
4d99d51fdf01fdfabc3424849eae8d8f
|
espnet/kan-bayashi_ljspeech_tts_train_joint_conformer_fastspeech2_hifigan_raw-truncated-af8fe0
|
espnet
| null | 33 | 37 |
espnet
| 2 |
text-to-speech
| false | false | false |
cc-by-4.0
|
['en']
|
['ljspeech']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['espnet', 'audio', 'text-to-speech']
| false | true | true | 1,896 | false |
## ESPnet2 TTS pretrained model
### `kan-bayashi/ljspeech_tts_train_joint_conformer_fastspeech2_hifigan_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave`
♻️ Imported from https://zenodo.org/record/5498487/
This model was trained by kan-bayashi using ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
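Until the official snippet is published, a typical ESPnet2 TTS loading pattern looks roughly like the sketch below. It assumes `espnet` and `espnet_model_zoo` are installed and that the model tag from the heading above can be resolved and downloaded; treat it as a sketch rather than an official example.
```python
# Sketch only: assumes `pip install espnet espnet_model_zoo soundfile`.
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

text2speech = Text2Speech.from_pretrained(
    "kan-bayashi/ljspeech_tts_train_joint_conformer_fastspeech2_hifigan_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave"
)
out = text2speech("Hello, this is a joint FastSpeech2 and HiFi-GAN model.")
sf.write("out.wav", out["wav"].numpy(), text2speech.fs)
```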
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
0e03f190138ba0f2ace526efd9606934
|
Gnanesh5/CRS
|
Gnanesh5
|
xlnet
| 6 | 6 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 900 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CRS
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
09d41b865120444108cd3299a36dbc42
|
jiobiala24/wav2vec2-base-cv-10000
|
jiobiala24
|
wav2vec2
| 11 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null |
['common_voice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,330 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-cv-10000
This model is a fine-tuned version of [jiobiala24/wav2vec2-base-cv](https://huggingface.co/jiobiala24/wav2vec2-base-cv) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3393
- Wer: 0.3684
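For reference, a minimal transcription sketch with the `transformers` ASR pipeline (the audio path is a placeholder; like other wav2vec2-base checkpoints, the model expects 16 kHz mono input):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="jiobiala24/wav2vec2-base-cv-10000")

# "sample.wav" is a placeholder; the audio should be (or will be decoded to) 16 kHz mono.
print(asr("sample.wav")["text"])
```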
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.4243 | 1.6 | 1000 | 0.7742 | 0.4210 |
| 0.3636 | 3.2 | 2000 | 0.8621 | 0.4229 |
| 0.2638 | 4.8 | 3000 | 0.9328 | 0.4094 |
| 0.2273 | 6.4 | 4000 | 0.9556 | 0.4087 |
| 0.187 | 8.0 | 5000 | 0.9093 | 0.4019 |
| 0.1593 | 9.6 | 6000 | 0.9842 | 0.4029 |
| 0.1362 | 11.2 | 7000 | 1.0651 | 0.4077 |
| 0.1125 | 12.8 | 8000 | 1.0550 | 0.3959 |
| 0.103 | 14.4 | 9000 | 1.1919 | 0.4002 |
| 0.0948 | 16.0 | 10000 | 1.1901 | 0.3983 |
| 0.0791 | 17.6 | 11000 | 1.1091 | 0.3860 |
| 0.0703 | 19.2 | 12000 | 1.2823 | 0.3904 |
| 0.0641 | 20.8 | 13000 | 1.2625 | 0.3817 |
| 0.057 | 22.4 | 14000 | 1.2821 | 0.3776 |
| 0.0546 | 24.0 | 15000 | 1.2975 | 0.3770 |
| 0.0457 | 25.6 | 16000 | 1.2998 | 0.3714 |
| 0.0433 | 27.2 | 17000 | 1.3574 | 0.3721 |
| 0.0423 | 28.8 | 18000 | 1.3393 | 0.3684 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
e6a71a4aef70e4650c954745534e98a5
|
jkhan447/language-detection-Bert-base-uncased-additional
|
jkhan447
|
bert
| 13 | 2 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,039 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# language-detection-Bert-base-uncased-additional
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2330
- Accuracy: 0.9497
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
c2bf7ab792bdaf0467dcd55036f8b29c
|
timm/efficientformer_l1.snap_dist_in1k
|
timm
| null | 4 | 18 |
timm
| 0 |
image-classification
| true | false | false |
apache-2.0
| null |
['imagenet-1k']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['image-classification', 'timm']
| false | true | true | 3,518 | false |
# Model card for efficientformer_l1.snap_dist_in1k
An EfficientFormer image classification model. Pretrained with distillation on ImageNet-1k.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 12.3
- GMACs: 1.3
- Activations (M): 5.5
- Image size: 224 x 224
- **Original:** https://github.com/snap-research/EfficientFormer
- **Papers:**
- EfficientFormer: Vision Transformers at MobileNet Speed: https://arxiv.org/abs/2206.01191
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model('efficientformer_l1.snap_dist_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model(
'efficientformer_l1.snap_dist_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, i.e. a (batch_size, num_features, H, W) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is (batch_size, num_features) tensor
```
## Model Comparison
|model |top1 |top5 |param_count|img_size|
|-----------------------------------|------|------|-----------|--------|
|efficientformerv2_l.snap_dist_in1k |83.628|96.54 |26.32 |224 |
|efficientformer_l7.snap_dist_in1k |83.368|96.534|82.23 |224 |
|efficientformer_l3.snap_dist_in1k |82.572|96.24 |31.41 |224 |
|efficientformerv2_s2.snap_dist_in1k|82.128|95.902|12.71 |224 |
|efficientformer_l1.snap_dist_in1k |80.496|94.984|12.29 |224 |
|efficientformerv2_s1.snap_dist_in1k|79.698|94.698|6.19 |224 |
|efficientformerv2_s0.snap_dist_in1k|76.026|92.77 |3.6 |224 |
## Citation
```bibtex
@article{li2022efficientformer,
title={EfficientFormer: Vision Transformers at MobileNet Speed},
author={Li, Yanyu and Yuan, Geng and Wen, Yang and Hu, Ju and Evangelidis, Georgios and Tulyakov, Sergey and Wang, Yanzhi and Ren, Jian},
journal={arXiv preprint arXiv:2206.01191},
year={2022}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}
```
|
fbc2b3631ad45fc9ac898ab2efa3c3df
|
Guernika/CoreMLStableDiffusion
|
Guernika
| null | 26 | 0 | null | 16 | null | false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 9 | 4 | 5 |
[]
| false | true | true | 7,075 | false |
# Guernika
This repository contains [Guernika](https://apps.apple.com/app/id1660407508) compatible models and instructions to convert existing models.
While these models and instructions were created for [Guernika](https://apps.apple.com/app/id1660407508), they should work and help with any CoreML based solution.
## <a name="converting-models-to-guernika"></a> Converting Models to Guernika
**WARNING:** Xcode is required to convert models:
- Make sure you have [Xcode](https://apps.apple.com/app/id497799835) installed.
- Once installed run the following commands:
```shell
sudo xcode-select --switch /Applications/Xcode.app/Contents/Developer/
sudo xcodebuild -license accept
```
- You should now be ready to start converting models!
### <a name="converting-models-advanced"></a> Easy mode
**Step 1:** Download and install [`Guernika Model Converter`](https://huggingface.co/Guernika/CoreMLStableDiffusion/resolve/main/GuernikaModelConverter.dmg).
[<img alt="Guernika Model Converter icon" src="https://huggingface.co/Guernika/CoreMLStableDiffusion/resolve/main/GuernikaModelConverter_AppIcon.png" width="256pt" />](https://huggingface.co/Guernika/CoreMLStableDiffusion/resolve/main/GuernikaModelConverter.dmg)
**Step 2:** Launch `Guernika Model Converter` from your `Applications` folder, this app may take a few seconds to load.
**Step 3:** Once the app has loaded you will be able to select what model you want to convert:
- You can input the model identifier (e.g. CompVis/stable-diffusion-v1-4) to download from Hugging Face. You may have to log in to or register for your [Hugging Face account](https://huggingface.co), generate a [User Access Token](https://huggingface.co/settings/tokens) and use this token to set up Hugging Face API access by running `huggingface-cli login` in a Terminal window.
- You can select a local model from your machine: `Select local model`
- You can select a local .CKPT model from your machine: `Select CKPT`
<img alt="Guernika Model Converter interface" src="https://huggingface.co/Guernika/CoreMLStableDiffusion/resolve/main/GuernikaModelConverter_screenshot.png" />
**Step 4:** Once you've chosen the model you want to convert, you can choose which modules to convert and whether to chunk the UNet module (recommended for iOS/iPadOS devices).
**Step 5:** Once you're happy with your selection click `Convert to Guernika` and wait for the app to complete conversion.
**WARNING:** This command may download several GB worth of PyTorch checkpoints from Hugging Face and may take a long time to complete (15-20 minutes on an M1 machine).
### <a name="converting-models-advanced"></a> Advance mode
**Step 1:** Create a Python environment and install dependencies:
```bash
conda create -n guernika python=3.8 -y
conda activate guernika
cd /path/to/unziped/scripts/location
pip install -e .
```
**Step 2:** Choose what model you want to convert:
**Hugging Face model:** Log in to or register for your [Hugging Face account](https://huggingface.co), generate a [User Access Token](https://huggingface.co/settings/tokens) and use this token to set up Hugging Face API access by running `huggingface-cli login` in a Terminal window.
Once you know what model you want to convert and have accepted its Terms of Use, run the following command replacing `<model-identifier>` with the desired model's identifier:
```shell
python -m python_coreml_stable_diffusion.torch2coreml --model-version <model-identifier> -o <output-directory> --convert-unet --convert-text-encoder --convert-vae-encoder --convert-vae-decoder --convert-safety-checker --bundle-resources-for-guernika --clean-up-mlpackages
```
**Local model:** Run the following command replacing `<model-location>` with the desired model's location path:
```shell
python -m python_coreml_stable_diffusion.torch2coreml --model-location <model-location> -o <output-directory> --convert-unet --convert-text-encoder --convert-vae-encoder --convert-vae-decoder --convert-safety-checker --bundle-resources-for-guernika --clean-up-mlpackages
```
**Local CKPT:** Run the following command replacing `<checkpoint-path>` with the desired CKPT's location path:
```shell
python -m python_coreml_stable_diffusion.torch2coreml --checkpoint-path <checkpoint-path> -o <output-directory> --convert-unet --convert-text-encoder --convert-vae-encoder --convert-vae-decoder --convert-safety-checker --bundle-resources-for-guernika --clean-up-mlpackages
```
**WARNING:** These commands may download several GB worth of PyTorch checkpoints from Hugging Face.
This generally takes 15-20 minutes on an M1 MacBook Pro. Upon successful execution, the neural network models that comprise Stable Diffusion's model will have been converted from PyTorch to Guernika and saved into the specified `<output-directory>`.
#### <a name="converting-models--arguments"></a> Notable arguments
- `--model-version`: The model version defaults to [CompVis/stable-diffusion-v1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4). Developers may specify other versions that are available on [Hugging Face Hub](https://huggingface.co/models?search=stable-diffusion), e.g. [stabilityai/stable-diffusion-2-base](https://huggingface.co/stabilityai/stable-diffusion-2-base) & [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5).
- `--model-location`: The location of a local model defaults to `None`.
- `--checkpoint-path`: The location of a local .CKPT model defaults to `None`.
- `--bundle-resources-for-guernika`: Compiles all 4 models and bundles them along with necessary resources for text tokenization into `<output-mlpackages-directory>/Resources`, which should be provided as input to the Swift package. This flag is not necessary for the diffusers-based Python pipeline.
- `--clean-up-mlpackages`: Cleans up created .mlpackages leaving only the compiled model.
- `--chunk-unet`: Splits the Unet model in two approximately equal chunks (each with less than 1GB of weights) for mobile-friendly deployment. This is **required** for ANE deployment on iOS and iPadOS. This is not required for macOS. Swift CLI is able to consume both the chunked and regular versions of the Unet model but prioritizes the former. Note that chunked unet is not compatible with the Python pipeline because Python pipeline is intended for macOS only. Chunking is for on-device deployment with Swift only.
- `--attention-implementation`: Defaults to `SPLIT_EINSUM` which is the implementation described in [Deploying Transformers on the Apple Neural Engine](https://machinelearning.apple.com/research/neural-engine-transformers). `--attention-implementation ORIGINAL` will switch to an alternative that should be used for non-ANE deployment. Please refer to the [Performance Benchmark](#performance-benchmark) section for further guidance.
- `--check-output-correctness`: Compares original PyTorch model's outputs to final Core ML model's outputs. This flag increases RAM consumption significantly so it is recommended only for debugging purposes.
|
4108f9148d2f9cf25fd474fc487a4c3d
|
merve/multilabel-v1-replica
|
merve
|
bert
| 13 | 7 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['tr']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,605 | false |
**Train-Test Set:** "intent-multilabel-v1-2.zip"
**Model:** "dbmdz/bert-base-turkish-cased"
## Tokenizer Params
```
max_length=128
padding="max_length"
truncation=True
```
## Training Params
```
evaluation_strategy = "epoch"
save_strategy = "epoch"
per_device_train_batch_size = 16
per_device_eval_batch_size = 16
num_train_epochs = 4
load_best_model_at_end = True
```
## Train-Val Splitting Configuration
```
train_test_split(df_train,
test_size=0.1,
random_state=1111)
```
## Training Log
```
Epoch Training Loss Validation Loss
1 No log 0.150276
2 0.195100 0.132906
3 0.107700 0.128633
4 0.107700 0.127795
```
## Threshold Optimization
- **Best Threshold:** 0.1
- **F1 @ Threshold:** 0.734
## Eval Results
```
precision recall f1-score support
Alakasiz 0.90 0.87 0.89 734
Barinma 0.85 0.80 0.83 207
Elektronik 0.73 0.78 0.75 130
Giysi 0.83 0.66 0.73 94
Kurtarma 0.86 0.79 0.82 362
Lojistik 0.73 0.51 0.60 112
Saglik 0.74 0.74 0.74 108
Su 0.64 0.60 0.62 78
Yagma 0.68 0.55 0.61 31
Yemek 0.80 0.83 0.81 117
micro avg 0.84 0.79 0.81 1973
macro avg 0.78 0.71 0.74 1973
weighted avg 0.84 0.79 0.81 1973
samples avg 0.84 0.82 0.82 1973
```
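The evaluation above is computed by thresholding per-label sigmoid scores at the optimized value of 0.1. A minimal inference sketch follows; the example sentence is illustrative, and it assumes the checkpoint stores the label names in `config.id2label`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "merve/multilabel-v1-replica"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Enkaz altında kalanlar var, kurtarma ekibi ve battaniye lazım."  # illustrative example
inputs = tokenizer(text, max_length=128, padding="max_length", truncation=True, return_tensors="pt")

with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits).squeeze(0)

threshold = 0.1  # best threshold reported above
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p >= threshold]
print(predicted)
```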
|
018bbd21cd8a5c6e0e4528ed5a13a430
|
SetFit/MiniLM-L12-H384-uncased__sst2__all-train
|
SetFit
|
bert
| 10 | 22 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,540 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MiniLM-L12-H384-uncased__sst2__all-train
This model is a fine-tuned version of [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2632
- Accuracy: 0.9055
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4183 | 1.0 | 433 | 0.3456 | 0.8720 |
| 0.2714 | 2.0 | 866 | 0.2632 | 0.9055 |
| 0.2016 | 3.0 | 1299 | 0.3357 | 0.8990 |
| 0.1501 | 4.0 | 1732 | 0.4474 | 0.8863 |
| 0.1119 | 5.0 | 2165 | 0.3998 | 0.8979 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
ccf4f28311f1f78b7ec2dc03840d4013
|
vishwasgautam/wav2vec2-base-libriSpeech-demo-colab
|
vishwasgautam
|
wav2vec2
| 41 | 7 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,345 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-libriSpeech-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4627
- Wer: 0.3174
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.2349 | 13.51 | 500 | 3.1154 | 1.0 |
| 1.5 | 27.03 | 1000 | 0.4627 | 0.3174 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
53f20d202cfa67d04a40df13becf02bb
|
microsoft/tapex-large-sql-execution
|
microsoft
|
bart
| 8 | 275 |
transformers
| 4 |
table-question-answering
| true | false | false |
mit
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['tapex', 'table-question-answering']
| false | true | true | 3,155 | false |
# TAPEX (large-sized model)
TAPEX was proposed in [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. The original repo can be found [here](https://github.com/microsoft/Table-Pretraining).
## Model description
TAPEX (**Ta**ble **P**re-training via **Ex**ecution) is a conceptually simple and empirically powerful pre-training approach to empower existing models with *table reasoning* skills. TAPEX realizes table pre-training by learning a neural SQL executor over a synthetic corpus, which is obtained by automatically synthesizing executable SQL queries.
TAPEX is based on the BART architecture, the transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder.
## Intended Uses
You can use the raw model for simulating neural SQL execution, i.e., employ TAPEX to execute a SQL query on a given table. However, the model is mostly meant to be fine-tuned on a supervised dataset. Currently TAPEX can be fine-tuned to tackle table question answering tasks and table fact verification tasks. See the [model hub](https://huggingface.co/models?search=tapex) to look for fine-tuned versions on a task that interests you.
### How to Use
Here is how to use this model in transformers:
```python
from transformers import TapexTokenizer, BartForConditionalGeneration
import pandas as pd
tokenizer = TapexTokenizer.from_pretrained("microsoft/tapex-large-sql-execution")
model = BartForConditionalGeneration.from_pretrained("microsoft/tapex-large-sql-execution")
data = {
"year": [1896, 1900, 1904, 2004, 2008, 2012],
"city": ["athens", "paris", "st. louis", "athens", "beijing", "london"]
}
table = pd.DataFrame.from_dict(data)
# tapex accepts uncased input since it is pre-trained on the uncased corpus
query = "select year where city = beijing"
encoding = tokenizer(table=table, query=query, return_tensors="pt")
outputs = model.generate(**encoding)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# ['2008']
```
### How to Fine-tune
⚠️ This model checkpoint is **ONLY** used for simulating neural SQL execution (i.e., employ TAPEX to execute a SQL query on a given table), and you **CANNOT** use this model for fine-tuning on downstream tasks. The one that can be used for fine-tuning is available [here](https://huggingface.co/microsoft/tapex-large).
> This separation of two models for two kinds of intention is because of a known issue in BART large, and we recommend readers to see [this comment](https://github.com/huggingface/transformers/issues/15559#issuecomment-1062880564) for more details.
### BibTeX entry and citation info
```bibtex
@inproceedings{
liu2022tapex,
title={{TAPEX}: Table Pre-training via Learning a Neural {SQL} Executor},
author={Qian Liu and Bei Chen and Jiaqi Guo and Morteza Ziyadi and Zeqi Lin and Weizhu Chen and Jian-Guang Lou},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=O50443AsCP}
}
```
|
74ac756cd31bd4018124bf046caa3491
|
jamesesguerra/distilbart-cnn-12-6-finetuned-1.2.1
|
jamesesguerra
|
bart
| 14 | 2 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,478 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-cnn-12-6-finetuned-1.2.1
This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9404
- Rouge1: 30.4308
- Rouge2: 13.2594
- Rougel: 25.8203
- Rougelsum: 25.9617
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 2.5124 | 1.0 | 1171 | 2.0753 | 29.493 | 12.3563 | 24.8091 | 24.9317 |
| 1.7628 | 2.0 | 2342 | 1.9404 | 30.4308 | 13.2594 | 25.8203 | 25.9617 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.12.1
|
afcf84a96b3dd90d9f02fa86975b878a
|
sd-dreambooth-library/marina
|
sd-dreambooth-library
| null | 22 | 4 |
diffusers
| 0 | null | false | false | false |
mit
| null | null | null | 2 | 2 | 0 | 0 | 1 | 1 | 0 |
[]
| false | true | true | 1,229 | false |
### marina on Stable Diffusion via Dreambooth
#### model by Eddiefloat
This is the Stable Diffusion model fine-tuned on the marina concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **marina**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
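A minimal `diffusers` inference sketch is shown below; it assumes the repository ships standard Stable Diffusion weights, and the prompt is only illustrative.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sd-dreambooth-library/marina", torch_dtype=torch.float16
).to("cuda")

# The instance prompt token for this concept is "marina".
image = pipe("a portrait photo of marina, highly detailed").images[0]
image.save("marina.png")
```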
Here are the images used for training this concept:




|
cd087d04b81d151cf82df6b4c8924f68
|
jonatasgrosman/exp_w2v2t_it_xlsr-53_s237
|
jonatasgrosman
|
wav2vec2
| 10 | 10 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['it']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'it']
| false | true | true | 461 | false |
# exp_w2v2t_it_xlsr-53_s237
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition using the train split of [Common Voice 7.0 (it)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
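A minimal transcription sketch with HuggingSound (the audio paths are placeholders):
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_it_xlsr-53_s237")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]  # placeholders

transcriptions = model.transcribe(audio_paths)
print(transcriptions[0]["transcription"])
```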
|
c26dc8140018e5f11b10133a07e4ee40
|
DrishtiSharma/finetuned-ViT-Indian-Food-Classification-v1
|
DrishtiSharma
|
vit
| 17 | 5 |
transformers
| 0 |
image-classification
| true | false | false |
apache-2.0
| null |
['imagefolder']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['image-classification', 'generated_from_trainer']
| true | true | true | 2,241 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-ViT-Indian-Food-Classification-v1
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the Human_Action_Recognition dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2665
- Accuracy: 0.9341
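A minimal classification sketch with `transformers` is given below; the image path is a placeholder and it assumes the fine-tuned label names are stored in the model config.
```python
import torch
from PIL import Image
from transformers import AutoFeatureExtractor, AutoModelForImageClassification

model_id = "DrishtiSharma/finetuned-ViT-Indian-Food-Classification-v1"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("example.jpg")  # placeholder path
inputs = feature_extractor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```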
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2019 | 0.3 | 100 | 0.9317 | 0.8555 |
| 0.6664 | 0.6 | 200 | 0.5432 | 0.8959 |
| 0.5096 | 0.9 | 300 | 0.4700 | 0.8990 |
| 0.6116 | 1.2 | 400 | 0.4504 | 0.8799 |
| 0.4326 | 1.5 | 500 | 0.3856 | 0.8980 |
| 0.3349 | 1.8 | 600 | 0.3471 | 0.9129 |
| 0.5141 | 2.1 | 700 | 0.3708 | 0.9033 |
| 0.32 | 2.4 | 800 | 0.3338 | 0.9139 |
| 0.2611 | 2.7 | 900 | 0.3159 | 0.9171 |
| 0.1836 | 3.0 | 1000 | 0.2696 | 0.9299 |
| 0.2492 | 3.3 | 1100 | 0.2979 | 0.9214 |
| 0.1846 | 3.6 | 1200 | 0.3165 | 0.9203 |
| 0.1505 | 3.9 | 1300 | 0.2806 | 0.9288 |
| 0.1854 | 4.2 | 1400 | 0.2665 | 0.9341 |
| 0.124 | 4.5 | 1500 | 0.2695 | 0.9341 |
| 0.0719 | 4.8 | 1600 | 0.2668 | 0.9320 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
fac65700dd8fed5d816be3d0b6ef894d
|
nlp4good/psych-search
|
nlp4good
|
bert
| 10 | 127 |
transformers
| 1 |
fill-mask
| true | false | true |
apache-2.0
|
['en']
|
['PubMed']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['mental-health']
| false | true | true | 5,241 | false |
# Psych-Search
Psych-Search is a work in progress to bring cutting edge NLP to mental health practitioners. The model detailed here serves as a foundation for traditional classification models as well as NLU models for a Psych-Search application. The goal of the Psych-Search Application is to use a combination of traditional text classification models to expand the scope of the MESH taxonomy with the inclusion of relevant categories for mental health practitioners designing suicide prevention programs for adolescent communities within the United States, as well as the automatic extraction and standardization of entities such as risk factors and protective factors.
Our first expansion efforts to the MESH taxonomy include categories:
- Prevention Strategies
- Protective Factors
We are actively looking for partners on this work and would love to hear from you! Please ping us at [email protected].
## Model description
This model is an extension of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased). Continued pretraining was done using SciBERT as the base model on abstract text only from Psychology and Psychiatry PubMed research. Training was done on approximately 3.5 million papers for 10 epochs and evaluated on a task similar to BioASQ Task A.
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, AutoModel
mname = "nlp4good/psych-search"
tokenizer = AutoTokenizer.from_pretrained(mname)
model = AutoModel.from_pretrained(mname)
```
### Limitations and bias
This model was trained on all PubMed abstracts categorized under [Psychology and Psychiatry](https://meshb.nlm.nih.gov/treeView). As of March 1, this corresponds to approximately 3.2 million papers that contain abstract text. Of these 3.2 million papers, relevant sparse mental health categories were back-translated to increase the representation of certain mental health categories.
There are several limitations with this dataset, including large discrepancies in the number of papers associated with [Sexual and Gender Minorities](https://meshb.nlm.nih.gov/record/ui?ui=D000072339). The training data consisted of the following breakdown across gender groups:
Female | Male | Sexual and Gender Minorities
-------|---------|----------
1,896,301 | 1,945,279 | 4,529
Similar discrepancies are present within [Ethnic Groups](https://meshb.nlm.nih.gov/record/ui?ui=D005006) as defined within the MESH taxonomy:
| African Americans | Arabs | Asian Americans | Hispanic Americans | Indians, Central American | Indians, North American | Indians, South American | Indigenous Peoples | Mexican Americans |
|-------------------|-------|-----------------|--------------------|---------------------------|-------------------------|-------------------------|--------------------|-------------------|
| 31,027 | 2,437 | 5,612 | 18,893 | 124 | 5,657 | 633 | 174 | 3,234 |
These discrepancies can have a significant impact on information retrieval systems, downstream machine learning models, and other forms of NLP that leverage these pretrained models.
## Training data
This model was trained on all PubMed abstracts categorized under [Psychology and Psychiatry](https://meshb.nlm.nih.gov/treeView). As of March 1, this corresponds to approximately 3.2 million papers that contain abstract text. Of these 3.2 million papers, relevant sparse categories were back-translated from English to French and back from French to English to increase the representation of sparser mental health categories. This included back-translating papers with the following categories:
- Depressive Disorder
- Risk Factors
- Mental Disorders
- Child, Preschool
- Mental Health
In aggregate, this process added 557,980 additional papers to our training data.
## Training procedure
Continued pretraining was done on Psychology and Psychiatry PubMed papers for 10 epochs. Default parameters were used with the exception of gradient accumulation steps which was set at 4, with a per device train batch size of 32. 2 x Nvidia 3090's were used in the development of this model.
## Evaluation results
To evaluate the effectiveness of psych-search within the mental health domain, an evaluation task was constructed by finetuning psych-search for a task similar to [BioASQ Task A](http://bioasq.org/). Here we perform large scale biomedical indexing using the MESH taxonomy associated with each paper underneath Psychology and Psychiatry. The evaluation metric is the micro F1 score across all second level descriptors within Psychology and Psychiatry. This corresponds to 38 different MESH categories used during evaluation.
bert-base-uncased | SciBERT Scivocab Uncased | Psych-Search
-------|---------|----------
0.7348 | 0.7394 | 0.7415
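As a rough sketch of this evaluation setup, the snippet below puts a multi-label classification head on top of psych-search and scores micro-F1 over thresholded sigmoid outputs. The label count comes from the 38 second-level descriptors mentioned above; the 0.5 threshold and the rest of the training loop are assumptions, not the exact configuration behind the reported numbers.
```python
import torch
from sklearn.metrics import f1_score
from transformers import AutoTokenizer, AutoModelForSequenceClassification

NUM_MESH_LABELS = 38  # second-level Psychology/Psychiatry descriptors used above

tokenizer = AutoTokenizer.from_pretrained("nlp4good/psych-search")
model = AutoModelForSequenceClassification.from_pretrained(
    "nlp4good/psych-search",
    num_labels=NUM_MESH_LABELS,
    problem_type="multi_label_classification",  # new head, to be fine-tuned
)

def micro_f1(logits: torch.Tensor, labels: torch.Tensor, threshold: float = 0.5) -> float:
    """Micro-F1 over thresholded sigmoid scores for a multi-label batch."""
    preds = (torch.sigmoid(logits) >= threshold).int().numpy()
    return f1_score(labels.int().numpy(), preds, average="micro")
```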
## Next Steps
If you are interested in continuing to build on this work or have other ideas on how we can build on others' work, please let us know! We can be reached at [email protected]. Our goal is to bring state of the art NLP capabilities to underserved areas of research, with mental health being our top priority.
|
f5cc81f481b321cd2f89fc594cafe839
|
ScandinavianMrT/gpt2_ONION_prefinetune_4.0
|
ScandinavianMrT
|
gpt2
| 13 | 6 |
transformers
| 0 |
text-generation
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,276 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2_ONION_prefinetune_4.0
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.6484
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 153 | 4.7368 |
| No log | 2.0 | 306 | 4.6732 |
| No log | 3.0 | 459 | 4.6527 |
| 4.8529 | 4.0 | 612 | 4.6484 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
8b92533afd5a3aa433577a3b938a1fc7
|
itsGanni/Catalan_language-clustered
|
itsGanni
|
distilbert
| 8 | 0 |
transformers
| 0 |
question-answering
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,858 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# itsGanni/Catalan_language-clustered
This model is a fine-tuned version of [nandysoham/13-clustered](https://huggingface.co/nandysoham/13-clustered) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7867
- Train End Logits Accuracy: 0.8125
- Train Start Logits Accuracy: 0.7639
- Validation Loss: 0.4452
- Validation End Logits Accuracy: 0.8182
- Validation Start Logits Accuracy: 0.8182
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.7867 | 0.8125 | 0.7639 | 0.4452 | 0.8182 | 0.8182 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
a8d96b8c8fb100ca55d3b0b8f4dfd0fa
|
Palak/albert-large-v2_squad
|
Palak
|
albert
| 12 | 5 |
transformers
| 0 |
question-answering
| true | false | false |
apache-2.0
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,016 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-large-v2_squad
This model is a fine-tuned version of [albert-large-v2](https://huggingface.co/albert-large-v2) on the **squadV1** dataset.
- "eval_exact_match": 84.80605487228004
- "eval_f1": 91.80638438705844
- "eval_samples": 10808
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.14.1
- Pytorch 1.9.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
0f4960b83f4ee8d14dbeedfad29b81dd
|
Helsinki-NLP/opus-mt-fi-de
|
Helsinki-NLP
|
marian
| 10 | 5,515 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 770 | false |
### opus-mt-fi-de
* source languages: fi
* target languages: de
* OPUS readme: [fi-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-04.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-de/opus-2019-12-04.zip)
* test set translations: [opus-2019-12-04.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-de/opus-2019-12-04.test.txt)
* test set scores: [opus-2019-12-04.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-de/opus-2019-12-04.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fi.de | 45.2 | 0.637 |
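A minimal translation sketch with the MarianMT classes in `transformers` (the Finnish example sentence is illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-fi-de"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src = ["Hyvää huomenta, miten voit?"]  # illustrative Finnish input
batch = tokenizer(src, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```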
|
2c82aeadb3b79be7f5d06c4a25e86ad0
|
ubikpt/t5-small-finetuned-cnn-v2
|
ubikpt
|
t5
| 13 | 10 |
transformers
| 0 |
summarization
| true | false | false |
apache-2.0
| null |
['cnn_dailymail']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['summarization', 'generated_from_trainer']
| true | true | true | 2,004 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnn-v2
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5474
- Rouge1: 35.154
- Rouge2: 18.683
- Rougel: 30.8481
- Rougelsum: 32.9638
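A minimal summarization sketch is shown below; the article text is illustrative, and the explicit `summarize:` prefix follows the usual T5 convention for this task.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "ubikpt/t5-small-finetuned-cnn-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = (
    "The city council approved a new budget on Tuesday, allocating additional funds "
    "to public transport and road maintenance over the next two years."
)  # illustrative example

inputs = tokenizer("summarize: " + article, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=60, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```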
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 1.8823 | 1.0 | 35890 | 1.5878 | 34.9676 | 18.4927 | 30.6753 | 32.7702 |
| 1.7871 | 2.0 | 71780 | 1.5709 | 34.9205 | 18.5556 | 30.6514 | 32.745 |
| 1.7507 | 3.0 | 107670 | 1.5586 | 34.9825 | 18.4964 | 30.6724 | 32.7644 |
| 1.7253 | 4.0 | 143560 | 1.5584 | 35.074 | 18.6171 | 30.8007 | 32.9132 |
| 1.705 | 5.0 | 179450 | 1.5528 | 35.023 | 18.5787 | 30.7014 | 32.8396 |
| 1.6894 | 6.0 | 215340 | 1.5518 | 35.0583 | 18.6754 | 30.791 | 32.8814 |
| 1.6776 | 7.0 | 251230 | 1.5468 | 35.2236 | 18.6812 | 30.8944 | 33.0362 |
| 1.6687 | 8.0 | 287120 | 1.5474 | 35.154 | 18.683 | 30.8481 | 32.9638 |
### Framework versions
- Transformers 4.14.0
- Pytorch 1.5.0
- Datasets 2.3.2
- Tokenizers 0.10.3
|
797f20ef0fbe4c4c28614ed9d6244967
|
anmol-chawla/animecharacters-3000
|
anmol-chawla
| null | 15 | 70 |
diffusers
| 1 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 627 | false |
### animecharacters_3000 Dreambooth model trained by anmol-chawla with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
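Alternatively, a minimal local inference sketch with `diffusers` (the prompt token below is a placeholder, since the card does not state the instance prompt used during training):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the Dreambooth checkpoint from the Hub.
pipe = StableDiffusionPipeline.from_pretrained(
    "anmol-chawla/animecharacters-3000", torch_dtype=torch.float16
).to("cuda")

# Placeholder prompt: swap in the concept token that was used for training.
prompt = "a portrait of animecharacters_3000, anime style, highly detailed"
image = pipe(prompt).images[0]
image.save("sample.png")
```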
Sample pictures of this concept:
|
3fcc01f1be2b86e7b645d31fcc19ba71
|
shuidun/test1
|
shuidun
|
gpt2
| 13 | 0 |
transformers
| 0 |
text-generation
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,016 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test1
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
5728c5a477266a1ed70e8f1548c58231
|
sd-concepts-library/toyota-sera
|
sd-concepts-library
| null | 15 | 0 | null | 0 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,700 | false |
### Toyota Sera on Stable Diffusion
This is the `<toyota-sera>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
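For local use, a minimal sketch assuming a recent `diffusers` release (with `load_textual_inversion`) and a v1-style base checkpoint, neither of which is specified in this card:

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumed base model: any Stable Diffusion v1-style checkpoint should work.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Attach the learned <toyota-sera> embedding from this repository.
pipe.load_textual_inversion("sd-concepts-library/toyota-sera")

image = pipe("a photo of a <toyota-sera> parked by the beach").images[0]
image.save("toyota-sera.png")
```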
Here is the new concept you will be able to use as an `object`:










|
1bf5b9f4bcc8a36a37918104510235e6
|
hammuneer/my_awesome_billsum_model
|
hammuneer
|
t5
| 20 | 0 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,706 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4633
- Rouge1: 0.1168
- Rouge2: 0.0244
- Rougel: 0.0933
- Rougelsum: 0.0933
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 261 | 3.6283 | 0.0812 | 0.0153 | 0.0636 | 0.0637 | 19.0 |
| 4.0281 | 2.0 | 522 | 3.5141 | 0.1064 | 0.0206 | 0.0846 | 0.0845 | 19.0 |
| 4.0281 | 3.0 | 783 | 3.4741 | 0.1154 | 0.0242 | 0.092 | 0.092 | 19.0 |
| 3.7182 | 4.0 | 1044 | 3.4633 | 0.1168 | 0.0244 | 0.0933 | 0.0933 | 19.0 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
ae6cddced485875bbb97b4c6c0d2935a
|
desh2608/icefall-asr-spgispeech-pruned-transducer-stateless2
|
desh2608
| null | 76 | 0 |
k2
| 0 | null | false | false | false |
mit
|
['en']
|
['SPGISpeech']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['k2', 'icefall']
| false | true | true | 1,580 | false |
# SPGISpeech
SPGISpeech consists of 5,000 hours of recorded company earnings calls and their respective
transcriptions. The original calls were split into slices ranging from 5 to 15 seconds in
length to allow easy training for speech recognition systems. Calls represent a broad
cross-section of international business English; SPGISpeech contains approximately 50,000
speakers, one of the largest numbers of any speech corpus, and offers a variety of L1 and
L2 English accents. The format of each WAV file is single channel, 16kHz, 16 bit audio.
Transcription text represents the output of several stages of manual post-processing.
As such, the text contains polished English orthography following a detailed style guide,
including proper casing, punctuation, and denormalized non-standard words such as numbers
and acronyms, making SPGISpeech suited for training fully formatted end-to-end models.
Official reference:
O’Neill, P.K., Lavrukhin, V., Majumdar, S., Noroozi, V., Zhang, Y., Kuchaiev, O., Balam,
J., Dovzhenko, Y., Freyberg, K., Shulman, M.D., Ginsburg, B., Watanabe, S., & Kucsko, G.
(2021). SPGISpeech: 5, 000 hours of transcribed financial audio for fully formatted
end-to-end speech recognition. ArXiv, abs/2104.02014.
ArXiv link: https://arxiv.org/abs/2104.02014
## Performance Record
| Decoding method | val |
|---------------------------|------------|
| greedy search | 2.40 |
| beam search | 2.24 |
| modified beam search | 2.30 |
| fast beam search | 2.35 |
|
b283746ccb3cd56bae069923274a9670
|
IIIT-L/hing-roberta-finetuned-TRAC-DS
|
IIIT-L
|
xlm-roberta
| 9 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
cc-by-4.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,472 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hing-roberta-finetuned-TRAC-DS
This model is a fine-tuned version of [l3cube-pune/hing-roberta](https://huggingface.co/l3cube-pune/hing-roberta) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1610
- Accuracy: 0.7149
- Precision: 0.6921
- Recall: 0.6946
- F1: 0.6932
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.8796394086479776e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 43
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.7229 | 2.0 | 1224 | 0.7178 | 0.6928 | 0.6815 | 0.6990 | 0.6780 |
| 0.3258 | 3.99 | 2448 | 1.1610 | 0.7149 | 0.6921 | 0.6946 | 0.6932 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.10.1+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
|
5a8cd8444190e083ddf279dc07f1f1b1
|
gokuls/distilbert_sa_GLUE_Experiment_logit_kd_rte_256
|
gokuls
|
distilbert
| 17 | 2 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,997 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_logit_kd_rte_256
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4233
- Accuracy: 0.4729
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4233 | 1.0 | 10 | 0.4237 | 0.4729 |
| 0.4174 | 2.0 | 20 | 0.4245 | 0.4729 |
| 0.4184 | 3.0 | 30 | 0.4235 | 0.4729 |
| 0.4174 | 4.0 | 40 | 0.4250 | 0.4729 |
| 0.4174 | 5.0 | 50 | 0.4241 | 0.4729 |
| 0.4169 | 6.0 | 60 | 0.4238 | 0.4729 |
| 0.4164 | 7.0 | 70 | 0.4233 | 0.4729 |
| 0.4151 | 8.0 | 80 | 0.4233 | 0.4729 |
| 0.4109 | 9.0 | 90 | 0.4236 | 0.4729 |
| 0.3894 | 10.0 | 100 | 0.4484 | 0.4477 |
| 0.3551 | 11.0 | 110 | 0.4821 | 0.4585 |
| 0.3256 | 12.0 | 120 | 0.4913 | 0.4477 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
be0d196f34af8d554b0d9c6ba14f9066
|
juliensimon/distilbert-imdb-mlflow
|
juliensimon
|
distilbert
| 190 | 34 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['imdb']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,016 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-imdb-mlflow
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the imdb dataset.
MLflow logs are included. To visualize them, just clone the repo and run:
```
mlflow ui
```
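As a rough, generic sketch (not the exact training script behind this checkpoint), MLflow tracking is enabled by listing `mlflow` in the `report_to` argument of `TrainingArguments`; the hyperparameter values mirror the ones reported below:

```python
from transformers import TrainingArguments

# Generic sketch: the Trainer logs runs to MLflow when "mlflow" is in report_to
# and the mlflow package is installed.
training_args = TrainingArguments(
    output_dir="distilbert-imdb-mlflow",
    learning_rate=5e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=1,
    report_to=["mlflow"],
)
```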
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.10.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|
5c17e6ce4678345315e6c93a6efb062f
|
Xuandong/HPD-MiniLM-F128
|
Xuandong
|
bert
| 14 | 0 |
transformers
| 0 |
feature-extraction
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 3,092 | false |
# HPD-MiniLM-F128
This repository contains the pre-trained models for our paper [Compressing Sentence Representation for Semantic Retrieval via Homomorphic Projective Distillation](https://arxiv.org/abs/2203.07687). The sentence embedding model contains only 23M parameters and the model size is only 87MB.
## Overview
We propose **H**omomorphic **P**rojective **D**istillation (HPD) to learn compressed sentence embeddings. Our method augments a small Transformer encoder model with learnable projection layers to produce compact representations while mimicking a large pre-trained language model to retain the sentence representation quality.
## Details
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 128 dimensional dense vector space and can be used for tasks like clustering or semantic search.
The teacher model is [`princeton-nlp/sup-simcse-roberta-large`](https://huggingface.co/princeton-nlp/sup-simcse-bert-base-uncased) and the student model is [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased).
## Usage
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
After installing the package, you can simply load our model
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('Xuandong/HPD-MiniLM-F128')
```
Then you can use our model for **encoding sentences into embeddings**
```python
sentences = ['He plays guitar.', 'A street vendor is outside.']
sentence_embeddings = model.encode(sentences)
for sentence, embedding in zip(sentences, sentence_embeddings):
    print("Sentence:", sentence)
    print("Embedding:", embedding)
    print("")
```
## Evaluation Results
We evaluate our model on semantic textual similarity (STS) tasks. The results are:
| STS12 | STS13 | STS14 | STS15 | STS16 | STS-B | SICK-R | Avg. |
|-------|-------|-------|-------|-------|--------------|-----------------|-------|
| 74.94 | 84.52 | 80.25 | 84.87 | 81.90 | 84.98 | 81.15 | 81.80 |
## Training
Please refer to the github repo (https://github.com/XuandongZhao/HPD) for the details about the training.
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 384, 'out_features': 128, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citation
Please cite our paper if you use HPD in your work:
```bibtex
@article{zhao2022compressing,
title={Compressing Sentence Representation for Semantic Retrieval via Homomorphic Projective Distillation},
author={Zhao, Xuandong and Yu, Zhiguo and Wu, Ming and Li, Lei},
journal={arXiv preprint arXiv:2203.07687},
year={2022}
}
```
|
180e815849356e0912d246d9e3e529d3
|
LawalAfeez/emotion_detection
|
LawalAfeez
|
distilbert
| 8 | 3 |
transformers
| 0 |
text-classification
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 963 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# emotion_detection
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
cfdda25c765df23c4c720360ef823c2a
|
Helsinki-NLP/opus-mt-zne-sv
|
Helsinki-NLP
|
marian
| 10 | 8 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 | false |
### opus-mt-zne-sv
* source languages: zne
* target languages: sv
* OPUS readme: [zne-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/zne-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/zne-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/zne-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/zne-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.zne.sv | 25.2 | 0.425 |
|
5c3758342e1c13d06fa999ef5c9f7743
|
auro/whisper-cli-small-or
|
auro
|
whisper
| 31 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['or']
|
['mozilla-foundation/common_voice_11_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['whisper-event', 'generated_from_trainer']
| true | true | true | 1,551 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Odia
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 Odia (`or`) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4245
- Wer: 27.0240
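A hedged usage sketch (not part of the original card): transcription through the `automatic-speech-recognition` pipeline; the audio path is a placeholder for a local recording:

```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint for Odia transcription.
asr = pipeline("automatic-speech-recognition", model="auro/whisper-cli-small-or")

# Placeholder path to a local audio file.
result = asr("sample_odia_audio.wav")
print(result["text"])
```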
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0021 | 49.0 | 1000 | 0.4245 | 27.0240 |
| 0.0001 | 99.0 | 2000 | 0.7338 | 28.1241 |
| 0.0 | 149.0 | 3000 | 0.8594 | 28.6601 |
| 0.0 | 199.0 | 4000 | 0.9103 | 28.3498 |
| 0.0 | 249.0 | 5000 | 0.9329 | 28.2934 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0
- Datasets 2.7.1
- Tokenizers 0.13.2
|
ad8707774ee16223135c1f475020f673
|
theojolliffe/bart-paraphrase-v8-e1-rev
|
theojolliffe
|
bart
| 12 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,460 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-paraphrase-v8-e1-rev
This model is a fine-tuned version of [eugenesiow/bart-paraphrase](https://huggingface.co/eugenesiow/bart-paraphrase) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2811
- Rouge1: 59.7973
- Rouge2: 54.3079
- Rougel: 56.5768
- Rougelsum: 59.4379
- Gen Len: 19.8108
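A hedged usage sketch (not part of the original card): since the base model is a paraphrasing BART, the checkpoint can be driven through the `text2text-generation` pipeline; the input sentence is a placeholder:

```python
from transformers import pipeline

# Load the fine-tuned BART checkpoint for paraphrase-style generation.
paraphraser = pipeline("text2text-generation", model="theojolliffe/bart-paraphrase-v8-e1-rev")

text = "The meeting was postponed because several attendees were unavailable."
print(paraphraser(text, max_length=60)[0]["generated_text"])
```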
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.1123 | 1.0 | 28370 | 0.2811 | 59.7973 | 54.3079 | 56.5768 | 59.4379 | 19.8108 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
f78e2ff1fe5847115f011b570f27f2b0
|
vaibhav9/roberta-qa
|
vaibhav9
|
roberta
| 15 | 3 |
transformers
| 0 |
question-answering
| true | false | false |
cc-by-4.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,262 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-qa
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7423
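A hedged usage sketch (not part of the original card): extractive question answering through the pipeline API; the question and context are placeholders:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for extractive question answering.
qa = pipeline("question-answering", model="vaibhav9/roberta-qa")

result = qa(
    question="Which base model was fine-tuned?",
    context="This model is a fine-tuned version of deepset/roberta-base-squad2.",
)
print(result["answer"], result["score"])
```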
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.7278 | 1.0 | 3787 | 0.6452 |
| 0.4734 | 2.0 | 7574 | 0.6819 |
| 0.3543 | 3.0 | 11361 | 0.7423 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.11.0+cu102
- Datasets 2.8.0
- Tokenizers 0.13.2
|
df65ec4735cabce49ddcee7744dcfb4e
|
digio/Twitter4SSE
|
digio
|
roberta
| 9 | 65 |
transformers
| 1 |
sentence-similarity
| true | false | false |
apache-2.0
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Pytorch', 'Sentence Transformers', 'Transformers']
| false | true | true | 1,828 | false |
# Twitter4SSE
This model maps texts to 768 dimensional dense embeddings that encode semantic similarity.
It was trained with Multiple Negatives Ranking Loss (MNRL) on a Twitter dataset.
It was initialized from [BERTweet](https://huggingface.co/vinai/bertweet-base) and trained with [Sentence-transformers](https://www.sbert.net/).
## Usage
The model is easiest to use with the sentence-transformers library:
```
pip install -U sentence-transformers
```
```
from sentence_transformers import SentenceTransformer
sentences = ["This is the first tweet", "This is the second tweet"]
model = SentenceTransformer('digio/Twitter4SSE')
embeddings = model.encode(sentences)
print(embeddings)
```
Without the sentence-transformers library, please refer to [this repository](https://huggingface.co/sentence-transformers) for detailed instructions on how to use Sentence Transformers on Hugging Face.
## Citing & Authors
The official paper [Exploiting Twitter as Source of Large Corpora of Weakly Similar Pairs for Semantic Sentence Embeddings](https://arxiv.org/abs/2110.02030) will be presented at EMNLP 2021. Further details will be available soon.
```
@inproceedings{di-giovanni-brambilla-2021-exploiting,
title = "Exploiting {T}witter as Source of Large Corpora of Weakly Similar Pairs for Semantic Sentence Embeddings",
author = "Di Giovanni, Marco and
Brambilla, Marco",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.780",
pages = "9902--9910",
}
```
The official code is available on [GitHub](https://github.com/marco-digio/Twitter4SSE)
|
596a5d6eb14946219b8e7595ba9bae03
|
roshan151/Model_output
|
roshan151
|
bert
| 8 | 2 |
transformers
| 0 |
fill-mask
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,681 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# roshan151/Model_output
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.9849
- Validation Loss: 2.8623
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -82, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 100, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.1673 | 2.8445 | 0 |
| 2.9770 | 2.8557 | 1 |
| 3.0018 | 2.8612 | 2 |
| 2.9625 | 2.8496 | 3 |
| 2.9849 | 2.8623 | 4 |
### Framework versions
- Transformers 4.21.1
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
|
efc24a288ce7b6b06cdbd62b58d37125
|
MultiBertGunjanPatrick/multiberts-seed-0-1100k
|
MultiBertGunjanPatrick
|
bert
| 7 | 2 |
transformers
| 0 | null | true | false | false |
apache-2.0
|
['en']
|
['bookcorpus', 'wikipedia']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['exbert', 'multiberts', 'multiberts-seed-0']
| false | true | true | 6,487 | false |
# MultiBERTs Seed 0 Checkpoint 1100k (uncased)
MultiBERTs (pretrained BERT) model for English, seed 0, intermediate checkpoint at 1100k steps, trained with a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multiberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they
were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of
publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, each model
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-1100k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-1100k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
6d1ba38ae19957f34dbf8dd32862192d
|
pig4431/Sentiment140_XLNET_5E
|
pig4431
|
xlnet
| 10 | 13 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null |
['sentiment140']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,877 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Sentiment140_XLNET_5E
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the sentiment140 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3797
- Accuracy: 0.84
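A hedged usage sketch (not part of the original card): scoring tweet sentiment through the `text-classification` pipeline; the example tweet is a placeholder and the label names follow whatever mapping was saved with the checkpoint:

```python
from transformers import pipeline

# Load the fine-tuned XLNet checkpoint for tweet sentiment classification.
classifier = pipeline("text-classification", model="pig4431/Sentiment140_XLNET_5E")

print(classifier("I really enjoyed the keynote this morning!"))
```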
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6687 | 0.08 | 50 | 0.5194 | 0.76 |
| 0.5754 | 0.16 | 100 | 0.4500 | 0.7867 |
| 0.5338 | 0.24 | 150 | 0.3725 | 0.8333 |
| 0.5065 | 0.32 | 200 | 0.4093 | 0.8133 |
| 0.4552 | 0.4 | 250 | 0.3910 | 0.8267 |
| 0.5352 | 0.48 | 300 | 0.3888 | 0.82 |
| 0.415 | 0.56 | 350 | 0.3887 | 0.8267 |
| 0.4716 | 0.64 | 400 | 0.3888 | 0.84 |
| 0.4565 | 0.72 | 450 | 0.3619 | 0.84 |
| 0.4447 | 0.8 | 500 | 0.3758 | 0.8333 |
| 0.4407 | 0.88 | 550 | 0.3664 | 0.8133 |
| 0.46 | 0.96 | 600 | 0.3797 | 0.84 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.3.2
- Tokenizers 0.13.1
|
64c51aaace60e3618b1b01021cf432ca
|
DL82/denlip82
|
DL82
| null | 47 | 1 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-to-image']
| false | true | true | 3,178 | false |
### denlip82 Dreambooth model trained by DL82 with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
Sample pictures of:
denlip82 (use that on your prompt)

|
51e718f13a035b9134982385ba837a60
|
trnt/twitter_emotions
|
trnt
|
bert
| 12 | 9 |
transformers
| 1 |
text-classification
| true | false | false |
apache-2.0
| null |
['emotion']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,405 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter_emotions
This model is a fine-tuned version of [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1647
- Accuracy: 0.9375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2486 | 1.0 | 2000 | 0.2115 | 0.931 |
| 0.135 | 2.0 | 4000 | 0.1725 | 0.936 |
| 0.1041 | 3.0 | 6000 | 0.1647 | 0.9375 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
17e2a1fc47e5f5f440945e67c33ded98
|
sgugger/marian-finetuned-kde4-en-to-fr
|
sgugger
|
marian
| 16 | 5 |
transformers
| 0 |
translation
| true | false | false |
apache-2.0
| null |
['kde4']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation', 'generated_from_trainer']
| true | true | true | 1,104 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8666
- Bleu: 53.2503
- Gen Len: 14.7005
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.1+cu111
- Datasets 1.12.2.dev0
- Tokenizers 0.10.3
|
b9162b00ff26acbd78162e04c9c2816f
|
nielsr/segformer-finetuned-sidewalk-10k-steps
|
nielsr
|
segformer
| 23 | 3 |
transformers
| 1 |
image-segmentation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['image-segmentation', 'vision', 'generated_from_trainer']
| true | true | true | 185,646 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-finetuned-sidewalk-50-epochs
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the segments/sidewalk-semantic dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6350
- Mean Iou: 0.3022
- Mean Accuracy: 0.3724
- Overall Accuracy: 0.8117
- Accuracy Unlabeled: nan
- Accuracy Flat-road: 0.8240
- Accuracy Flat-sidewalk: 0.8308
- Accuracy Flat-crosswalk: 0.7789
- Accuracy Flat-cyclinglane: 0.9052
- Accuracy Flat-parkingdriveway: 0.3152
- Accuracy Flat-railtrack: nan
- Accuracy Flat-curb: 0.4703
- Accuracy Human-person: 0.6444
- Accuracy Human-rider: 0.0
- Accuracy Vehicle-car: 0.9424
- Accuracy Vehicle-truck: 0.0
- Accuracy Vehicle-bus: 0.0
- Accuracy Vehicle-tramtrain: 0.0
- Accuracy Vehicle-motorcycle: 0.0
- Accuracy Vehicle-bicycle: 0.7116
- Accuracy Vehicle-caravan: 0.0
- Accuracy Vehicle-cartrailer: 0.0
- Accuracy Construction-building: 0.8716
- Accuracy Construction-door: 0.0
- Accuracy Construction-wall: 0.4736
- Accuracy Construction-fenceguardrail: 0.5408
- Accuracy Construction-bridge: 0.0
- Accuracy Construction-tunnel: nan
- Accuracy Construction-stairs: 0.0048
- Accuracy Object-pole: 0.4202
- Accuracy Object-trafficsign: 0.0754
- Accuracy Object-trafficlight: 0.0
- Accuracy Nature-vegetation: 0.9437
- Accuracy Nature-terrain: 0.8196
- Accuracy Sky: 0.9525
- Accuracy Void-ground: 0.0
- Accuracy Void-dynamic: 0.1041
- Accuracy Void-static: 0.2872
- Accuracy Void-unclear: 0.0
- Iou Unlabeled: nan
- Iou Flat-road: 0.7413
- Iou Flat-sidewalk: 0.7520
- Iou Flat-crosswalk: 0.7629
- Iou Flat-cyclinglane: 0.4453
- Iou Flat-parkingdriveway: 0.2976
- Iou Flat-railtrack: nan
- Iou Flat-curb: 0.3701
- Iou Human-person: 0.4953
- Iou Human-rider: 0.0
- Iou Vehicle-car: 0.7962
- Iou Vehicle-truck: 0.0
- Iou Vehicle-bus: 0.0
- Iou Vehicle-tramtrain: 0.0
- Iou Vehicle-motorcycle: 0.0
- Iou Vehicle-bicycle: 0.4152
- Iou Vehicle-caravan: 0.0
- Iou Vehicle-cartrailer: 0.0
- Iou Construction-building: 0.6712
- Iou Construction-door: 0.0
- Iou Construction-wall: 0.3749
- Iou Construction-fenceguardrail: 0.4613
- Iou Construction-bridge: 0.0
- Iou Construction-tunnel: nan
- Iou Construction-stairs: 0.0048
- Iou Object-pole: 0.2337
- Iou Object-trafficsign: 0.0753
- Iou Object-trafficlight: 0.0
- Iou Nature-vegetation: 0.8324
- Iou Nature-terrain: 0.7277
- Iou Sky: 0.9234
- Iou Void-ground: 0.0
- Iou Void-dynamic: 0.0913
- Iou Void-static: 0.1997
- Iou Void-unclear: 0.0
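A hedged usage sketch (not part of the original card): running the checkpoint on a single image with the SegFormer classes from Transformers; the image path is a placeholder:

```python
import torch
from PIL import Image
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation

# Load the fine-tuned checkpoint and its preprocessing configuration.
model_id = "nielsr/segformer-finetuned-sidewalk-10k-steps"
feature_extractor = SegformerFeatureExtractor.from_pretrained(model_id)
model = SegformerForSemanticSegmentation.from_pretrained(model_id)

image = Image.open("sidewalk_scene.jpg")  # placeholder input image
inputs = feature_extractor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (batch, num_labels, height/4, width/4)

# Per-pixel class indices at the reduced output resolution.
predicted = logits.argmax(dim=1)[0]
```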
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Flat-road | Accuracy Flat-sidewalk | Accuracy Flat-crosswalk | Accuracy Flat-cyclinglane | Accuracy Flat-parkingdriveway | Accuracy Flat-railtrack | Accuracy Flat-curb | Accuracy Human-person | Accuracy Human-rider | Accuracy Vehicle-car | Accuracy Vehicle-truck | Accuracy Vehicle-bus | Accuracy Vehicle-tramtrain | Accuracy Vehicle-motorcycle | Accuracy Vehicle-bicycle | Accuracy Vehicle-caravan | Accuracy Vehicle-cartrailer | Accuracy Construction-building | Accuracy Construction-door | Accuracy Construction-wall | Accuracy Construction-fenceguardrail | Accuracy Construction-bridge | Accuracy Construction-tunnel | Accuracy Construction-stairs | Accuracy Object-pole | Accuracy Object-trafficsign | Accuracy Object-trafficlight | Accuracy Nature-vegetation | Accuracy Nature-terrain | Accuracy Sky | Accuracy Void-ground | Accuracy Void-dynamic | Accuracy Void-static | Accuracy Void-unclear | Iou Unlabeled | Iou Flat-road | Iou Flat-sidewalk | Iou Flat-crosswalk | Iou Flat-cyclinglane | Iou Flat-parkingdriveway | Iou Flat-railtrack | Iou Flat-curb | Iou Human-person | Iou Human-rider | Iou Vehicle-car | Iou Vehicle-truck | Iou Vehicle-bus | Iou Vehicle-tramtrain | Iou Vehicle-motorcycle | Iou Vehicle-bicycle | Iou Vehicle-caravan | Iou Vehicle-cartrailer | Iou Construction-building | Iou Construction-door | Iou Construction-wall | Iou Construction-fenceguardrail | Iou Construction-bridge | Iou Construction-tunnel | Iou Construction-stairs | Iou Object-pole | Iou Object-trafficsign | Iou Object-trafficlight | Iou Nature-vegetation | Iou Nature-terrain | Iou Sky | Iou Void-ground | Iou Void-dynamic | Iou Void-static | Iou Void-unclear |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:------------------:|:----------------------:|:-----------------------:|:-------------------------:|:-----------------------------:|:-----------------------:|:------------------:|:---------------------:|:--------------------:|:--------------------:|:----------------------:|:--------------------:|:--------------------------:|:---------------------------:|:------------------------:|:------------------------:|:---------------------------:|:------------------------------:|:--------------------------:|:--------------------------:|:------------------------------------:|:----------------------------:|:----------------------------:|:----------------------------:|:--------------------:|:---------------------------:|:----------------------------:|:--------------------------:|:-----------------------:|:------------:|:--------------------:|:---------------------:|:--------------------:|:---------------------:|:-------------:|:-------------:|:-----------------:|:------------------:|:--------------------:|:------------------------:|:------------------:|:-------------:|:----------------:|:---------------:|:---------------:|:-----------------:|:---------------:|:---------------------:|:----------------------:|:-------------------:|:-------------------:|:----------------------:|:-------------------------:|:---------------------:|:---------------------:|:-------------------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:---------------:|:----------------------:|:-----------------------:|:---------------------:|:------------------:|:-------:|:---------------:|:----------------:|:---------------:|:----------------:|
| 2.4745 | 1.85 | 100 | 1.7861 | 0.1056 | 0.1555 | 0.6397 | nan | 0.2287 | 0.9278 | 0.0 | 0.1406 | 0.0032 | nan | 0.0 | 0.0 | 0.0 | 0.7757 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8764 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.8387 | 0.8794 | 0.3057 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.1931 | 0.6432 | 0.0 | 0.1380 | 0.0031 | nan | 0.0 | 0.0 | 0.0 | 0.5312 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4482 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.6323 | 0.4860 | 0.3053 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.7294 | 3.7 | 200 | 1.3129 | 0.1517 | 0.1996 | 0.7410 | nan | 0.7928 | 0.8830 | 0.0 | 0.6053 | 0.0089 | nan | 0.0 | 0.0 | 0.0 | 0.7837 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8530 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9138 | 0.7742 | 0.7740 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5519 | 0.7788 | 0.0 | 0.5131 | 0.0088 | nan | 0.0 | 0.0 | 0.0 | 0.5804 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5005 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.6747 | 0.5247 | 0.7209 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4479 | 5.56 | 300 | 1.1309 | 0.1608 | 0.2113 | 0.7588 | nan | 0.7973 | 0.9008 | 0.0 | 0.7721 | 0.0269 | nan | 0.0 | 0.0 | 0.0 | 0.8744 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8581 | 0.0 | 0.0007 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.8622 | 0.8707 | 0.7985 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5861 | 0.7816 | 0.0 | 0.5877 | 0.0261 | nan | 0.0 | 0.0 | 0.0 | 0.6119 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5582 | 0.0 | 0.0007 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7024 | 0.5206 | 0.7706 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.2348 | 7.41 | 400 | 0.9644 | 0.1707 | 0.2170 | 0.7736 | nan | 0.8125 | 0.9218 | 0.0 | 0.7596 | 0.1081 | nan | 0.0000 | 0.0 | 0.0 | 0.9080 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8280 | 0.0 | 0.0334 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.8856 | 0.8260 | 0.8612 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6003 | 0.7937 | 0.0 | 0.6538 | 0.0997 | nan | 0.0000 | 0.0 | 0.0 | 0.6189 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5731 | 0.0 | 0.0330 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7147 | 0.5601 | 0.8139 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.0762 | 9.26 | 500 | 0.8819 | 0.1722 | 0.2159 | 0.7748 | nan | 0.7512 | 0.9353 | 0.0 | 0.7565 | 0.1204 | nan | 0.0016 | 0.0 | 0.0 | 0.9115 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8689 | 0.0 | 0.0565 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9098 | 0.7664 | 0.8303 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5993 | 0.7850 | 0.0 | 0.6536 | 0.1052 | nan | 0.0016 | 0.0 | 0.0 | 0.6377 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5767 | 0.0 | 0.0547 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7285 | 0.5709 | 0.7984 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.9933 | 11.11 | 600 | 0.8347 | 0.1814 | 0.2263 | 0.7822 | nan | 0.8064 | 0.9111 | 0.0 | 0.7880 | 0.1443 | nan | 0.0436 | 0.0 | 0.0 | 0.8944 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8970 | 0.0 | 0.1914 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9053 | 0.8080 | 0.8526 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6088 | 0.8045 | 0.0 | 0.6845 | 0.1255 | nan | 0.0419 | 0.0 | 0.0 | 0.6594 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5548 | 0.0 | 0.1585 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7440 | 0.6068 | 0.8176 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.9424 | 12.96 | 700 | 0.8428 | 0.1824 | 0.2271 | 0.7704 | nan | 0.6767 | 0.9270 | 0.0475 | 0.7655 | 0.1322 | nan | 0.2020 | 0.0189 | 0.0 | 0.8410 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.9205 | 0.0 | 0.2568 | 0.0 | 0.0 | nan | 0.0 | 0.0023 | 0.0 | 0.0 | 0.8994 | 0.7347 | 0.8413 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5838 | 0.7914 | 0.0475 | 0.6091 | 0.1095 | nan | 0.1597 | 0.0185 | 0.0 | 0.6706 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5131 | 0.0 | 0.1872 | 0.0 | 0.0 | nan | 0.0 | 0.0023 | 0.0 | 0.0 | 0.7525 | 0.5837 | 0.8077 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.8673 | 14.81 | 800 | 0.7934 | 0.2089 | 0.2509 | 0.7818 | nan | 0.6854 | 0.9394 | 0.7072 | 0.7240 | 0.1504 | nan | 0.2013 | 0.0186 | 0.0 | 0.9071 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.9037 | 0.0 | 0.3110 | 0.0 | 0.0 | nan | 0.0 | 0.0108 | 0.0 | 0.0 | 0.8990 | 0.7171 | 0.8513 | 0.0 | 0.0 | 0.0013 | 0.0 | nan | 0.5914 | 0.7755 | 0.6900 | 0.6673 | 0.1340 | nan | 0.1542 | 0.0183 | 0.0 | 0.6792 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5639 | 0.0 | 0.2172 | 0.0 | 0.0 | nan | 0.0 | 0.0100 | 0.0 | 0.0 | 0.7615 | 0.6014 | 0.8192 | 0.0 | 0.0 | 0.0013 | 0.0 |
| 0.8126 | 16.67 | 900 | 0.7484 | 0.2268 | 0.2784 | 0.7940 | nan | 0.6791 | 0.9397 | 0.7812 | 0.8009 | 0.1532 | nan | 0.3244 | 0.2962 | 0.0 | 0.9018 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8567 | 0.0 | 0.4772 | 0.0002 | 0.0 | nan | 0.0 | 0.0834 | 0.0 | 0.0 | 0.8992 | 0.8280 | 0.8837 | 0.0 | 0.0 | 0.0032 | 0.0 | nan | 0.6303 | 0.7968 | 0.7079 | 0.6095 | 0.1396 | nan | 0.2196 | 0.2638 | 0.0 | 0.7100 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6016 | 0.0 | 0.2860 | 0.0002 | 0.0 | nan | 0.0 | 0.0570 | 0.0 | 0.0 | 0.7678 | 0.6211 | 0.8416 | 0.0 | 0.0 | 0.0032 | 0.0 |
| 0.7989 | 18.52 | 1000 | 0.7241 | 0.2279 | 0.2803 | 0.8018 | nan | 0.7224 | 0.9402 | 0.7875 | 0.8234 | 0.1793 | nan | 0.3763 | 0.1974 | 0.0 | 0.9259 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8911 | 0.0 | 0.3994 | 0.0029 | 0.0 | nan | 0.0 | 0.0758 | 0.0 | 0.0 | 0.8619 | 0.8774 | 0.8854 | 0.0 | 0.0 | 0.0225 | 0.0 | nan | 0.6579 | 0.8292 | 0.7198 | 0.6924 | 0.1660 | nan | 0.2392 | 0.1794 | 0.0 | 0.6748 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5766 | 0.0 | 0.2654 | 0.0029 | 0.0 | nan | 0.0 | 0.0636 | 0.0 | 0.0 | 0.7582 | 0.5994 | 0.8455 | 0.0 | 0.0 | 0.0220 | 0.0 |
| 0.7429 | 20.37 | 1100 | 0.7321 | 0.2276 | 0.2862 | 0.7876 | nan | 0.8321 | 0.8491 | 0.7958 | 0.8572 | 0.2216 | nan | 0.3030 | 0.2864 | 0.0 | 0.9456 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8668 | 0.0 | 0.3757 | 0.0040 | 0.0 | nan | 0.0 | 0.1140 | 0.0 | 0.0 | 0.8839 | 0.8499 | 0.9228 | 0.0 | 0.0 | 0.0505 | 0.0 | nan | 0.6678 | 0.7848 | 0.7342 | 0.5048 | 0.1995 | nan | 0.2316 | 0.2463 | 0.0 | 0.6379 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5916 | 0.0 | 0.2668 | 0.0040 | 0.0 | nan | 0.0 | 0.0820 | 0.0 | 0.0 | 0.7827 | 0.6428 | 0.8583 | 0.0 | 0.0 | 0.0465 | 0.0 |
| 0.7131 | 22.22 | 1200 | 0.7231 | 0.2377 | 0.2995 | 0.7870 | nan | 0.8306 | 0.8458 | 0.7952 | 0.8505 | 0.2218 | nan | 0.3614 | 0.5001 | 0.0 | 0.9504 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7598 | 0.0 | 0.5317 | 0.0405 | 0.0 | nan | 0.0 | 0.1381 | 0.0 | 0.0 | 0.9284 | 0.7938 | 0.9110 | 0.0 | 0.0 | 0.1262 | 0.0 | nan | 0.7038 | 0.7740 | 0.7537 | 0.4538 | 0.1996 | nan | 0.2521 | 0.3853 | 0.0 | 0.6576 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6157 | 0.0 | 0.3046 | 0.0404 | 0.0 | nan | 0.0 | 0.0921 | 0.0 | 0.0 | 0.7846 | 0.6383 | 0.8588 | 0.0 | 0.0 | 0.0911 | 0.0 |
| 0.6919 | 24.07 | 1300 | 0.6775 | 0.2361 | 0.2885 | 0.8013 | nan | 0.7728 | 0.9073 | 0.8010 | 0.8366 | 0.1547 | nan | 0.3070 | 0.3428 | 0.0 | 0.9272 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8568 | 0.0 | 0.5009 | 0.0736 | 0.0 | nan | 0.0 | 0.0975 | 0.0 | 0.0 | 0.9297 | 0.7567 | 0.8978 | 0.0 | 0.0 | 0.0682 | 0.0 | nan | 0.6564 | 0.7929 | 0.6932 | 0.6396 | 0.1438 | nan | 0.2385 | 0.2888 | 0.0 | 0.6807 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6085 | 0.0 | 0.3114 | 0.0729 | 0.0 | nan | 0.0 | 0.0803 | 0.0 | 0.0 | 0.7857 | 0.6403 | 0.8601 | 0.0 | 0.0 | 0.0610 | 0.0 |
| 0.68 | 25.93 | 1400 | 0.6321 | 0.2575 | 0.3109 | 0.8181 | nan | 0.7851 | 0.9362 | 0.8041 | 0.8438 | 0.1694 | nan | 0.3956 | 0.5626 | 0.0 | 0.9306 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8313 | 0.0 | 0.5073 | 0.2728 | 0.0 | nan | 0.0 | 0.1741 | 0.0 | 0.0 | 0.9221 | 0.7899 | 0.9071 | 0.0 | 0.0 | 0.1157 | 0.0 | nan | 0.6781 | 0.8336 | 0.7386 | 0.7047 | 0.1564 | nan | 0.2789 | 0.4291 | 0.0 | 0.6934 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6062 | 0.0 | 0.3305 | 0.2579 | 0.0 | nan | 0.0 | 0.1228 | 0.0 | 0.0 | 0.7952 | 0.6651 | 0.8631 | 0.0 | 0.0 | 0.0865 | 0.0 |
| 0.6644 | 27.78 | 1500 | 0.6568 | 0.2555 | 0.3132 | 0.8074 | nan | 0.7687 | 0.9014 | 0.7631 | 0.8302 | 0.1869 | nan | 0.4841 | 0.4880 | 0.0 | 0.9294 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.8139 | 0.0 | 0.5482 | 0.3042 | 0.0 | nan | 0.0 | 0.1974 | 0.0 | 0.0 | 0.9225 | 0.8543 | 0.9042 | 0.0 | 0.0 | 0.1259 | 0.0 | nan | 0.6723 | 0.8030 | 0.7443 | 0.5873 | 0.1742 | nan | 0.3013 | 0.3813 | 0.0 | 0.7117 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.6159 | 0.0 | 0.3289 | 0.2810 | 0.0 | nan | 0.0 | 0.1295 | 0.0 | 0.0 | 0.8015 | 0.6848 | 0.8665 | 0.0 | 0.0 | 0.0931 | 0.0 |
| 0.6153 | 29.63 | 1600 | 0.6157 | 0.2586 | 0.3131 | 0.8188 | nan | 0.8000 | 0.9242 | 0.7980 | 0.8445 | 0.1758 | nan | 0.4143 | 0.6256 | 0.0 | 0.9155 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0007 | 0.0 | 0.0 | 0.8792 | 0.0 | 0.4465 | 0.2182 | 0.0 | nan | 0.0 | 0.1970 | 0.0 | 0.0 | 0.9111 | 0.8171 | 0.9368 | 0.0 | 0.0 | 0.1136 | 0.0 | nan | 0.6844 | 0.8212 | 0.7565 | 0.6537 | 0.1636 | nan | 0.2857 | 0.4354 | 0.0 | 0.7222 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0007 | 0.0 | 0.0 | 0.6274 | 0.0 | 0.3217 | 0.2147 | 0.0 | nan | 0.0 | 0.1313 | 0.0 | 0.0 | 0.8082 | 0.6809 | 0.8737 | 0.0 | 0.0 | 0.0926 | 0.0 |
| 0.6154 | 31.48 | 1700 | 0.6397 | 0.2621 | 0.3204 | 0.8117 | nan | 0.8357 | 0.8840 | 0.7908 | 0.8465 | 0.2590 | nan | 0.4050 | 0.5401 | 0.0 | 0.9393 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0105 | 0.0 | 0.0 | 0.8169 | 0.0 | 0.4733 | 0.3188 | 0.0 | nan | 0.0 | 0.2505 | 0.0 | 0.0 | 0.9181 | 0.8473 | 0.9287 | 0.0 | 0.0 | 0.1890 | 0.0 | nan | 0.6774 | 0.8042 | 0.7524 | 0.5662 | 0.2300 | nan | 0.2971 | 0.4050 | 0.0 | 0.6970 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0105 | 0.0 | 0.0 | 0.6489 | 0.0 | 0.3454 | 0.3058 | 0.0 | nan | 0.0 | 0.1441 | 0.0 | 0.0 | 0.8074 | 0.6913 | 0.8820 | 0.0 | 0.0 | 0.1224 | 0.0 |
| 0.6305 | 33.33 | 1800 | 0.6131 | 0.2641 | 0.3212 | 0.8194 | nan | 0.8171 | 0.8984 | 0.8212 | 0.8462 | 0.2582 | nan | 0.5051 | 0.5504 | 0.0 | 0.9421 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0221 | 0.0 | 0.0 | 0.8777 | 0.0 | 0.3528 | 0.3169 | 0.0 | nan | 0.0 | 0.2249 | 0.0 | 0.0 | 0.9203 | 0.8499 | 0.9175 | 0.0 | 0.0 | 0.1587 | 0.0 | nan | 0.7209 | 0.8195 | 0.7546 | 0.6166 | 0.2267 | nan | 0.3408 | 0.4000 | 0.0 | 0.6906 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0221 | 0.0 | 0.0 | 0.6055 | 0.0 | 0.2823 | 0.3044 | 0.0 | nan | 0.0 | 0.1545 | 0.0 | 0.0 | 0.8124 | 0.6994 | 0.8799 | 0.0 | 0.0 | 0.1204 | 0.0 |
| 0.6083 | 35.19 | 1900 | 0.6224 | 0.2646 | 0.3182 | 0.8171 | nan | 0.7473 | 0.9297 | 0.7826 | 0.8269 | 0.2162 | nan | 0.4556 | 0.4982 | 0.0 | 0.9169 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0865 | 0.0 | 0.0 | 0.9031 | 0.0 | 0.3618 | 0.3583 | 0.0 | nan | 0.0 | 0.2603 | 0.0 | 0.0 | 0.8966 | 0.8828 | 0.9016 | 0.0 | 0.0 | 0.1587 | 0.0 | nan | 0.6824 | 0.8210 | 0.7645 | 0.5950 | 0.2019 | nan | 0.3166 | 0.3895 | 0.0 | 0.7307 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0853 | 0.0 | 0.0 | 0.6063 | 0.0 | 0.2860 | 0.3200 | 0.0 | nan | 0.0 | 0.1659 | 0.0 | 0.0 | 0.8188 | 0.7017 | 0.8695 | 0.0 | 0.0 | 0.1113 | 0.0 |
| 0.5847 | 37.04 | 2000 | 0.5906 | 0.2713 | 0.3209 | 0.8281 | nan | 0.7374 | 0.9612 | 0.7764 | 0.8195 | 0.2033 | nan | 0.4219 | 0.4950 | 0.0 | 0.9339 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0960 | 0.0 | 0.0 | 0.8434 | 0.0 | 0.4552 | 0.4437 | 0.0 | nan | 0.0 | 0.2250 | 0.0 | 0.0 | 0.9315 | 0.8612 | 0.9071 | 0.0 | 0.0 | 0.1567 | 0.0 | nan | 0.6883 | 0.8311 | 0.7525 | 0.6838 | 0.1851 | nan | 0.3228 | 0.3780 | 0.0 | 0.7236 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0944 | 0.0 | 0.0 | 0.6338 | 0.0 | 0.3408 | 0.3853 | 0.0 | nan | 0.0 | 0.1586 | 0.0 | 0.0 | 0.8104 | 0.6978 | 0.8800 | 0.0 | 0.0 | 0.1162 | 0.0 |
| 0.5764 | 38.89 | 2100 | 0.6088 | 0.2752 | 0.3225 | 0.8255 | nan | 0.7525 | 0.9472 | 0.7709 | 0.8441 | 0.2134 | nan | 0.3932 | 0.5383 | 0.0 | 0.9030 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3470 | 0.0 | 0.0 | 0.9195 | 0.0 | 0.3310 | 0.3215 | 0.0 | nan | 0.0 | 0.2234 | 0.0 | 0.0 | 0.9289 | 0.7964 | 0.9280 | 0.0 | 0.0 | 0.1604 | 0.0 | nan | 0.6993 | 0.8276 | 0.7546 | 0.7234 | 0.1997 | nan | 0.3005 | 0.4222 | 0.0 | 0.7348 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3123 | 0.0 | 0.0 | 0.5918 | 0.0 | 0.2787 | 0.3037 | 0.0 | nan | 0.0 | 0.1585 | 0.0 | 0.0 | 0.8124 | 0.6781 | 0.8844 | 0.0 | 0.0 | 0.1247 | 0.0 |
| 0.5787 | 40.74 | 2200 | 0.5706 | 0.2824 | 0.3351 | 0.8347 | nan | 0.8178 | 0.9369 | 0.8003 | 0.8511 | 0.2352 | nan | 0.4838 | 0.5417 | 0.0 | 0.9025 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3689 | 0.0 | 0.0 | 0.8739 | 0.0 | 0.4493 | 0.4040 | 0.0 | nan | 0.0 | 0.2524 | 0.0 | 0.0 | 0.9422 | 0.8182 | 0.9183 | 0.0 | 0.0 | 0.1276 | 0.0 | nan | 0.7292 | 0.8432 | 0.7669 | 0.6897 | 0.2161 | nan | 0.3484 | 0.4230 | 0.0 | 0.7519 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3045 | 0.0 | 0.0 | 0.6407 | 0.0 | 0.3373 | 0.3491 | 0.0 | nan | 0.0 | 0.1557 | 0.0 | 0.0 | 0.8080 | 0.6803 | 0.8850 | 0.0 | 0.0 | 0.1068 | 0.0 |
| 0.5724 | 42.59 | 2300 | 0.7562 | 0.2740 | 0.3479 | 0.7662 | nan | 0.8734 | 0.7169 | 0.7809 | 0.8847 | 0.2838 | nan | 0.3742 | 0.6758 | 0.0 | 0.9339 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6048 | 0.0 | 0.0 | 0.8535 | 0.0 | 0.4435 | 0.4729 | 0.0 | nan | 0.0 | 0.2817 | 0.0 | 0.0 | 0.9149 | 0.8765 | 0.9329 | 0.0 | 0.0 | 0.2292 | 0.0 | nan | 0.7041 | 0.6683 | 0.7628 | 0.3371 | 0.2575 | nan | 0.2878 | 0.4639 | 0.0 | 0.7454 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4190 | 0.0 | 0.0 | 0.6387 | 0.0 | 0.3357 | 0.3997 | 0.0 | nan | 0.0 | 0.1776 | 0.0 | 0.0 | 0.8183 | 0.7106 | 0.8911 | 0.0 | 0.0 | 0.1516 | 0.0 |
| 0.556 | 44.44 | 2400 | 0.7350 | 0.2665 | 0.3366 | 0.7813 | nan | 0.7897 | 0.7888 | 0.8022 | 0.8878 | 0.2389 | nan | 0.4270 | 0.4859 | 0.0 | 0.9401 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4618 | 0.0 | 0.0 | 0.8866 | 0.0 | 0.3979 | 0.5050 | 0.0 | nan | 0.0 | 0.2580 | 0.0 | 0.0 | 0.9097 | 0.8627 | 0.9337 | 0.0 | 0.0 | 0.1948 | 0.0 | nan | 0.6902 | 0.7286 | 0.7779 | 0.3964 | 0.2231 | nan | 0.3011 | 0.3626 | 0.0 | 0.7078 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3485 | 0.0 | 0.0 | 0.6171 | 0.0 | 0.3044 | 0.3372 | 0.0 | nan | 0.0 | 0.1812 | 0.0 | 0.0 | 0.8195 | 0.7011 | 0.8947 | 0.0 | 0.0 | 0.1378 | 0.0 |
| 0.5599 | 46.3 | 2500 | 0.5949 | 0.2846 | 0.3464 | 0.8215 | nan | 0.7919 | 0.9145 | 0.7935 | 0.8679 | 0.2189 | nan | 0.3795 | 0.5589 | 0.0 | 0.9334 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5627 | 0.0 | 0.0 | 0.8536 | 0.0 | 0.4394 | 0.4730 | 0.0 | nan | 0.0 | 0.3260 | 0.0 | 0.0 | 0.9098 | 0.8344 | 0.9487 | 0.0 | 0.0 | 0.2801 | 0.0 | nan | 0.6901 | 0.8199 | 0.7749 | 0.5729 | 0.2084 | nan | 0.3034 | 0.4321 | 0.0 | 0.7422 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4230 | 0.0 | 0.0 | 0.6491 | 0.0 | 0.3237 | 0.3989 | 0.0 | nan | 0.0 | 0.1963 | 0.0 | 0.0 | 0.8232 | 0.7048 | 0.8949 | 0.0 | 0.0 | 0.1489 | 0.0 |
| 0.5368 | 48.15 | 2600 | 0.6125 | 0.2829 | 0.3502 | 0.8211 | nan | 0.7798 | 0.9034 | 0.7913 | 0.9079 | 0.2587 | nan | 0.3407 | 0.6423 | 0.0 | 0.9351 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6794 | 0.0 | 0.0 | 0.8554 | 0.0 | 0.3996 | 0.4884 | 0.0 | nan | 0.0 | 0.2870 | 0.0 | 0.0 | 0.9271 | 0.8698 | 0.9424 | 0.0 | 0.0 | 0.1992 | 0.0 | nan | 0.6878 | 0.8122 | 0.7578 | 0.5597 | 0.2427 | nan | 0.2680 | 0.4737 | 0.0 | 0.7517 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3649 | 0.0 | 0.0 | 0.6557 | 0.0 | 0.3130 | 0.4117 | 0.0 | nan | 0.0 | 0.1847 | 0.0 | 0.0 | 0.8236 | 0.7137 | 0.8969 | 0.0 | 0.0 | 0.1361 | 0.0 |
| 0.5391 | 50.0 | 2700 | 0.5993 | 0.2877 | 0.3507 | 0.8242 | nan | 0.8174 | 0.8948 | 0.8094 | 0.8896 | 0.2730 | nan | 0.4105 | 0.5570 | 0.0 | 0.9164 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5439 | 0.0 | 0.0 | 0.8772 | 0.0 | 0.5070 | 0.5443 | 0.0 | nan | 0.0 | 0.2691 | 0.0 | 0.0 | 0.9205 | 0.8660 | 0.8975 | 0.0 | 0.0 | 0.2294 | 0.0 | nan | 0.7059 | 0.8214 | 0.7578 | 0.5803 | 0.2537 | nan | 0.2892 | 0.4308 | 0.0 | 0.7548 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4363 | 0.0 | 0.0 | 0.6490 | 0.0 | 0.3579 | 0.4224 | 0.0 | nan | 0.0 | 0.1927 | 0.0 | 0.0 | 0.8239 | 0.7040 | 0.8748 | 0.0 | 0.0 | 0.1516 | 0.0 |
| 0.5041 | 51.85 | 2800 | 0.5912 | 0.2859 | 0.3493 | 0.8264 | nan | 0.7593 | 0.9248 | 0.8029 | 0.8780 | 0.2945 | nan | 0.3718 | 0.6308 | 0.0 | 0.9078 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6667 | 0.0 | 0.0 | 0.8945 | 0.0 | 0.3362 | 0.4834 | 0.0 | nan | 0.0 | 0.3167 | 0.0 | 0.0 | 0.9255 | 0.8641 | 0.9382 | 0.0 | 0.0 | 0.1836 | 0.0 | nan | 0.6993 | 0.8205 | 0.7232 | 0.5789 | 0.2712 | nan | 0.2852 | 0.4872 | 0.0 | 0.7747 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3825 | 0.0 | 0.0 | 0.6382 | 0.0 | 0.2862 | 0.4138 | 0.0 | nan | 0.0 | 0.2019 | 0.0 | 0.0 | 0.8284 | 0.7271 | 0.8984 | 0.0 | 0.0 | 0.1316 | 0.0 |
| 0.5007 | 53.7 | 2900 | 0.6220 | 0.2839 | 0.3577 | 0.8134 | nan | 0.7302 | 0.8903 | 0.8180 | 0.9098 | 0.3134 | nan | 0.3521 | 0.6870 | 0.0 | 0.9429 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7288 | 0.0 | 0.0 | 0.8340 | 0.0 | 0.5169 | 0.4700 | 0.0 | nan | 0.0 | 0.3105 | 0.0 | 0.0 | 0.9356 | 0.8318 | 0.9437 | 0.0 | 0.0003 | 0.2298 | 0.0 | nan | 0.6722 | 0.8034 | 0.7257 | 0.4922 | 0.2900 | nan | 0.2639 | 0.4741 | 0.0 | 0.7434 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4082 | 0.0 | 0.0 | 0.6635 | 0.0 | 0.3690 | 0.4172 | 0.0 | nan | 0.0 | 0.1981 | 0.0 | 0.0 | 0.8205 | 0.6936 | 0.9015 | 0.0 | 0.0003 | 0.1483 | 0.0 |
| 0.4992 | 55.56 | 3000 | 0.5669 | 0.2928 | 0.3647 | 0.8317 | nan | 0.7826 | 0.9171 | 0.8018 | 0.9165 | 0.2758 | nan | 0.5273 | 0.6986 | 0.0 | 0.9410 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6836 | 0.0 | 0.0 | 0.8296 | 0.0 | 0.4717 | 0.4595 | 0.0 | nan | 0.0 | 0.3613 | 0.0 | 0.0 | 0.9272 | 0.8671 | 0.9424 | 0.0 | 0.0017 | 0.2669 | 0.0 | nan | 0.7196 | 0.8377 | 0.7464 | 0.6016 | 0.2573 | nan | 0.3367 | 0.4767 | 0.0 | 0.7565 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4237 | 0.0 | 0.0 | 0.6653 | 0.0 | 0.3438 | 0.4034 | 0.0 | nan | 0.0 | 0.1974 | 0.0 | 0.0 | 0.8287 | 0.7120 | 0.9031 | 0.0 | 0.0017 | 0.1565 | 0.0 |
| 0.5151 | 57.41 | 3100 | 0.6131 | 0.2864 | 0.3598 | 0.8169 | nan | 0.7793 | 0.9005 | 0.7894 | 0.8762 | 0.2508 | nan | 0.3852 | 0.6197 | 0.0 | 0.9316 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6506 | 0.0 | 0.0 | 0.7819 | 0.0 | 0.5348 | 0.5782 | 0.0 | nan | 0.0 | 0.3853 | 0.0 | 0.0 | 0.9211 | 0.8624 | 0.9390 | 0.0 | 0.0 | 0.3278 | 0.0 | nan | 0.6967 | 0.8145 | 0.7436 | 0.5453 | 0.2362 | nan | 0.2992 | 0.4656 | 0.0 | 0.7549 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4221 | 0.0 | 0.0 | 0.6246 | 0.0 | 0.3873 | 0.3923 | 0.0 | nan | 0.0 | 0.1937 | 0.0 | 0.0 | 0.8257 | 0.7204 | 0.8994 | 0.0 | 0.0 | 0.1417 | 0.0 |
| 0.4688 | 59.26 | 3200 | 0.7342 | 0.2674 | 0.3425 | 0.7758 | nan | 0.6724 | 0.8138 | 0.8211 | 0.8881 | 0.2106 | nan | 0.3435 | 0.4240 | 0.0 | 0.9345 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6881 | 0.0 | 0.0 | 0.8684 | 0.0 | 0.4808 | 0.5494 | 0.0 | nan | 0.0 | 0.2968 | 0.0 | 0.0 | 0.9269 | 0.8322 | 0.9291 | 0.0 | 0.0 | 0.2817 | 0.0 | nan | 0.6227 | 0.7395 | 0.7654 | 0.4008 | 0.1990 | nan | 0.2434 | 0.3473 | 0.0 | 0.7526 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3733 | 0.0 | 0.0 | 0.5567 | 0.0 | 0.3425 | 0.4056 | 0.0 | nan | 0.0 | 0.2033 | 0.0 | 0.0 | 0.8238 | 0.7088 | 0.8978 | 0.0 | 0.0 | 0.1748 | 0.0 |
| 0.4657 | 61.11 | 3300 | 0.7162 | 0.2737 | 0.3487 | 0.7884 | nan | 0.6859 | 0.8395 | 0.7919 | 0.8974 | 0.2306 | nan | 0.4086 | 0.6012 | 0.0 | 0.9212 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7186 | 0.0 | 0.0 | 0.8738 | 0.0 | 0.4323 | 0.5271 | 0.0 | nan | 0.0 | 0.3163 | 0.0 | 0.0 | 0.9373 | 0.8107 | 0.9381 | 0.0 | 0.0 | 0.2280 | 0.0 | nan | 0.6253 | 0.7668 | 0.7584 | 0.4350 | 0.2180 | nan | 0.2835 | 0.4646 | 0.0 | 0.7649 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3505 | 0.0 | 0.0 | 0.5817 | 0.0 | 0.3184 | 0.4275 | 0.0 | nan | 0.0 | 0.1989 | 0.0 | 0.0 | 0.8181 | 0.6916 | 0.9021 | 0.0 | 0.0 | 0.1529 | 0.0 |
| 0.4789 | 62.96 | 3400 | 0.6510 | 0.2824 | 0.3535 | 0.8065 | nan | 0.7245 | 0.8835 | 0.7760 | 0.8886 | 0.2720 | nan | 0.3709 | 0.6675 | 0.0 | 0.9351 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6668 | 0.0 | 0.0 | 0.8450 | 0.0 | 0.4917 | 0.5508 | 0.0 | nan | 0.0 | 0.3585 | 0.0 | 0.0 | 0.9367 | 0.7684 | 0.9321 | 0.0 | 0.0022 | 0.2404 | 0.0 | nan | 0.6754 | 0.7938 | 0.7682 | 0.4856 | 0.2514 | nan | 0.2841 | 0.4779 | 0.0 | 0.7566 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3801 | 0.0 | 0.0 | 0.6118 | 0.0 | 0.3623 | 0.4464 | 0.0 | nan | 0.0 | 0.1990 | 0.0 | 0.0 | 0.8150 | 0.6727 | 0.9029 | 0.0 | 0.0022 | 0.1516 | 0.0 |
| 0.4718 | 64.81 | 3500 | 0.7369 | 0.2741 | 0.3491 | 0.7687 | nan | 0.7886 | 0.7455 | 0.8159 | 0.8865 | 0.2585 | nan | 0.3583 | 0.6014 | 0.0 | 0.9362 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6741 | 0.0 | 0.0 | 0.8728 | 0.0 | 0.4488 | 0.5138 | 0.0 | nan | 0.0 | 0.3533 | 0.0 | 0.0 | 0.9343 | 0.8363 | 0.9345 | 0.0 | 0.0002 | 0.2111 | 0.0 | nan | 0.6800 | 0.6730 | 0.7173 | 0.3412 | 0.2406 | nan | 0.2736 | 0.4651 | 0.0 | 0.7688 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3688 | 0.0 | 0.0 | 0.6494 | 0.0 | 0.3507 | 0.4403 | 0.0 | nan | 0.0 | 0.1950 | 0.0 | 0.0 | 0.8287 | 0.7216 | 0.9039 | 0.0 | 0.0002 | 0.1536 | 0.0 |
| 0.4586 | 66.67 | 3600 | 0.7463 | 0.2799 | 0.3515 | 0.7620 | nan | 0.8497 | 0.6965 | 0.7931 | 0.9041 | 0.2737 | nan | 0.3983 | 0.5616 | 0.0 | 0.9365 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5892 | 0.0 | 0.0 | 0.8439 | 0.0 | 0.5213 | 0.4720 | 0.0 | nan | 0.0 | 0.3429 | 0.0 | 0.0 | 0.9332 | 0.8690 | 0.9431 | 0.0 | 0.0 | 0.3213 | 0.0 | nan | 0.7435 | 0.6450 | 0.7808 | 0.3120 | 0.2517 | nan | 0.3134 | 0.4378 | 0.0 | 0.7305 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4349 | 0.0 | 0.0 | 0.6399 | 0.0 | 0.3813 | 0.4243 | 0.0 | nan | 0.0 | 0.2097 | 0.0 | 0.0 | 0.8287 | 0.7225 | 0.9085 | 0.0 | 0.0 | 0.1926 | 0.0 |
| 0.4506 | 68.52 | 3700 | 0.6409 | 0.2859 | 0.3587 | 0.8030 | nan | 0.7887 | 0.8394 | 0.8054 | 0.8912 | 0.2518 | nan | 0.3799 | 0.6292 | 0.0 | 0.9273 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7090 | 0.0 | 0.0 | 0.8655 | 0.0 | 0.4989 | 0.5447 | 0.0 | nan | 0.0 | 0.3519 | 0.0 | 0.0 | 0.9335 | 0.8362 | 0.9278 | 0.0 | 0.0 | 0.2975 | 0.0 | nan | 0.7248 | 0.7574 | 0.7649 | 0.4118 | 0.2326 | nan | 0.2996 | 0.4840 | 0.0 | 0.7856 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3424 | 0.0 | 0.0 | 0.6639 | 0.0 | 0.3766 | 0.4576 | 0.0 | nan | 0.0 | 0.2055 | 0.0 | 0.0 | 0.8284 | 0.7274 | 0.9032 | 0.0 | 0.0 | 0.1823 | 0.0 |
| 0.4659 | 70.37 | 3800 | 0.6466 | 0.2884 | 0.3577 | 0.8081 | nan | 0.8256 | 0.8420 | 0.7982 | 0.8692 | 0.3484 | nan | 0.4035 | 0.4964 | 0.0 | 0.9489 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6461 | 0.0 | 0.0 | 0.8281 | 0.0 | 0.5593 | 0.5404 | 0.0 | nan | 0.0 | 0.3533 | 0.0 | 0.0 | 0.9345 | 0.7861 | 0.9426 | 0.0 | 0.0 | 0.3225 | 0.0 | nan | 0.7403 | 0.7665 | 0.7649 | 0.4456 | 0.2991 | nan | 0.3198 | 0.3976 | 0.0 | 0.7512 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4217 | 0.0 | 0.0 | 0.6537 | 0.0 | 0.3859 | 0.4470 | 0.0 | nan | 0.0 | 0.2219 | 0.0 | 0.0 | 0.8223 | 0.6908 | 0.9109 | 0.0 | 0.0 | 0.1898 | 0.0 |
| 0.4416 | 72.22 | 3900 | 0.6944 | 0.2824 | 0.3648 | 0.7953 | nan | 0.8073 | 0.8044 | 0.8200 | 0.9039 | 0.2713 | nan | 0.4385 | 0.6632 | 0.0 | 0.9435 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7130 | 0.0 | 0.0 | 0.8448 | 0.0 | 0.5050 | 0.5552 | 0.0 | nan | 0.0 | 0.3791 | 0.0 | 0.0 | 0.9316 | 0.8332 | 0.9378 | 0.0 | 0.0047 | 0.3183 | 0.0 | nan | 0.7045 | 0.7445 | 0.6571 | 0.4107 | 0.2536 | nan | 0.3089 | 0.4711 | 0.0 | 0.7504 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3814 | 0.0 | 0.0 | 0.6468 | 0.0 | 0.3800 | 0.4413 | 0.0 | nan | 0.0 | 0.2243 | 0.0 | 0.0 | 0.8294 | 0.7257 | 0.9078 | 0.0 | 0.0047 | 0.1964 | 0.0 |
| 0.4347 | 74.07 | 4000 | 0.5742 | 0.2960 | 0.3615 | 0.8319 | nan | 0.8135 | 0.9088 | 0.8067 | 0.8959 | 0.3006 | nan | 0.3611 | 0.6055 | 0.0 | 0.9354 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6851 | 0.0 | 0.0 | 0.8692 | 0.0 | 0.4956 | 0.5065 | 0.0 | nan | 0.0 | 0.3493 | 0.0 | 0.0 | 0.9264 | 0.8500 | 0.9368 | 0.0 | 0.0018 | 0.3210 | 0.0 | nan | 0.7436 | 0.8254 | 0.7615 | 0.5609 | 0.2797 | nan | 0.3045 | 0.4733 | 0.0 | 0.7745 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4006 | 0.0 | 0.0 | 0.6424 | 0.0 | 0.3800 | 0.4600 | 0.0 | nan | 0.0 | 0.2126 | 0.0 | 0.0 | 0.8296 | 0.7251 | 0.9085 | 0.0 | 0.0018 | 0.1876 | 0.0 |
| 0.4191 | 75.93 | 4100 | 0.6454 | 0.2879 | 0.3671 | 0.8068 | nan | 0.7757 | 0.8432 | 0.8171 | 0.8803 | 0.3169 | nan | 0.4971 | 0.6474 | 0.0 | 0.9274 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7272 | 0.0 | 0.0 | 0.8520 | 0.0 | 0.4847 | 0.5414 | 0.0 | nan | 0.0 | 0.4113 | 0.0 | 0.0 | 0.9400 | 0.8335 | 0.9348 | 0.0 | 0.0167 | 0.3000 | 0.0 | nan | 0.7112 | 0.7615 | 0.6876 | 0.4533 | 0.2904 | nan | 0.3375 | 0.4768 | 0.0 | 0.7857 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3483 | 0.0 | 0.0 | 0.6544 | 0.0 | 0.3636 | 0.4546 | 0.0 | nan | 0.0 | 0.2086 | 0.0 | 0.0 | 0.8293 | 0.7293 | 0.9093 | 0.0 | 0.0165 | 0.1938 | 0.0 |
| 0.4355 | 77.78 | 4200 | 0.5871 | 0.2915 | 0.3601 | 0.8236 | nan | 0.6673 | 0.9324 | 0.8063 | 0.8730 | 0.2988 | nan | 0.5014 | 0.5734 | 0.0 | 0.9480 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6629 | 0.0 | 0.0 | 0.8653 | 0.0 | 0.4649 | 0.5559 | 0.0 | nan | 0.0 | 0.3890 | 0.0 | 0.0 | 0.9183 | 0.8681 | 0.9537 | 0.0 | 0.0088 | 0.2359 | 0.0 | nan | 0.6266 | 0.8175 | 0.7309 | 0.5730 | 0.2746 | nan | 0.3471 | 0.4465 | 0.0 | 0.7567 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4103 | 0.0 | 0.0 | 0.6684 | 0.0 | 0.3482 | 0.4615 | 0.0 | nan | 0.0 | 0.2062 | 0.0 | 0.0 | 0.8356 | 0.7347 | 0.9131 | 0.0 | 0.0088 | 0.1686 | 0.0 |
| 0.431 | 79.63 | 4300 | 0.5778 | 0.2902 | 0.3540 | 0.8266 | nan | 0.8325 | 0.9042 | 0.7971 | 0.8575 | 0.2707 | nan | 0.4318 | 0.5731 | 0.0 | 0.9428 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6701 | 0.0 | 0.0 | 0.8781 | 0.0 | 0.4081 | 0.5480 | 0.0 | nan | 0.0 | 0.3573 | 0.0 | 0.0 | 0.9299 | 0.7480 | 0.9397 | 0.0 | 0.0343 | 0.2046 | 0.0 | nan | 0.7428 | 0.8112 | 0.7719 | 0.5907 | 0.2545 | nan | 0.3259 | 0.4272 | 0.0 | 0.7505 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4255 | 0.0 | 0.0 | 0.6496 | 0.0 | 0.3209 | 0.4384 | 0.0 | nan | 0.0 | 0.2061 | 0.0 | 0.0 | 0.8142 | 0.6646 | 0.9118 | 0.0 | 0.0338 | 0.1477 | 0.0 |
| 0.4105 | 81.48 | 4400 | 0.7355 | 0.2837 | 0.3547 | 0.7802 | nan | 0.8194 | 0.7548 | 0.8125 | 0.9004 | 0.2421 | nan | 0.4411 | 0.5260 | 0.0 | 0.9344 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6628 | 0.0 | 0.0 | 0.9003 | 0.0 | 0.4114 | 0.5457 | 0.0 | nan | 0.0 | 0.3720 | 0.0 | 0.0 | 0.9386 | 0.8336 | 0.9269 | 0.0 | 0.0905 | 0.2364 | 0.0 | nan | 0.7295 | 0.6964 | 0.7754 | 0.3477 | 0.2325 | nan | 0.3336 | 0.4069 | 0.0 | 0.7641 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4284 | 0.0 | 0.0 | 0.6483 | 0.0 | 0.3512 | 0.4444 | 0.0 | nan | 0.0 | 0.2140 | 0.0 | 0.0 | 0.8260 | 0.7200 | 0.9047 | 0.0 | 0.0883 | 0.1667 | 0.0 |
| 0.4102 | 83.33 | 4500 | 0.6431 | 0.2832 | 0.3550 | 0.8023 | nan | 0.6173 | 0.8926 | 0.8233 | 0.8684 | 0.3015 | nan | 0.4774 | 0.5853 | 0.0 | 0.9435 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7118 | 0.0 | 0.0 | 0.8678 | 0.0 | 0.4544 | 0.5288 | 0.0 | nan | 0.0 | 0.3435 | 0.0 | 0.0 | 0.9438 | 0.7934 | 0.9323 | 0.0 | 0.0264 | 0.2495 | 0.0 | nan | 0.5793 | 0.7784 | 0.7849 | 0.5220 | 0.2750 | nan | 0.3433 | 0.4263 | 0.0 | 0.7478 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3651 | 0.0 | 0.0 | 0.6236 | 0.0 | 0.3489 | 0.4347 | 0.0 | nan | 0.0 | 0.2243 | 0.0 | 0.0 | 0.8184 | 0.6879 | 0.9082 | 0.0 | 0.0258 | 0.1674 | 0.0 |
| 0.4172 | 85.19 | 4600 | 0.6988 | 0.2875 | 0.3537 | 0.7940 | nan | 0.7505 | 0.8194 | 0.8168 | 0.9128 | 0.2640 | nan | 0.4022 | 0.4961 | 0.0 | 0.9391 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6453 | 0.0 | 0.0 | 0.8769 | 0.0 | 0.4600 | 0.5182 | 0.0 | nan | 0.0 | 0.3740 | 0.0 | 0.0 | 0.9378 | 0.8263 | 0.9455 | 0.0 | 0.0900 | 0.2436 | 0.0 | nan | 0.7048 | 0.7401 | 0.7654 | 0.3938 | 0.2454 | nan | 0.2874 | 0.3973 | 0.0 | 0.7572 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4779 | 0.0 | 0.0 | 0.6427 | 0.0 | 0.3531 | 0.4565 | 0.0 | nan | 0.0 | 0.2402 | 0.0 | 0.0 | 0.8333 | 0.7320 | 0.9149 | 0.0 | 0.0880 | 0.1706 | 0.0 |
| 0.3885 | 87.04 | 4700 | 0.5978 | 0.2953 | 0.3647 | 0.8175 | nan | 0.8142 | 0.8718 | 0.8027 | 0.8554 | 0.3059 | nan | 0.3787 | 0.5867 | 0.0 | 0.9403 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6845 | 0.0 | 0.0 | 0.8471 | 0.0 | 0.5315 | 0.5788 | 0.0 | nan | 0.0 | 0.3874 | 0.0 | 0.0 | 0.9354 | 0.8156 | 0.9494 | 0.0 | 0.1221 | 0.2636 | 0.0 | nan | 0.7263 | 0.7825 | 0.7874 | 0.4784 | 0.2859 | nan | 0.2981 | 0.4480 | 0.0 | 0.7604 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3820 | 0.0 | 0.0 | 0.6694 | 0.0 | 0.3781 | 0.4545 | 0.0 | nan | 0.0 | 0.2385 | 0.0 | 0.0 | 0.8301 | 0.7216 | 0.9144 | 0.0 | 0.1131 | 0.1798 | 0.0 |
| 0.3949 | 88.89 | 4800 | 0.5747 | 0.2961 | 0.3643 | 0.8282 | nan | 0.8129 | 0.8976 | 0.8121 | 0.8713 | 0.2894 | nan | 0.4694 | 0.5562 | 0.0 | 0.9391 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6947 | 0.0 | 0.0 | 0.8395 | 0.0 | 0.5260 | 0.5481 | 0.0 | nan | 0.0 | 0.3852 | 0.0 | 0.0 | 0.9428 | 0.8221 | 0.9365 | 0.0 | 0.0559 | 0.2580 | 0.0 | nan | 0.7394 | 0.8130 | 0.7924 | 0.5533 | 0.2658 | nan | 0.3447 | 0.4378 | 0.0 | 0.7620 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3851 | 0.0 | 0.0 | 0.6633 | 0.0 | 0.3722 | 0.4533 | 0.0 | nan | 0.0 | 0.2184 | 0.0 | 0.0 | 0.8217 | 0.7122 | 0.9124 | 0.0 | 0.0534 | 0.1742 | 0.0 |
| 0.4158 | 90.74 | 4900 | 0.6449 | 0.2916 | 0.3657 | 0.8070 | nan | 0.8043 | 0.8271 | 0.8157 | 0.9192 | 0.3073 | nan | 0.4380 | 0.6344 | 0.0 | 0.9340 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7171 | 0.0 | 0.0 | 0.8572 | 0.0 | 0.5188 | 0.5406 | 0.0 | nan | 0.0 | 0.3852 | 0.0 | 0.0 | 0.9420 | 0.8552 | 0.9459 | 0.0 | 0.0450 | 0.2148 | 0.0 | nan | 0.6975 | 0.7564 | 0.7902 | 0.4563 | 0.2853 | nan | 0.3171 | 0.4654 | 0.0 | 0.7879 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3571 | 0.0 | 0.0 | 0.6623 | 0.0 | 0.3819 | 0.4583 | 0.0 | nan | 0.0 | 0.2243 | 0.0 | 0.0 | 0.8302 | 0.7431 | 0.9150 | 0.0 | 0.0421 | 0.1602 | 0.0 |
| 0.3856 | 92.59 | 5000 | 0.7492 | 0.2796 | 0.3559 | 0.7680 | nan | 0.8020 | 0.7250 | 0.8248 | 0.9139 | 0.2500 | nan | 0.3621 | 0.5930 | 0.0 | 0.9411 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6964 | 0.0 | 0.0 | 0.9036 | 0.0 | 0.3460 | 0.5234 | 0.0 | nan | 0.0 | 0.4271 | 0.0 | 0.0 | 0.9255 | 0.8871 | 0.9524 | 0.0 | 0.0666 | 0.2471 | 0.0 | nan | 0.6954 | 0.6697 | 0.7878 | 0.3256 | 0.2365 | nan | 0.2864 | 0.4452 | 0.0 | 0.7724 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3838 | 0.0 | 0.0 | 0.6413 | 0.0 | 0.2968 | 0.4239 | 0.0 | nan | 0.0 | 0.2271 | 0.0 | 0.0 | 0.8382 | 0.7554 | 0.9171 | 0.0 | 0.0624 | 0.1808 | 0.0 |
| 0.3915 | 94.44 | 5100 | 0.6402 | 0.2893 | 0.3608 | 0.8012 | nan | 0.7614 | 0.8406 | 0.7898 | 0.9029 | 0.3080 | nan | 0.3857 | 0.6328 | 0.0 | 0.9373 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7010 | 0.0 | 0.0 | 0.8626 | 0.0 | 0.5045 | 0.5235 | 0.0 | nan | 0.0 | 0.3802 | 0.0 | 0.0 | 0.9442 | 0.7561 | 0.9401 | 0.0 | 0.1133 | 0.2603 | 0.0 | nan | 0.6850 | 0.7546 | 0.7750 | 0.4451 | 0.2827 | nan | 0.3049 | 0.4715 | 0.0 | 0.7694 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3810 | 0.0 | 0.0 | 0.6626 | 0.0 | 0.3832 | 0.4394 | 0.0 | nan | 0.0 | 0.2214 | 0.0 | 0.0 | 0.8125 | 0.6725 | 0.9138 | 0.0 | 0.1034 | 0.1797 | 0.0 |
| 0.3732 | 96.3 | 5200 | 0.7308 | 0.2840 | 0.3598 | 0.7795 | nan | 0.7534 | 0.7741 | 0.8137 | 0.9035 | 0.2614 | nan | 0.4308 | 0.6431 | 0.0 | 0.9315 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7293 | 0.0 | 0.0 | 0.8884 | 0.0 | 0.4166 | 0.5225 | 0.0 | nan | 0.0 | 0.3992 | 0.0 | 0.0 | 0.9329 | 0.8517 | 0.9519 | 0.0 | 0.0756 | 0.2354 | 0.0 | nan | 0.6723 | 0.6942 | 0.7836 | 0.3665 | 0.2474 | nan | 0.3333 | 0.4669 | 0.0 | 0.7857 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3545 | 0.0 | 0.0 | 0.6375 | 0.0 | 0.3443 | 0.4311 | 0.0 | nan | 0.0 | 0.2377 | 0.0 | 0.0 | 0.8346 | 0.7428 | 0.9173 | 0.0 | 0.0659 | 0.1722 | 0.0 |
| 0.3843 | 98.15 | 5300 | 0.6580 | 0.2864 | 0.3556 | 0.7962 | nan | 0.7254 | 0.8440 | 0.7996 | 0.8889 | 0.2696 | nan | 0.4320 | 0.6399 | 0.0 | 0.9285 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6708 | 0.0 | 0.0 | 0.8872 | 0.0 | 0.4070 | 0.5262 | 0.0 | nan | 0.0 | 0.3791 | 0.0 | 0.0 | 0.9423 | 0.7462 | 0.9487 | 0.0 | 0.1269 | 0.2159 | 0.0 | nan | 0.6660 | 0.7540 | 0.7836 | 0.4484 | 0.2521 | nan | 0.3307 | 0.4691 | 0.0 | 0.7963 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3896 | 0.0 | 0.0 | 0.6071 | 0.0 | 0.3185 | 0.4568 | 0.0 | nan | 0.0 | 0.2206 | 0.0 | 0.0 | 0.8138 | 0.6608 | 0.9170 | 0.0 | 0.1163 | 0.1644 | 0.0 |
| 0.3903 | 100.0 | 5400 | 0.6288 | 0.2881 | 0.3541 | 0.8086 | nan | 0.7763 | 0.8567 | 0.8240 | 0.8951 | 0.2446 | nan | 0.4334 | 0.5553 | 0.0 | 0.9354 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6738 | 0.0 | 0.0 | 0.8901 | 0.0 | 0.4777 | 0.5458 | 0.0 | nan | 0.0 | 0.3297 | 0.0 | 0.0 | 0.9417 | 0.7702 | 0.9457 | 0.0 | 0.0457 | 0.1907 | 0.0 | nan | 0.6906 | 0.7727 | 0.7923 | 0.4705 | 0.2358 | nan | 0.3295 | 0.4509 | 0.0 | 0.7755 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3981 | 0.0 | 0.0 | 0.6528 | 0.0 | 0.3644 | 0.4573 | 0.0 | nan | 0.0 | 0.2197 | 0.0 | 0.0 | 0.8176 | 0.6797 | 0.9157 | 0.0 | 0.0444 | 0.1500 | 0.0 |
| 0.355 | 101.85 | 5500 | 0.7112 | 0.2860 | 0.3563 | 0.7844 | nan | 0.7834 | 0.7947 | 0.8123 | 0.8807 | 0.2262 | nan | 0.3408 | 0.6020 | 0.0 | 0.9382 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6759 | 0.0 | 0.0 | 0.8838 | 0.0 | 0.4491 | 0.5845 | 0.0 | nan | 0.0 | 0.4029 | 0.0 | 0.0 | 0.9295 | 0.7890 | 0.9477 | 0.0 | 0.1045 | 0.2564 | 0.0 | nan | 0.7086 | 0.7078 | 0.7825 | 0.3607 | 0.2168 | nan | 0.2792 | 0.4624 | 0.0 | 0.7767 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4366 | 0.0 | 0.0 | 0.6667 | 0.0 | 0.3443 | 0.4351 | 0.0 | nan | 0.0 | 0.2386 | 0.0 | 0.0 | 0.8283 | 0.7060 | 0.9167 | 0.0 | 0.1000 | 0.1847 | 0.0 |
| 0.3729 | 103.7 | 5600 | 0.6849 | 0.2835 | 0.3591 | 0.7887 | nan | 0.8150 | 0.7790 | 0.8122 | 0.8834 | 0.2787 | nan | 0.4506 | 0.6270 | 0.0 | 0.9253 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7408 | 0.0 | 0.0 | 0.9180 | 0.0 | 0.3273 | 0.5197 | 0.0 | nan | 0.0 | 0.4167 | 0.0 | 0.0 | 0.9358 | 0.8379 | 0.9406 | 0.0 | 0.0480 | 0.2345 | 0.0 | nan | 0.6989 | 0.7189 | 0.7862 | 0.3939 | 0.2648 | nan | 0.3292 | 0.4851 | 0.0 | 0.7976 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3286 | 0.0 | 0.0 | 0.6202 | 0.0 | 0.2779 | 0.4371 | 0.0 | nan | 0.0 | 0.2402 | 0.0 | 0.0 | 0.8321 | 0.7297 | 0.9140 | 0.0 | 0.0437 | 0.1749 | 0.0 |
| 0.3895 | 105.56 | 5700 | 0.6917 | 0.2909 | 0.3669 | 0.7881 | nan | 0.8520 | 0.7575 | 0.8037 | 0.9006 | 0.2858 | nan | 0.4909 | 0.6331 | 0.0 | 0.9365 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6811 | 0.0 | 0.0 | 0.8525 | 0.0 | 0.5087 | 0.5374 | 0.0 | nan | 0.0 | 0.3766 | 0.0 | 0.0 | 0.9432 | 0.8426 | 0.9479 | 0.0 | 0.0982 | 0.2931 | 0.0 | nan | 0.7338 | 0.7000 | 0.7834 | 0.3764 | 0.2683 | nan | 0.3430 | 0.4719 | 0.0 | 0.7841 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3792 | 0.0 | 0.0 | 0.6627 | 0.0 | 0.3815 | 0.4454 | 0.0 | nan | 0.0 | 0.2245 | 0.0 | 0.0 | 0.8273 | 0.7311 | 0.9183 | 0.0 | 0.0894 | 0.1885 | 0.0 |
| 0.3602 | 107.41 | 5800 | 0.5475 | 0.3042 | 0.3685 | 0.8353 | nan | 0.7641 | 0.9319 | 0.8055 | 0.8737 | 0.3132 | nan | 0.4868 | 0.6244 | 0.0 | 0.9407 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6873 | 0.0 | 0.0 | 0.8810 | 0.0 | 0.4631 | 0.5387 | 0.0 | nan | 0.0 | 0.4382 | 0.0 | 0.0 | 0.9298 | 0.7866 | 0.9486 | 0.0 | 0.1344 | 0.2454 | 0.0 | nan | 0.7121 | 0.8270 | 0.7806 | 0.6491 | 0.2900 | nan | 0.3497 | 0.4700 | 0.0 | 0.7753 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4480 | 0.0 | 0.0 | 0.6577 | 0.0 | 0.3509 | 0.4582 | 0.0 | nan | 0.0 | 0.2281 | 0.0 | 0.0 | 0.8267 | 0.6946 | 0.9179 | 0.0 | 0.1213 | 0.1782 | 0.0 |
| 0.3674 | 109.26 | 5900 | 0.6421 | 0.2919 | 0.3540 | 0.8016 | nan | 0.6932 | 0.8577 | 0.8144 | 0.9018 | 0.3136 | nan | 0.3961 | 0.5655 | 0.0 | 0.9370 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6563 | 0.0 | 0.0 | 0.9140 | 0.0 | 0.3656 | 0.4891 | 0.0 | nan | 0.0 | 0.3775 | 0.0 | 0.0 | 0.9373 | 0.8204 | 0.9427 | 0.0 | 0.1378 | 0.2090 | 0.0 | nan | 0.6366 | 0.7503 | 0.7829 | 0.4541 | 0.2884 | nan | 0.3050 | 0.4442 | 0.0 | 0.7727 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4780 | 0.0 | 0.0 | 0.6644 | 0.0 | 0.3163 | 0.4511 | 0.0 | nan | 0.0 | 0.2316 | 0.0 | 0.0 | 0.8321 | 0.7257 | 0.9157 | 0.0 | 0.1268 | 0.1636 | 0.0 |
| 0.3657 | 111.11 | 6000 | 0.5813 | 0.2955 | 0.3637 | 0.8277 | nan | 0.7870 | 0.8975 | 0.7014 | 0.8566 | 0.3741 | nan | 0.4469 | 0.6219 | 0.0 | 0.9403 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7185 | 0.0 | 0.0 | 0.8827 | 0.0 | 0.4503 | 0.5681 | 0.0 | nan | 0.0 | 0.3815 | 0.0 | 0.0 | 0.9397 | 0.8275 | 0.9484 | 0.0 | 0.0968 | 0.1999 | 0.0 | nan | 0.7203 | 0.8097 | 0.6881 | 0.5693 | 0.3405 | nan | 0.3293 | 0.4754 | 0.0 | 0.7846 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3863 | 0.0 | 0.0 | 0.6346 | 0.0 | 0.3557 | 0.4385 | 0.0 | nan | 0.0 | 0.2181 | 0.0 | 0.0 | 0.8287 | 0.7172 | 0.9189 | 0.0 | 0.0846 | 0.1578 | 0.0 |
| 0.367 | 112.96 | 6100 | 0.6609 | 0.2897 | 0.3661 | 0.7984 | nan | 0.7903 | 0.8284 | 0.8039 | 0.9016 | 0.2212 | nan | 0.4163 | 0.6816 | 0.0 | 0.9453 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7209 | 0.0 | 0.0 | 0.8372 | 0.0 | 0.4577 | 0.5511 | 0.0 | nan | 0.0 | 0.4283 | 0.0 | 0.0 | 0.9390 | 0.7875 | 0.9493 | 0.0 | 0.1399 | 0.3157 | 0.0 | nan | 0.7203 | 0.7408 | 0.7738 | 0.4105 | 0.2117 | nan | 0.3182 | 0.4784 | 0.0 | 0.7828 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3859 | 0.0 | 0.0 | 0.6672 | 0.0 | 0.3588 | 0.4378 | 0.0 | nan | 0.0 | 0.2244 | 0.0 | 0.0 | 0.8282 | 0.7032 | 0.9187 | 0.0 | 0.1137 | 0.1958 | 0.0 |
| 0.3638 | 114.81 | 6200 | 0.7997 | 0.2803 | 0.3592 | 0.7547 | nan | 0.8092 | 0.6782 | 0.8102 | 0.9284 | 0.2905 | nan | 0.3691 | 0.6185 | 0.0 | 0.9403 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7520 | 0.0 | 0.0 | 0.8609 | 0.0 | 0.4178 | 0.5567 | 0.0 | nan | 0.0 | 0.3931 | 0.0 | 0.0 | 0.9474 | 0.8770 | 0.9435 | 0.0000 | 0.0667 | 0.2347 | 0.0 | nan | 0.7091 | 0.6261 | 0.7837 | 0.2942 | 0.2753 | nan | 0.2928 | 0.4552 | 0.0 | 0.7808 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3801 | 0.0 | 0.0 | 0.6648 | 0.0 | 0.3421 | 0.4315 | 0.0 | nan | 0.0 | 0.2152 | 0.0 | 0.0 | 0.8297 | 0.7448 | 0.9168 | 0.0000 | 0.0595 | 0.1680 | 0.0 |
| 0.3654 | 116.67 | 6300 | 0.6019 | 0.2956 | 0.3645 | 0.8175 | nan | 0.8244 | 0.8533 | 0.6788 | 0.8927 | 0.3058 | nan | 0.4950 | 0.6003 | 0.0 | 0.9396 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6930 | 0.0 | 0.0 | 0.8964 | 0.0 | 0.3647 | 0.5196 | 0.0 | nan | 0.0 | 0.4113 | 0.0 | 0.0 | 0.9257 | 0.8551 | 0.9594 | 0.0 | 0.1310 | 0.3167 | 0.0 | nan | 0.7337 | 0.7732 | 0.6601 | 0.4748 | 0.2853 | nan | 0.3520 | 0.4685 | 0.0 | 0.7868 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4121 | 0.0 | 0.0 | 0.6708 | 0.0 | 0.3117 | 0.4434 | 0.0 | nan | 0.0 | 0.2326 | 0.0 | 0.0 | 0.8405 | 0.7541 | 0.9187 | 0.0 | 0.1205 | 0.2201 | 0.0 |
| 0.3652 | 118.52 | 6400 | 0.5981 | 0.2967 | 0.3649 | 0.8205 | nan | 0.7551 | 0.8909 | 0.6342 | 0.9054 | 0.3093 | nan | 0.4234 | 0.6313 | 0.0 | 0.9387 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6751 | 0.0 | 0.0 | 0.8700 | 0.0 | 0.4187 | 0.5633 | 0.0 | nan | 0.0 | 0.4465 | 0.0 | 0.0 | 0.9262 | 0.8528 | 0.9534 | 0.0002 | 0.1437 | 0.3398 | 0.0 | nan | 0.6956 | 0.7948 | 0.6246 | 0.4963 | 0.2861 | nan | 0.3171 | 0.4870 | 0.0 | 0.7941 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4467 | 0.0 | 0.0 | 0.6719 | 0.0 | 0.3338 | 0.4473 | 0.0 | nan | 0.0 | 0.2377 | 0.0 | 0.0 | 0.8417 | 0.7531 | 0.9198 | 0.0002 | 0.1302 | 0.2180 | 0.0 |
| 0.3559 | 120.37 | 6500 | 0.5780 | 0.3026 | 0.3668 | 0.8256 | nan | 0.7517 | 0.9024 | 0.8103 | 0.8905 | 0.3788 | nan | 0.3990 | 0.5648 | 0.0 | 0.9522 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6491 | 0.0 | 0.0 | 0.8623 | 0.0 | 0.5208 | 0.5227 | 0.0 | nan | 0.0 | 0.4095 | 0.0 | 0.0 | 0.9315 | 0.8073 | 0.9531 | 0.0 | 0.1367 | 0.2937 | 0.0 | nan | 0.6917 | 0.8084 | 0.7831 | 0.5645 | 0.3365 | nan | 0.3195 | 0.4446 | 0.0 | 0.7603 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4620 | 0.0 | 0.0 | 0.6310 | 0.0 | 0.3859 | 0.4599 | 0.0 | nan | 0.0 | 0.2286 | 0.0 | 0.0 | 0.8329 | 0.7236 | 0.9192 | 0.0 | 0.1259 | 0.2064 | 0.0 |
| 0.3348 | 122.22 | 6600 | 0.5522 | 0.3023 | 0.3735 | 0.8379 | nan | 0.8289 | 0.9088 | 0.6882 | 0.8947 | 0.3594 | nan | 0.4373 | 0.6918 | 0.0 | 0.9448 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7098 | 0.0 | 0.0 | 0.8356 | 0.0 | 0.5156 | 0.5832 | 0.0 | nan | 0.0 | 0.4059 | 0.0 | 0.0 | 0.9417 | 0.8359 | 0.9578 | 0.0009 | 0.1308 | 0.2812 | 0.0 | nan | 0.7433 | 0.8257 | 0.6716 | 0.5930 | 0.3306 | nan | 0.3517 | 0.4956 | 0.0 | 0.7897 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3747 | 0.0 | 0.0 | 0.6736 | 0.0 | 0.3802 | 0.4271 | 0.0 | nan | 0.0 | 0.2180 | 0.0 | 0.0 | 0.8323 | 0.7373 | 0.9200 | 0.0008 | 0.1171 | 0.1906 | 0.0 |
| 0.3653 | 124.07 | 6700 | 0.6070 | 0.2986 | 0.3679 | 0.8216 | nan | 0.6919 | 0.9133 | 0.8114 | 0.8786 | 0.3306 | nan | 0.4558 | 0.6517 | 0.0 | 0.9455 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7183 | 0.0 | 0.0 | 0.8672 | 0.0 | 0.5019 | 0.5472 | 0.0 | nan | 0.0 | 0.4162 | 0.0 | 0.0 | 0.9390 | 0.8019 | 0.9414 | 0.0 | 0.0957 | 0.2664 | 0.0 | nan | 0.6394 | 0.8000 | 0.7821 | 0.6011 | 0.3025 | nan | 0.3359 | 0.4969 | 0.0 | 0.7887 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3803 | 0.0 | 0.0 | 0.6386 | 0.0 | 0.3855 | 0.4427 | 0.0 | nan | 0.0 | 0.2268 | 0.0 | 0.0 | 0.8298 | 0.7136 | 0.9170 | 0.0 | 0.0886 | 0.1861 | 0.0 |
| 0.3216 | 125.93 | 6800 | 0.6091 | 0.3003 | 0.3729 | 0.8176 | nan | 0.8300 | 0.8429 | 0.8233 | 0.9193 | 0.3587 | nan | 0.4900 | 0.6837 | 0.0 | 0.9439 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7272 | 0.0 | 0.0 | 0.8781 | 0.0 | 0.4143 | 0.5307 | 0.0 | nan | 0.0 | 0.4051 | 0.0116 | 0.0 | 0.9314 | 0.8400 | 0.9539 | 0.0 | 0.0921 | 0.2558 | 0.0 | nan | 0.7584 | 0.7706 | 0.7892 | 0.4626 | 0.3268 | nan | 0.3678 | 0.5054 | 0.0 | 0.7811 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3947 | 0.0 | 0.0 | 0.6604 | 0.0 | 0.3306 | 0.4515 | 0.0 | nan | 0.0 | 0.2265 | 0.0116 | 0.0 | 0.8386 | 0.7409 | 0.9204 | 0.0 | 0.0850 | 0.1887 | 0.0 |
| 0.358 | 127.78 | 6900 | 0.5287 | 0.3110 | 0.3729 | 0.8465 | nan | 0.8062 | 0.9359 | 0.8173 | 0.8927 | 0.3346 | nan | 0.4527 | 0.6392 | 0.0 | 0.9354 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6945 | 0.0 | 0.0 | 0.8722 | 0.0 | 0.4896 | 0.5317 | 0.0 | nan | 0.0 | 0.4070 | 0.0 | 0.0 | 0.9436 | 0.8467 | 0.9449 | 0.0 | 0.1243 | 0.2646 | 0.0 | nan | 0.7567 | 0.8356 | 0.7873 | 0.6388 | 0.3087 | nan | 0.3575 | 0.4948 | 0.0 | 0.7958 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4146 | 0.0 | 0.0 | 0.6798 | 0.0 | 0.3797 | 0.4630 | 0.0 | nan | 0.0 | 0.2283 | 0.0 | 0.0 | 0.8356 | 0.7467 | 0.9182 | 0.0 | 0.1175 | 0.1940 | 0.0 |
| 0.3402 | 129.63 | 7000 | 0.6208 | 0.2946 | 0.3637 | 0.8141 | nan | 0.7658 | 0.8754 | 0.8158 | 0.9118 | 0.2322 | nan | 0.4017 | 0.6637 | 0.0 | 0.9438 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6933 | 0.0 | 0.0 | 0.8763 | 0.0 | 0.3895 | 0.5601 | 0.0 | nan | 0.0 | 0.4252 | 0.0043 | 0.0 | 0.9423 | 0.7810 | 0.9448 | 0.0000 | 0.1253 | 0.2865 | 0.0 | nan | 0.7060 | 0.7779 | 0.7885 | 0.4813 | 0.2236 | nan | 0.3133 | 0.4921 | 0.0 | 0.7863 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4236 | 0.0 | 0.0 | 0.6817 | 0.0 | 0.3292 | 0.4440 | 0.0 | nan | 0.0 | 0.2236 | 0.0043 | 0.0 | 0.8247 | 0.6964 | 0.9178 | 0.0000 | 0.1163 | 0.1976 | 0.0 |
| 0.3218 | 131.48 | 7100 | 0.5444 | 0.3108 | 0.3748 | 0.8443 | nan | 0.8296 | 0.9244 | 0.8276 | 0.8878 | 0.2774 | nan | 0.4782 | 0.6750 | 0.0 | 0.9366 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6983 | 0.0 | 0.0 | 0.8664 | 0.0 | 0.4743 | 0.5451 | 0.0 | nan | 0.0 | 0.4187 | 0.0113 | 0.0 | 0.9391 | 0.8642 | 0.9558 | 0.0 | 0.1166 | 0.2684 | 0.0 | nan | 0.7636 | 0.8260 | 0.7984 | 0.6281 | 0.2647 | nan | 0.3705 | 0.5066 | 0.0 | 0.8001 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4217 | 0.0 | 0.0 | 0.6783 | 0.0 | 0.3686 | 0.4581 | 0.0 | nan | 0.0 | 0.2178 | 0.0113 | 0.0 | 0.8396 | 0.7666 | 0.9213 | 0.0 | 0.1113 | 0.1943 | 0.0 |
| 0.3413 | 133.33 | 7200 | 0.5473 | 0.3063 | 0.3680 | 0.8412 | nan | 0.8038 | 0.9272 | 0.7396 | 0.8885 | 0.2742 | nan | 0.4489 | 0.5761 | 0.0 | 0.9434 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6970 | 0.0 | 0.0 | 0.8722 | 0.0 | 0.5185 | 0.5545 | 0.0 | nan | 0.0 | 0.4060 | 0.0241 | 0.0 | 0.9384 | 0.8611 | 0.9453 | 0.0 | 0.1082 | 0.2489 | 0.0 | nan | 0.7450 | 0.8245 | 0.7280 | 0.6104 | 0.2595 | nan | 0.3532 | 0.4660 | 0.0 | 0.7846 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4313 | 0.0 | 0.0 | 0.6807 | 0.0 | 0.3896 | 0.4684 | 0.0 | nan | 0.0 | 0.2284 | 0.0241 | 0.0 | 0.8397 | 0.7610 | 0.9186 | 0.0 | 0.1022 | 0.1871 | 0.0 |
| 0.3463 | 135.19 | 7300 | 0.6341 | 0.2922 | 0.3603 | 0.8106 | nan | 0.8087 | 0.8519 | 0.8052 | 0.9145 | 0.2425 | nan | 0.3711 | 0.5676 | 0.0 | 0.9336 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7046 | 0.0 | 0.0 | 0.8888 | 0.0 | 0.3923 | 0.5815 | 0.0 | nan | 0.0 | 0.4055 | 0.0319 | 0.0 | 0.9344 | 0.8036 | 0.9503 | 0.0 | 0.1152 | 0.2276 | 0.0 | nan | 0.7410 | 0.7674 | 0.7870 | 0.4522 | 0.2330 | nan | 0.3152 | 0.4495 | 0.0 | 0.7851 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4247 | 0.0 | 0.0 | 0.6553 | 0.0 | 0.3108 | 0.4330 | 0.0 | nan | 0.0 | 0.2290 | 0.0319 | 0.0 | 0.8273 | 0.7106 | 0.9198 | 0.0 | 0.1051 | 0.1720 | 0.0 |
| 0.317 | 137.04 | 7400 | 0.5689 | 0.2996 | 0.3673 | 0.8346 | nan | 0.8380 | 0.9048 | 0.7202 | 0.8874 | 0.2300 | nan | 0.4682 | 0.6001 | 0.0 | 0.9282 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7278 | 0.0 | 0.0 | 0.8811 | 0.0 | 0.4430 | 0.5714 | 0.0 | nan | 0.0 | 0.4115 | 0.0148 | 0.0 | 0.9311 | 0.8477 | 0.9517 | 0.0 | 0.1019 | 0.2961 | 0.0 | nan | 0.7600 | 0.8107 | 0.7092 | 0.5843 | 0.2243 | nan | 0.3634 | 0.4741 | 0.0 | 0.7839 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3683 | 0.0 | 0.0 | 0.6667 | 0.0 | 0.3433 | 0.4519 | 0.0 | nan | 0.0 | 0.2331 | 0.0148 | 0.0 | 0.8387 | 0.7448 | 0.9201 | 0.0 | 0.0930 | 0.2020 | 0.0 |
| 0.3241 | 138.89 | 7500 | 0.5921 | 0.3030 | 0.3698 | 0.8264 | nan | 0.7560 | 0.9038 | 0.8054 | 0.8993 | 0.2921 | nan | 0.4358 | 0.6497 | 0.0 | 0.9426 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6843 | 0.0 | 0.0 | 0.8596 | 0.0 | 0.4666 | 0.5531 | 0.0 | nan | 0.0014 | 0.4125 | 0.0280 | 0.0 | 0.9419 | 0.8345 | 0.9468 | 0.0005 | 0.1478 | 0.2726 | 0.0 | nan | 0.6935 | 0.8021 | 0.7869 | 0.5437 | 0.2719 | nan | 0.3428 | 0.4933 | 0.0 | 0.7917 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4134 | 0.0 | 0.0 | 0.6707 | 0.0 | 0.3632 | 0.4528 | 0.0 | nan | 0.0014 | 0.2150 | 0.0280 | 0.0 | 0.8367 | 0.7422 | 0.9203 | 0.0005 | 0.1346 | 0.1914 | 0.0 |
| 0.3341 | 140.74 | 7600 | 0.5641 | 0.3038 | 0.3702 | 0.8325 | nan | 0.7624 | 0.9172 | 0.8114 | 0.8959 | 0.2940 | nan | 0.5063 | 0.6105 | 0.0 | 0.9434 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7179 | 0.0 | 0.0 | 0.8732 | 0.0 | 0.5230 | 0.5420 | 0.0 | nan | 0.0 | 0.4148 | 0.0425 | 0.0 | 0.9411 | 0.7719 | 0.9528 | 0.0 | 0.0840 | 0.2431 | 0.0 | nan | 0.7064 | 0.8174 | 0.7877 | 0.6132 | 0.2760 | nan | 0.3594 | 0.4823 | 0.0 | 0.7859 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4116 | 0.0 | 0.0 | 0.6715 | 0.0 | 0.3953 | 0.4613 | 0.0 | nan | 0.0 | 0.2236 | 0.0425 | 0.0 | 0.8241 | 0.6840 | 0.9219 | 0.0 | 0.0790 | 0.1794 | 0.0 |
| 0.3135 | 142.59 | 7700 | 0.5712 | 0.3062 | 0.3709 | 0.8300 | nan | 0.7952 | 0.8986 | 0.8100 | 0.8619 | 0.3084 | nan | 0.4715 | 0.6006 | 0.0 | 0.9439 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6837 | 0.0 | 0.0 | 0.8669 | 0.0 | 0.5083 | 0.5475 | 0.0 | nan | 0.0 | 0.4053 | 0.0384 | 0.0 | 0.9443 | 0.8124 | 0.9524 | 0.0 | 0.1181 | 0.3029 | 0.0 | nan | 0.7270 | 0.8042 | 0.7907 | 0.5385 | 0.2877 | nan | 0.3610 | 0.4689 | 0.0 | 0.7784 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4431 | 0.0 | 0.0 | 0.6764 | 0.0 | 0.3905 | 0.4659 | 0.0 | nan | 0.0 | 0.2280 | 0.0384 | 0.0 | 0.8312 | 0.7224 | 0.9227 | 0.0 | 0.1114 | 0.2117 | 0.0 |
| 0.2985 | 144.44 | 7800 | 0.5705 | 0.3063 | 0.3739 | 0.8331 | nan | 0.7844 | 0.9061 | 0.8011 | 0.8987 | 0.3105 | nan | 0.4674 | 0.6336 | 0.0 | 0.9448 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7174 | 0.0 | 0.0 | 0.8645 | 0.0 | 0.4836 | 0.5414 | 0.0 | nan | 0.0 | 0.4277 | 0.0445 | 0.0 | 0.9390 | 0.8448 | 0.9518 | 0.0003 | 0.1004 | 0.3014 | 0.0 | nan | 0.7238 | 0.8110 | 0.7871 | 0.5506 | 0.2869 | nan | 0.3545 | 0.4901 | 0.0 | 0.7879 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4047 | 0.0 | 0.0 | 0.6872 | 0.0 | 0.3776 | 0.4572 | 0.0 | nan | 0.0 | 0.2263 | 0.0445 | 0.0 | 0.8392 | 0.7464 | 0.9226 | 0.0003 | 0.0950 | 0.2101 | 0.0 |
| 0.3083 | 146.3 | 7900 | 0.6255 | 0.3029 | 0.3735 | 0.8173 | nan | 0.7919 | 0.8576 | 0.8118 | 0.9101 | 0.3017 | nan | 0.4374 | 0.6462 | 0.0 | 0.9461 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7137 | 0.0 | 0.0 | 0.8706 | 0.0 | 0.5111 | 0.5445 | 0.0 | nan | 0.0001 | 0.4282 | 0.0589 | 0.0 | 0.9317 | 0.8537 | 0.9628 | 0.0000 | 0.1030 | 0.2713 | 0.0 | nan | 0.7389 | 0.7675 | 0.7857 | 0.4623 | 0.2774 | nan | 0.3477 | 0.4815 | 0.0 | 0.7777 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4220 | 0.0 | 0.0 | 0.6797 | 0.0 | 0.3926 | 0.4652 | 0.0 | nan | 0.0001 | 0.2292 | 0.0588 | 0.0 | 0.8421 | 0.7549 | 0.9219 | 0.0000 | 0.0939 | 0.1926 | 0.0 |
| 0.3132 | 148.15 | 8000 | 0.6407 | 0.2987 | 0.3697 | 0.8084 | nan | 0.8056 | 0.8366 | 0.8045 | 0.9187 | 0.2881 | nan | 0.3901 | 0.6494 | 0.0 | 0.9456 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7065 | 0.0 | 0.0 | 0.8674 | 0.0 | 0.4835 | 0.5578 | 0.0 | nan | 0.0 | 0.4107 | 0.0690 | 0.0 | 0.9364 | 0.8069 | 0.9579 | 0.0 | 0.1392 | 0.2549 | 0.0 | nan | 0.7400 | 0.7511 | 0.7860 | 0.4288 | 0.2705 | nan | 0.3211 | 0.4907 | 0.0 | 0.7845 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4064 | 0.0 | 0.0 | 0.6776 | 0.0 | 0.3750 | 0.4463 | 0.0 | nan | 0.0 | 0.2323 | 0.0689 | 0.0 | 0.8346 | 0.7221 | 0.9215 | 0.0 | 0.1189 | 0.1827 | 0.0 |
| 0.3227 | 150.0 | 8100 | 0.6215 | 0.3010 | 0.3747 | 0.8154 | nan | 0.8072 | 0.8523 | 0.7987 | 0.9122 | 0.3387 | nan | 0.4049 | 0.6521 | 0.0 | 0.9464 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7268 | 0.0 | 0.0 | 0.8526 | 0.0 | 0.5301 | 0.5632 | 0.0 | nan | 0.0015 | 0.4353 | 0.0597 | 0.0 | 0.9352 | 0.8036 | 0.9574 | 0.0 | 0.1202 | 0.2916 | 0.0 | nan | 0.7319 | 0.7712 | 0.7839 | 0.4639 | 0.3115 | nan | 0.3235 | 0.4815 | 0.0 | 0.7813 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3954 | 0.0 | 0.0 | 0.6800 | 0.0 | 0.3930 | 0.4522 | 0.0 | nan | 0.0015 | 0.2349 | 0.0596 | 0.0 | 0.8319 | 0.7106 | 0.9225 | 0.0 | 0.1071 | 0.1947 | 0.0 |
| 0.3041 | 151.85 | 8200 | 0.6365 | 0.2982 | 0.3695 | 0.8091 | nan | 0.7813 | 0.8516 | 0.8100 | 0.9057 | 0.2989 | nan | 0.4138 | 0.6557 | 0.0 | 0.9422 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7155 | 0.0 | 0.0 | 0.8717 | 0.0 | 0.5273 | 0.5454 | 0.0 | nan | 0.0 | 0.4293 | 0.0595 | 0.0 | 0.9354 | 0.7484 | 0.9557 | 0.0 | 0.1301 | 0.2483 | 0.0 | nan | 0.7117 | 0.7612 | 0.7891 | 0.4543 | 0.2787 | nan | 0.3305 | 0.4950 | 0.0 | 0.7874 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4007 | 0.0 | 0.0 | 0.6772 | 0.0 | 0.3923 | 0.4632 | 0.0 | nan | 0.0 | 0.2342 | 0.0594 | 0.0 | 0.8230 | 0.6691 | 0.9227 | 0.0 | 0.1142 | 0.1800 | 0.0 |
| 0.3295 | 153.7 | 8300 | 0.5763 | 0.3064 | 0.3745 | 0.8319 | nan | 0.8091 | 0.9000 | 0.8155 | 0.8927 | 0.3048 | nan | 0.4385 | 0.6734 | 0.0 | 0.9391 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7114 | 0.0 | 0.0 | 0.8707 | 0.0 | 0.4884 | 0.5694 | 0.0 | nan | 0.0032 | 0.4179 | 0.0581 | 0.0 | 0.9385 | 0.8107 | 0.9552 | 0.0006 | 0.1316 | 0.2550 | 0.0 | nan | 0.7460 | 0.8059 | 0.7926 | 0.5582 | 0.2844 | nan | 0.3545 | 0.5009 | 0.0 | 0.7892 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4184 | 0.0 | 0.0 | 0.6741 | 0.0 | 0.3769 | 0.4455 | 0.0 | nan | 0.0032 | 0.2317 | 0.0581 | 0.0 | 0.8317 | 0.7120 | 0.9232 | 0.0005 | 0.1162 | 0.1807 | 0.0 |
| 0.3057 | 155.56 | 8400 | 0.6602 | 0.2967 | 0.3669 | 0.8053 | nan | 0.7862 | 0.8400 | 0.8012 | 0.9083 | 0.2761 | nan | 0.3977 | 0.6548 | 0.0 | 0.9399 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7262 | 0.0 | 0.0 | 0.8830 | 0.0 | 0.4582 | 0.5390 | 0.0 | nan | 0.0 | 0.4382 | 0.0696 | 0.0 | 0.9380 | 0.7676 | 0.9517 | 0.0 | 0.1204 | 0.2454 | 0.0 | nan | 0.7257 | 0.7493 | 0.7832 | 0.4331 | 0.2603 | nan | 0.3344 | 0.4909 | 0.0 | 0.7899 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4164 | 0.0 | 0.0 | 0.6631 | 0.0 | 0.3619 | 0.4610 | 0.0 | nan | 0.0 | 0.2358 | 0.0695 | 0.0 | 0.8268 | 0.6858 | 0.9224 | 0.0 | 0.1038 | 0.1798 | 0.0 |
| 0.3152 | 157.41 | 8500 | 0.6195 | 0.2986 | 0.3661 | 0.8115 | nan | 0.7876 | 0.8570 | 0.7994 | 0.8920 | 0.2891 | nan | 0.4035 | 0.6056 | 0.0 | 0.9417 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7090 | 0.0 | 0.0 | 0.8719 | 0.0 | 0.4959 | 0.5413 | 0.0 | nan | 0.0 | 0.4136 | 0.0566 | 0.0 | 0.9414 | 0.7717 | 0.9517 | 0.0 | 0.1198 | 0.2672 | 0.0 | nan | 0.7263 | 0.7633 | 0.7814 | 0.4550 | 0.2715 | nan | 0.3352 | 0.4721 | 0.0 | 0.7820 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4233 | 0.0 | 0.0 | 0.6671 | 0.0 | 0.3757 | 0.4677 | 0.0 | nan | 0.0 | 0.2407 | 0.0565 | 0.0 | 0.8255 | 0.6891 | 0.9216 | 0.0 | 0.1083 | 0.1912 | 0.0 |
| 0.3041 | 159.26 | 8600 | 0.5761 | 0.3071 | 0.3735 | 0.8297 | nan | 0.8077 | 0.8910 | 0.8053 | 0.8839 | 0.3353 | nan | 0.4603 | 0.6015 | 0.0 | 0.9489 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6966 | 0.0 | 0.0 | 0.8701 | 0.0 | 0.4933 | 0.5427 | 0.0 | nan | 0.0082 | 0.4481 | 0.0761 | 0.0 | 0.9301 | 0.8454 | 0.9544 | 0.0005 | 0.1062 | 0.2469 | 0.0 | nan | 0.7406 | 0.7982 | 0.7855 | 0.5184 | 0.3024 | nan | 0.3652 | 0.4669 | 0.0 | 0.7807 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4413 | 0.0 | 0.0 | 0.6853 | 0.0 | 0.3815 | 0.4553 | 0.0 | nan | 0.0082 | 0.2312 | 0.0759 | 0.0 | 0.8414 | 0.7507 | 0.9229 | 0.0005 | 0.0961 | 0.1775 | 0.0 |
| 0.3185 | 161.11 | 8700 | 0.5760 | 0.3058 | 0.3698 | 0.8296 | nan | 0.8094 | 0.8946 | 0.7956 | 0.8887 | 0.2897 | nan | 0.4223 | 0.5895 | 0.0 | 0.9357 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6889 | 0.0 | 0.0 | 0.8908 | 0.0 | 0.4640 | 0.5538 | 0.0 | nan | 0.0 | 0.4239 | 0.0692 | 0.0 | 0.9305 | 0.8418 | 0.9519 | 0.0001 | 0.1431 | 0.2510 | 0.0 | nan | 0.7455 | 0.7997 | 0.7789 | 0.5321 | 0.2717 | nan | 0.3473 | 0.4756 | 0.0 | 0.8013 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4311 | 0.0 | 0.0 | 0.6576 | 0.0 | 0.3605 | 0.4511 | 0.0 | nan | 0.0 | 0.2412 | 0.0691 | 0.0 | 0.8410 | 0.7459 | 0.9223 | 0.0001 | 0.1284 | 0.1839 | 0.0 |
| 0.2908 | 162.96 | 8800 | 0.5655 | 0.3075 | 0.3717 | 0.8316 | nan | 0.8548 | 0.8841 | 0.7997 | 0.8745 | 0.3118 | nan | 0.4610 | 0.6024 | 0.0 | 0.9410 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6931 | 0.0 | 0.0 | 0.8861 | 0.0 | 0.4534 | 0.5383 | 0.0 | nan | 0.0015 | 0.4266 | 0.0689 | 0.0 | 0.9366 | 0.8053 | 0.9554 | 0.0 | 0.1346 | 0.2641 | 0.0 | nan | 0.7595 | 0.8021 | 0.7817 | 0.5396 | 0.2919 | nan | 0.3717 | 0.4720 | 0.0 | 0.7905 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4462 | 0.0 | 0.0 | 0.6634 | 0.0 | 0.3562 | 0.4639 | 0.0 | nan | 0.0015 | 0.2393 | 0.0688 | 0.0 | 0.8346 | 0.7212 | 0.9232 | 0.0 | 0.1193 | 0.1923 | 0.0 |
| 0.3137 | 164.81 | 8900 | 0.5829 | 0.3094 | 0.3784 | 0.8279 | nan | 0.8476 | 0.8674 | 0.8118 | 0.9018 | 0.3237 | nan | 0.4801 | 0.6610 | 0.0 | 0.9387 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6851 | 0.0 | 0.0 | 0.8696 | 0.0 | 0.5109 | 0.5681 | 0.0 | nan | 0.0260 | 0.4276 | 0.0709 | 0.0 | 0.9330 | 0.8416 | 0.9554 | 0.0012 | 0.1333 | 0.2547 | 0.0 | nan | 0.7562 | 0.7893 | 0.7902 | 0.5123 | 0.3055 | nan | 0.3768 | 0.4921 | 0.0 | 0.7978 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4472 | 0.0 | 0.0 | 0.6754 | 0.0 | 0.3867 | 0.4408 | 0.0 | nan | 0.0260 | 0.2316 | 0.0708 | 0.0 | 0.8396 | 0.7418 | 0.9237 | 0.0010 | 0.1173 | 0.1797 | 0.0 |
| 0.3219 | 166.67 | 9000 | 0.5812 | 0.3065 | 0.3750 | 0.8278 | nan | 0.8354 | 0.8788 | 0.8041 | 0.8834 | 0.2990 | nan | 0.4594 | 0.6655 | 0.0 | 0.9395 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6980 | 0.0 | 0.0 | 0.8601 | 0.0 | 0.5069 | 0.5685 | 0.0 | nan | 0.0113 | 0.4156 | 0.0664 | 0.0 | 0.9440 | 0.8108 | 0.9521 | 0.0001 | 0.1291 | 0.2716 | 0.0 | nan | 0.7565 | 0.7902 | 0.7828 | 0.5219 | 0.2845 | nan | 0.3688 | 0.4922 | 0.0 | 0.7966 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4240 | 0.0 | 0.0 | 0.6768 | 0.0 | 0.3877 | 0.4481 | 0.0 | nan | 0.0113 | 0.2327 | 0.0664 | 0.0 | 0.8308 | 0.7154 | 0.9230 | 0.0001 | 0.1124 | 0.1869 | 0.0 |
| 0.3181 | 168.52 | 9100 | 0.5632 | 0.3112 | 0.3765 | 0.8367 | nan | 0.8125 | 0.9072 | 0.8124 | 0.8963 | 0.3044 | nan | 0.4647 | 0.6697 | 0.0 | 0.9359 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6879 | 0.0 | 0.0 | 0.8771 | 0.0 | 0.5085 | 0.5560 | 0.0 | nan | 0.0039 | 0.4244 | 0.0703 | 0.0 | 0.9367 | 0.8280 | 0.9532 | 0.0 | 0.1309 | 0.2672 | 0.0 | nan | 0.7474 | 0.8113 | 0.7892 | 0.5707 | 0.2882 | nan | 0.3704 | 0.5031 | 0.0 | 0.7988 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4314 | 0.0 | 0.0 | 0.6778 | 0.0 | 0.3900 | 0.4604 | 0.0 | nan | 0.0039 | 0.2372 | 0.0702 | 0.0 | 0.8390 | 0.7407 | 0.9234 | 0.0 | 0.1173 | 0.1872 | 0.0 |
| 0.3009 | 170.37 | 9200 | 0.5671 | 0.3095 | 0.3743 | 0.8326 | nan | 0.7939 | 0.9018 | 0.7926 | 0.8902 | 0.3160 | nan | 0.4603 | 0.6415 | 0.0 | 0.9414 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6804 | 0.0 | 0.0 | 0.8815 | 0.0 | 0.4974 | 0.5528 | 0.0 | nan | 0.0000 | 0.4233 | 0.0749 | 0.0 | 0.9339 | 0.8322 | 0.9566 | 0.0 | 0.1296 | 0.2770 | 0.0 | nan | 0.7279 | 0.8041 | 0.7736 | 0.5652 | 0.2951 | nan | 0.3698 | 0.4960 | 0.0 | 0.7938 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4395 | 0.0 | 0.0 | 0.6714 | 0.0 | 0.3837 | 0.4627 | 0.0 | nan | 0.0000 | 0.2368 | 0.0747 | 0.0 | 0.8379 | 0.7389 | 0.9235 | 0.0 | 0.1161 | 0.1946 | 0.0 |
| 0.2873 | 172.22 | 9300 | 0.6113 | 0.3047 | 0.3720 | 0.8176 | nan | 0.8107 | 0.8536 | 0.7603 | 0.8949 | 0.3232 | nan | 0.4761 | 0.6422 | 0.0 | 0.9415 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6799 | 0.0 | 0.0 | 0.8720 | 0.0 | 0.5023 | 0.5457 | 0.0 | nan | 0.0034 | 0.4146 | 0.0717 | 0.0 | 0.9439 | 0.8035 | 0.9521 | 0.0 | 0.1299 | 0.2839 | 0.0 | nan | 0.7355 | 0.7675 | 0.7422 | 0.4826 | 0.3027 | nan | 0.3715 | 0.4933 | 0.0 | 0.7896 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4421 | 0.0 | 0.0 | 0.6666 | 0.0 | 0.3881 | 0.4723 | 0.0 | nan | 0.0034 | 0.2350 | 0.0716 | 0.0 | 0.8305 | 0.7183 | 0.9229 | 0.0 | 0.1152 | 0.1992 | 0.0 |
| 0.2856 | 174.07 | 9400 | 0.6091 | 0.3045 | 0.3713 | 0.8183 | nan | 0.8177 | 0.8508 | 0.7884 | 0.9070 | 0.3274 | nan | 0.4412 | 0.5971 | 0.0 | 0.9437 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6904 | 0.0 | 0.0 | 0.8760 | 0.0 | 0.5037 | 0.5471 | 0.0 | nan | 0.0023 | 0.4093 | 0.0729 | 0.0 | 0.9395 | 0.8289 | 0.9513 | 0.0000 | 0.1123 | 0.2745 | 0.0 | nan | 0.7401 | 0.7694 | 0.7705 | 0.4745 | 0.3070 | nan | 0.3570 | 0.4797 | 0.0 | 0.7901 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4370 | 0.0 | 0.0 | 0.6642 | 0.0 | 0.3879 | 0.4663 | 0.0 | nan | 0.0023 | 0.2356 | 0.0728 | 0.0 | 0.8358 | 0.7333 | 0.9230 | 0.0000 | 0.1034 | 0.1937 | 0.0 |
| 0.2803 | 175.93 | 9500 | 0.6404 | 0.3009 | 0.3704 | 0.8084 | nan | 0.8365 | 0.8208 | 0.7833 | 0.9062 | 0.3050 | nan | 0.4405 | 0.6203 | 0.0 | 0.9443 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6940 | 0.0 | 0.0 | 0.8667 | 0.0 | 0.5055 | 0.5494 | 0.0 | nan | 0.0084 | 0.4148 | 0.0772 | 0.0 | 0.9424 | 0.8074 | 0.9551 | 0.0001 | 0.1077 | 0.2664 | 0.0 | nan | 0.7454 | 0.7459 | 0.7680 | 0.4316 | 0.2897 | nan | 0.3571 | 0.4866 | 0.0 | 0.7930 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4255 | 0.0 | 0.0 | 0.6652 | 0.0 | 0.3877 | 0.4601 | 0.0 | nan | 0.0084 | 0.2306 | 0.0771 | 0.0 | 0.8314 | 0.7178 | 0.9235 | 0.0001 | 0.0969 | 0.1889 | 0.0 |
| 0.2924 | 177.78 | 9600 | 0.6156 | 0.3045 | 0.3723 | 0.8156 | nan | 0.8293 | 0.8420 | 0.8051 | 0.8964 | 0.3365 | nan | 0.4651 | 0.6281 | 0.0 | 0.9443 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6806 | 0.0 | 0.0 | 0.8777 | 0.0 | 0.4957 | 0.5434 | 0.0 | nan | 0.0043 | 0.4293 | 0.0774 | 0.0 | 0.9387 | 0.7942 | 0.9562 | 0.0 | 0.1178 | 0.2514 | 0.0 | nan | 0.7508 | 0.7606 | 0.7848 | 0.4617 | 0.3134 | nan | 0.3712 | 0.4903 | 0.0 | 0.7912 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4384 | 0.0 | 0.0 | 0.6666 | 0.0 | 0.3850 | 0.4648 | 0.0 | nan | 0.0043 | 0.2308 | 0.0773 | 0.0 | 0.8320 | 0.7126 | 0.9232 | 0.0 | 0.1028 | 0.1836 | 0.0 |
| 0.2911 | 179.63 | 9700 | 0.6039 | 0.3051 | 0.3743 | 0.8197 | nan | 0.8161 | 0.8573 | 0.8009 | 0.9013 | 0.3091 | nan | 0.4597 | 0.6407 | 0.0 | 0.9406 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7191 | 0.0 | 0.0 | 0.8787 | 0.0 | 0.5007 | 0.5561 | 0.0 | nan | 0.0046 | 0.4187 | 0.0825 | 0.0 | 0.9325 | 0.8335 | 0.9578 | 0.0000 | 0.1036 | 0.2642 | 0.0 | nan | 0.7434 | 0.7687 | 0.7825 | 0.4751 | 0.2917 | nan | 0.3667 | 0.4994 | 0.0 | 0.7998 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4127 | 0.0 | 0.0 | 0.6761 | 0.0 | 0.3878 | 0.4561 | 0.0 | nan | 0.0046 | 0.2352 | 0.0823 | 0.0 | 0.8393 | 0.7401 | 0.9235 | 0.0000 | 0.0883 | 0.1885 | 0.0 |
| 0.3093 | 181.48 | 9800 | 0.6244 | 0.3021 | 0.3707 | 0.8132 | nan | 0.8240 | 0.8367 | 0.7819 | 0.9031 | 0.3158 | nan | 0.4523 | 0.6336 | 0.0 | 0.9419 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7047 | 0.0 | 0.0 | 0.8782 | 0.0 | 0.5024 | 0.5478 | 0.0 | nan | 0.0 | 0.4039 | 0.0761 | 0.0 | 0.9422 | 0.8036 | 0.9524 | 0.0 | 0.0992 | 0.2629 | 0.0 | nan | 0.7414 | 0.7575 | 0.7666 | 0.4537 | 0.2990 | nan | 0.3642 | 0.4913 | 0.0 | 0.7906 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4261 | 0.0 | 0.0 | 0.6655 | 0.0 | 0.3892 | 0.4639 | 0.0 | nan | 0.0 | 0.2339 | 0.0760 | 0.0 | 0.8311 | 0.7168 | 0.9226 | 0.0 | 0.0873 | 0.1892 | 0.0 |
| 0.3194 | 183.33 | 9900 | 0.6384 | 0.3015 | 0.3707 | 0.8106 | nan | 0.8269 | 0.8295 | 0.7809 | 0.9036 | 0.3169 | nan | 0.4373 | 0.6407 | 0.0 | 0.9394 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7004 | 0.0 | 0.0 | 0.8774 | 0.0 | 0.4936 | 0.5511 | 0.0 | nan | 0.0004 | 0.4210 | 0.0726 | 0.0 | 0.9434 | 0.8072 | 0.9462 | 0.0 | 0.1149 | 0.2605 | 0.0 | nan | 0.7423 | 0.7508 | 0.7639 | 0.4418 | 0.2988 | nan | 0.3584 | 0.4963 | 0.0 | 0.7976 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4212 | 0.0 | 0.0 | 0.6662 | 0.0 | 0.3830 | 0.4618 | 0.0 | nan | 0.0004 | 0.2347 | 0.0725 | 0.0 | 0.8311 | 0.7208 | 0.9214 | 0.0 | 0.0993 | 0.1875 | 0.0 |
| 0.3174 | 185.19 | 10000 | 0.6350 | 0.3022 | 0.3724 | 0.8117 | nan | 0.8240 | 0.8308 | 0.7789 | 0.9052 | 0.3152 | nan | 0.4703 | 0.6444 | 0.0 | 0.9424 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7116 | 0.0 | 0.0 | 0.8716 | 0.0 | 0.4736 | 0.5408 | 0.0 | nan | 0.0048 | 0.4202 | 0.0754 | 0.0 | 0.9437 | 0.8196 | 0.9525 | 0.0 | 0.1041 | 0.2872 | 0.0 | nan | 0.7413 | 0.7520 | 0.7629 | 0.4453 | 0.2976 | nan | 0.3701 | 0.4953 | 0.0 | 0.7962 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4152 | 0.0 | 0.0 | 0.6712 | 0.0 | 0.3749 | 0.4613 | 0.0 | nan | 0.0048 | 0.2337 | 0.0753 | 0.0 | 0.8324 | 0.7277 | 0.9234 | 0.0 | 0.0913 | 0.1997 | 0.0 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
bcd67bfd3b7536cb7108dd262af7691c
|
mkhairil/autotrain-text-sentiment-indonlu-smse-2885384370
|
mkhairil
|
bert
| 8 | 49 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['unk', 'id']
|
['mkhairil/autotrain-data-text-sentiment-indonlu-smse']
|
{'emissions': 5.395117116799661}
| 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['autotrain', 'text-classification']
| false | true | true | 1,265 | false |
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Fine-tuned on the indonlp/indonlu dataset (10,000 rows from https://huggingface.co/datasets/indonlp/indonlu/viewer/smsa/train)
- Model ID: 2885384370
- CO2 Emissions (in grams): 5.3951
## Validation Metrics
- Loss: 0.270
- Accuracy: 0.900
- Macro F1: 0.866
- Micro F1: 0.900
- Weighted F1: 0.899
- Macro Precision: 0.874
- Micro Precision: 0.900
- Weighted Precision: 0.899
- Macro Recall: 0.859
- Micro Recall: 0.900
- Weighted Recall: 0.900
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/mkhairil/autotrain-text-sentiment-indonlu-smse-2885384370
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned sentiment classifier and its tokenizer
model = AutoModelForSequenceClassification.from_pretrained("mkhairil/autotrain-text-sentiment-indonlu-smse-2885384370", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("mkhairil/autotrain-text-sentiment-indonlu-smse-2885384370", use_auth_token=True)

# Tokenize the input text and run a forward pass
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
predicted_class = outputs.logits.argmax(dim=-1).item()  # index of the predicted label
```
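The index in `predicted_class` from the snippet above can be mapped back to a human-readable sentiment label via `model.config.id2label`; the exact label names depend on the AutoTrain label encoding for this project.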
|
200b8d91ed50cc92bd9df8466cb35e3c
|
gokuls/distilbert_add_GLUE_Experiment_logit_kd_qnli_256
|
gokuls
|
distilbert
| 17 | 2 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,752 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_add_GLUE_Experiment_logit_kd_qnli_256
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3989
- Accuracy: 0.5874
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4156 | 1.0 | 410 | 0.4111 | 0.5054 |
| 0.4078 | 2.0 | 820 | 0.4018 | 0.5799 |
| 0.3962 | 3.0 | 1230 | 0.3989 | 0.5874 |
| 0.3899 | 4.0 | 1640 | 0.4018 | 0.5867 |
| 0.3851 | 5.0 | 2050 | 0.4032 | 0.5799 |
| 0.3802 | 6.0 | 2460 | 0.4118 | 0.5728 |
| 0.3762 | 7.0 | 2870 | 0.4093 | 0.5718 |
| 0.3717 | 8.0 | 3280 | 0.4100 | 0.5737 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
04091592db31fd9b89de156f45235ecb
|
DrishtiSharma/whisper-large-v2-kk-v1
|
DrishtiSharma
|
whisper
| 15 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['kk']
|
['mozilla-foundation/common_voice_11_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['whisper-event', 'generated_from_trainer']
| true | true | true | 1,313 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large Kazakh - Drishti Sharma
This model is a fine-tuned version of [openai/whisper-large](https://huggingface.co/openai/whisper-large) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4794
- Wer: 35.5486
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
- mixed_precision_training: Native AMP
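For reference, the hyperparameters above map roughly onto a `Seq2SeqTrainingArguments` configuration like the sketch below; `output_dir` and anything not listed above are assumptions, not values taken from the original run:
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v2-kk-v1",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=1000,
    fp16=True,  # "Native AMP" mixed precision
)
```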
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0007 | 10.0 | 1000 | 0.4794 | 35.5486 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
7667c53bd3bf3c3fdbd9f1ffaaa5e962
|
seonghyeonye/flipped_3B
|
seonghyeonye
|
t5
| 9 | 9 |
transformers
| 3 |
text2text-generation
| true | false | false |
apache-2.0
|
['en']
|
['bigscience/P3']
| null | 2 | 0 | 2 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 5,687 | false |
**Official repository**: [seonghyeonye/Flipped-Learning](https://github.com/seonghyeonye/Flipped-Learning)
# Model Description
FLIPPED uses a unique meta-learning method to show zero-shot task generalization on classification natural language prompts, outperforming GPT-3 and T0-11B on many tasks at a 4x smaller scale.
It is a series of encoder-decoder models trained on numerous classification datasets. We show FLIPPED the input and the corresponding output of each instance in each dataset, and train it to generate the instruction that could have produced them. We add an unlikelihood loss so that the model does **not** generate the instruction when given the same input paired with a wrong output. To obtain FLIPPED, we fine-tune a T5 model of a given scale on a multitask mixture covering many different classification NLP tasks.
# Intended uses
You can use the models to perform inference on tasks by specifying your input-output NLP query in an "input: {input}\noutput: {output}" form, and the model will predict the instruction. For example, you can try
*"input: <extra_id_0> this is the best cast iron skillet you will ever buy<extra_id_1>\noutput: Positive"*
as an input, and the model will hopefully generate *"Title: Review:"*.
# How to use
An overall explanation of our models, along with ablations, can be found in our [paper](https://arxiv.org/abs/2210.02969). We recommend using the [FLIPPED-11B](https://huggingface.co/seonghyeonye/flipped_11B) checkpoint as it leads (on average) to the best performance on a variety of NLP tasks.
|Model|Number of parameters|
|-|-|
|[Flipped_11B](https://huggingface.co/seonghyeonye/flipped_11B)|11 billion|
|[Flipped_3B](https://huggingface.co/seonghyeonye/flipped_3B)|3 billion|
Here is how to download the model in PyTorch:
```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration
model = T5ForConditionalGeneration.from_pretrained("seonghyeonye/flipped_3B")
tokenizer = T5Tokenizer.from_pretrained("seonghyeonye/flipped_3B")
```
If you want to use another checkpoint, please replace the path in `T5Tokenizer` and `T5ForConditionalGeneration`.
We also provide a quick [Jupyter Notebook](https://github.com/seonghyeonye/Flipped-Learning/blob/master/flipped_inference.ipynb) where you can inference with our method.
**Note: the model was trained with fp32 activations. As such, we highly discourage running inference with fp16.**
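To make the query format from the Intended uses section concrete, here is a minimal inference sketch; the generation settings (`max_new_tokens`) are illustrative assumptions rather than recommended values:
```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("seonghyeonye/flipped_3B")
model = T5ForConditionalGeneration.from_pretrained("seonghyeonye/flipped_3B")

# Query in the "input: {input}\noutput: {output}" form; the model predicts the instruction.
query = ("input: <extra_id_0> this is the best cast iron skillet you will ever buy<extra_id_1>\n"
         "output: Positive")
inputs = tokenizer(query, return_tensors="pt")
with torch.no_grad():
    generated = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(generated[0], skip_special_tokens=True))  # e.g. "Title: Review:"
```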
# Training procedure
FLIPPED models are based on [T5](https://huggingface.co/google/t5-v1_1-xl), a Transformer-based encoder-decoder language model pre-trained with a masked language modeling-style objective on [C4](https://huggingface.co/datasets/c4).
At a high level, the input text along with the output label is fed to the encoder, and the instruction text is produced by the decoder. The model is fine-tuned to autoregressively generate this target. We also feed the input text paired with a wrong output, adding an unlikelihood loss so that the model does not produce the proper instruction in that case. Here are our training details.
Training details:
- Fine-tuning steps: 5'000
- Input sequence length: 512
- Target sequence length: 128
- Batch size: 240
- Optimizer: Adafactor
- Learning rate: 5e-5
- Dropout: 0.1
- Sampling strategy: proportional to the number of examples in each dataset (any dataset with over 500'000 examples was randomly subsampled to at most 500'000 examples; we also randomly choose which instruction to generate at each training step, so ideally each instruction appears *num_examples/num_templates* times during training)
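As a rough illustration of the objective described above (likelihood on the instruction for correct input-output pairs, unlikelihood for the same input with a wrong output), a hypothetical sketch is given below; the batch layout, the `ul_weight` coefficient, and all helper names are assumptions and not the authors' released training code:
```python
import torch
import torch.nn.functional as F

def flipped_style_loss(model, correct_batch, wrong_batch, ul_weight=1.0):
    """Likelihood on the instruction for (input, correct output) pairs plus an
    unlikelihood term for the same instruction given (input, wrong output)."""
    # Standard seq2seq cross-entropy: generate the instruction from the correct pair.
    lm_loss = model(input_ids=correct_batch["input_ids"],
                    attention_mask=correct_batch["attention_mask"],
                    labels=correct_batch["labels"]).loss

    # Unlikelihood: penalize probability mass the model puts on the instruction
    # tokens when conditioned on the same input paired with a wrong output.
    ul_logits = model(input_ids=wrong_batch["input_ids"],
                      attention_mask=wrong_batch["attention_mask"],
                      labels=wrong_batch["labels"]).logits
    labels = wrong_batch["labels"]
    log_probs = F.log_softmax(ul_logits, dim=-1)
    token_logp = log_probs.gather(-1, labels.clamp(min=0).unsqueeze(-1)).squeeze(-1)
    mask = (labels != -100).float()           # ignore padding positions
    p = token_logp.exp().clamp(max=1 - 1e-6)  # numerical stability
    ul_loss = -(torch.log1p(-p) * mask).sum() / mask.sum().clamp(min=1.0)

    return lm_loss + ul_weight * ul_loss
```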
# Training data
We trained different FLIPPED variants with different mixtures of datasets.
|Model|Training datasets|
|--|--|
|FLIPPED_11B|- Multiple-Choice QA: CommonsenseQA, DREAM, QUAIL, QuaRTz, Social IQA, WiQA, Cosmos, QASC, Quarel, SciQ<br>- Sentiment: Amazon, App Reviews, IMDB, Rotten Tomatoes, Yelp<br>- Topic Classification: AG News, DBPedia<br>- Paraphrase Identification: MRPC, PAWS, QQP|
|FLIPPED_3B|Same as FLIPPED_11B|
We only choose prompt examples that have output labels, which can be found on the dataset page.
# Evaluation data
We evaluate our models on following datasets:
|Task category|Datasets|
|-|-|
|Natural language inference|ANLI(R1, R2, R3), CB, RTE|
|Coreference resolution|WSC, Winogrande|
|Word sense disambiguation|WiC|
|Sentence completion|COPA, HellaSwag, Story Cloze|
|QA|PIQA, ARC-Challenge, OpenbookQA|
We also evaluate FLIPPED on a subset of the [BIG-bench benchmark](https://github.com/google/BIG-bench):
- Code description task
- Conceptual combinations
- Hindu knowledge json
- Known unknowns
- Language identification
- Logic grid puzzle task
- Logical deduction
- Common misconceptions
- Movie dialog same or different
- Novel concepts
- Strategyqa
- Formal fallacies syllogisms negation
- VitaminC
- Winowhy multiple choice
# Label generalization
We evaluate the robustness of the models on the following datasets by changing their output labels. The substitute words can be found in our [paper](https://arxiv.org/abs/2210.02969).
|Task category|(Datasets, Template name)|
|-|-|
|Unseen tasks|(WSC, does the pronoun refer to), (CB, can we infer), (RTE, MNLI crowdsource)|
|Seen tasks|(IMDB, Reviewer Enjoyment Yes No), (PAWS, Meaning) |
The template names we used can be found in the [promptsource template library](https://github.com/bigscience-workshop/promptsource/tree/main/promptsource/templates).
# BibTeX entry and citation info
```bibtex
@article{ye2022guess,
title={Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners},
author={Ye, Seonghyeon and Kim, Doyoung and Jang, Joel and Shin, Joongbo and Seo, Minjoon},
journal={arXiv preprint arXiv:2210.02969},
year={2022}
}
```
|
40038816cad97d3801810e8f78f405f9
|
robkayinto/xlm-roberta-base-finetuned-panx-de
|
robkayinto
|
xlm-roberta
| 12 | 1 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null |
['xtreme']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,320 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1397
- F1: 0.8609
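A minimal way to try the checkpoint for German NER is the standard `token-classification` pipeline; this is a usage sketch rather than part of the original card, with the model id taken from this repository:
```python
from transformers import pipeline

# Aggregation merges word pieces back into whole entities.
ner = pipeline(
    "token-classification",
    model="robkayinto/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```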
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
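These settings correspond roughly to the `TrainingArguments` below; this is a reconstruction from the list above, not the original training script:
```python
from transformers import TrainingArguments

# Reconstructed from the hyperparameters listed above (illustrative only).
training_args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-panx-de",
    learning_rate=5e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```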
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2593 | 1.0 | 525 | 0.1628 | 0.8254 |
| 0.1291 | 2.0 | 1050 | 0.1420 | 0.8450 |
| 0.0817 | 3.0 | 1575 | 0.1397 | 0.8609 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.2+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
438afdb581992f6317404ce0ac5cbdcc
|
jonatasgrosman/exp_w2v2t_es_unispeech-sat_s42
|
jonatasgrosman
|
unispeech-sat
| 10 | 340 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['es']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'es']
| false | true | true | 462 | false |
# exp_w2v2t_es_unispeech-sat_s42
Fine-tuned [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
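A minimal transcription sketch with HuggingSound follows, assuming the `SpeechRecognitionModel` API shown in that project's README; the audio paths are placeholders:
```python
from huggingsound import SpeechRecognitionModel

# Input audio should be sampled at 16 kHz, as noted above.
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_es_unispeech-sat_s42")
transcriptions = model.transcribe(["/path/to/file1.wav", "/path/to/file2.wav"])
print(transcriptions)
```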
|
2aea92017845a667a4ff308c33445914
|
AnnihilationOperator/ofa-huge-caption
|
AnnihilationOperator
|
ofa
| 6 | 0 |
transformers
| 0 | null | true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 3,728 | false |
# OFA-huge-caption
This is the **huge** version of OFA pretrained model finetuned on COCO captioning task, forked & converted from the [original fairseq version](https://ofa-beijing.oss-cn-beijing.aliyuncs.com/checkpoints/caption_huge_best.pt) and compressed into float16.
The conversion script is custom, but the procedure described in [Issue #171](https://github.com/OFA-Sys/OFA/issues/171) should also apply (quantization is not performed, but that step is trivial).
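As an illustration of the float16 compression step only (the actual conversion script is custom and not published here, and the file names are placeholders):
```python
import torch

# Load a converted fp32 checkpoint and re-save its floating-point tensors as float16.
state_dict = torch.load("pytorch_model_fp32.bin", map_location="cpu")
state_dict = {k: (v.half() if v.is_floating_point() else v) for k, v in state_dict.items()}
torch.save(state_dict, "pytorch_model.bin")
```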
You will need an [OFA-modified version of transformers](https://github.com/OFA-Sys/OFA/tree/feature/add_transformers) to use this model. No idea why it is still not in master. Tip: you can copy the `transformers` folder into your project, rename it, and then monkey-patch the `transformers` module to point to your local copy, which avoids having to install it.
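A hypothetical sketch of that tip is shown below; `ofa_transformers` is a placeholder name for the copied and renamed folder, and this aliasing only covers top-level imports:
```python
import sys

import ofa_transformers  # placeholder: the copied & renamed OFA fork of `transformers`

# Make subsequent `import transformers` statements resolve to the local fork.
sys.modules["transformers"] = ofa_transformers

from transformers import OFAModel, OFATokenizer  # now served by the fork
```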
## Original README below
## Introduction
This is the **huge** version of OFA pretrained model. OFA is a unified multimodal pretrained model that unifies modalities (i.e., cross-modality, vision, language) and tasks (e.g., image generation, visual grounding, image captioning, image classification, text generation, etc.) to a simple sequence-to-sequence learning framework.
The directory includes 4 files, namely `config.json`, which contains the model configuration, `vocab.json` and `merge.txt` for our OFA tokenizer, and lastly `pytorch_model.bin`, which contains the model weights. There is no need to worry about a mismatch between Fairseq and transformers, since we have already addressed the issue.
## How to use
To use it in transformers, please refer to <https://github.com/OFA-Sys/OFA/tree/feature/add_transformers>. Install transformers and download the models as shown below.
```bash
git clone --single-branch --branch feature/add_transformers https://github.com/OFA-Sys/OFA.git
pip install OFA/transformers/
git clone https://huggingface.co/OFA-Sys/OFA-huge
```
Afterwards, set `ckpt_dir` to the path of the downloaded OFA-huge checkpoint, and prepare an image for the test example below. Also, ensure that you have pillow and torchvision installed in your environment.
```python
>>> import torch
>>> from PIL import Image
>>> from torchvision import transforms
>>> from transformers import OFATokenizer, OFAModel
>>> from generate import sequence_generator
>>> mean, std = [0.5, 0.5, 0.5], [0.5, 0.5, 0.5]
>>> resolution = 480
>>> patch_resize_transform = transforms.Compose([
lambda image: image.convert("RGB"),
transforms.Resize((resolution, resolution), interpolation=Image.BICUBIC),
transforms.ToTensor(),
transforms.Normalize(mean=mean, std=std)
])
>>> tokenizer = OFATokenizer.from_pretrained(ckpt_dir)
>>> txt = " what does the image describe?"
>>> inputs = tokenizer([txt], return_tensors="pt").input_ids
>>> img = Image.open(path_to_image)
>>> patch_img = patch_resize_transform(img).unsqueeze(0)
# using the generator of fairseq version
>>> model = OFAModel.from_pretrained(ckpt_dir, use_cache=True)
>>> generator = sequence_generator.SequenceGenerator(
tokenizer=tokenizer,
beam_size=5,
max_len_b=16,
min_len=0,
no_repeat_ngram_size=3,
)
>>> data = {}
>>> data["net_input"] = {"input_ids": inputs, 'patch_images': patch_img, 'patch_masks':torch.tensor([True])}
>>> gen_output = generator.generate([model], data)
>>> gen = [gen_output[i][0]["tokens"] for i in range(len(gen_output))]
# using the generator of huggingface version
>>> model = OFAModel.from_pretrained(ckpt_dir, use_cache=False)
>>> gen = model.generate(inputs, patch_images=patch_img, num_beams=5, no_repeat_ngram_size=3)
>>> print(tokenizer.batch_decode(gen, skip_special_tokens=True))
```
|
82a3a5d796f4566cc3498ee89933c14f
|