modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
santhoshkolloju/ans_gen | 1fbf3881a12a7cc49819ce4e386d6f4458be263a | 2021-06-23T14:06:04.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | santhoshkolloju | null | santhoshkolloju/ans_gen | 4 | null | transformers | 18,900 | Entry not found |
saraks/cuad-distil-effective_date-08-29-v1 | c16eb4b7e56f52b325a52ec82d3ed97da7b46d2a | 2021-08-28T05:39:15.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | saraks | null | saraks/cuad-distil-effective_date-08-29-v1 | 4 | null | transformers | 18,901 | Entry not found |
saraks/cuad-distil-governing_law-08-28-v1 | 66c3e4e06224f5ca4c66e8785ba89de54aef44ba | 2021-08-27T18:58:20.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | saraks | null | saraks/cuad-distil-governing_law-08-28-v1 | 4 | null | transformers | 18,902 | Entry not found |
sarnikowski/electra-small-discriminator-da-256-cased | 290fcaff649803175cc4a02b295913820cde239a | 2020-12-11T22:01:11.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"da",
"arxiv:2003.10555",
"transformers",
"license:cc-by-4.0"
] | null | false | sarnikowski | null | sarnikowski/electra-small-discriminator-da-256-cased | 4 | null | transformers | 18,903 | ---
language: da
license: cc-by-4.0
---
# Danish ELECTRA small (cased)
An [ELECTRA](https://arxiv.org/abs/2003.10555) model pretrained on a custom Danish corpus (~17.5 GB).
For details regarding data sources and training procedure, along with benchmarks on downstream tasks, go to: https://github.com/sarnikowski/danish_transformers/tree/main/electra
## Usage
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("sarnikowski/electra-small-discriminator-da-256-cased")
model = AutoModel.from_pretrained("sarnikowski/electra-small-discriminator-da-256-cased")
```
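As an added illustration (not part of the original card), the loaded model can then be used to embed a sentence; the Danish example sentence below is made up, and the 256-dimensional hidden size is inferred from the model name:
```python
import torch

# Continuing the snippet above: encode a sentence and inspect the final hidden states.
inputs = tokenizer("Jeg elsker at læse bøger.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 256)
```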
## Questions?
If you have any questions feel free to open an issue on the [danish_transformers](https://github.com/sarnikowski/danish_transformers) repository, or send an email to [email protected]
|
sarraf/distilbert-base-uncased-finetuned-cola | 6f1e3e760948ce5b0b965af50a1ca0ff29e83ddd | 2022-01-20T20:04:18.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | sarraf | null | sarraf/distilbert-base-uncased-finetuned-cola | 4 | null | transformers | 18,904 | Entry not found |
sbiswal/OdiaBert | f6a27331664ff51f1ee3fe9624e4e315516df176 | 2021-05-20T05:05:41.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | sbiswal | null | sbiswal/OdiaBert | 4 | null | transformers | 18,905 | Entry not found |
seduerr/pai_meaningfulness | f619616058be2ddd50d528e2892ceca739ef729a | 2021-06-23T14:15:30.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | seduerr | null | seduerr/pai_meaningfulness | 4 | null | transformers | 18,906 | Entry not found |
seokho/gpt2-emotion | 52608b95baf439ba8b5278ea9c25fed7ff85acfb | 2021-07-06T06:07:33.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | seokho | null | seokho/gpt2-emotion | 4 | null | transformers | 18,907 | dataset: Emotion Detection from Text |
sergunow/rick-sanchez-blenderbot-400-distill | 57040b9c0816b9d5adb6ba5e73b8a9214cc3cf68 | 2021-06-17T22:33:55.000Z | [
"pytorch",
"blenderbot",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sergunow | null | sergunow/rick-sanchez-blenderbot-400-distill | 4 | null | transformers | 18,908 | Entry not found |
seyonec/ChemBERTA_PubChem1M_shard00_140k | 0f62cb969ef32c5d4a8915c6dfa2582f41681e21 | 2021-05-20T20:53:19.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | seyonec | null | seyonec/ChemBERTA_PubChem1M_shard00_140k | 4 | null | transformers | 18,909 | Entry not found |
seyonec/SMILES_BPE_PubChem_250k | 4bea109c2782909b30d148fb71c8eb5d9ed227da | 2021-05-20T21:06:00.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | seyonec | null | seyonec/SMILES_BPE_PubChem_250k | 4 | null | transformers | 18,910 | Entry not found |
seyonec/SmilesTokenizer_ChemBERTa_zinc250k_40k | a91bfd6c2d87e22755a45f77388ce0321af937b3 | 2021-05-20T21:11:20.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | seyonec | null | seyonec/SmilesTokenizer_ChemBERTa_zinc250k_40k | 4 | null | transformers | 18,911 | Entry not found |
sgugger/debug-example2 | d3d07ba8280bd9c120173f74ff54f7a42e6fb971 | 2022-01-27T13:46:00.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | sgugger | null | sgugger/debug-example2 | 4 | null | transformers | 18,912 | Entry not found |
sgugger/marian-finetuned-kde4-en-to-fr | a25f49ba1276b49ee045005cbb9ed312415c785b | 2021-09-28T13:47:35.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"dataset:kde4",
"transformers",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | translation | false | sgugger | null | sgugger/marian-finetuned-kde4-en-to-fr | 4 | null | transformers | 18,913 | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 53.2503
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8666
- Bleu: 53.2503
- Gen Len: 14.7005
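As a usage illustration (this block is not part of the generated card), a minimal sketch of translating with the checkpoint via the `transformers` pipeline; the English example sentence is made up:
```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub and translate English to French.
translator = pipeline("translation", model="sgugger/marian-finetuned-kde4-en-to-fr")

result = translator("Default to expanded threads")
print(result[0]["translation_text"])
```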
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.1+cu111
- Datasets 1.12.2.dev0
- Tokenizers 0.10.3
|
shahukareem/wav2vec2-xls-r-1b-dv-with-lm-v2 | 060a9f0c52ee8f72278480a72b461ecce4f8a416 | 2022-02-18T23:07:45.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | shahukareem | null | shahukareem/wav2vec2-xls-r-1b-dv-with-lm-v2 | 4 | null | transformers | 18,914 | Entry not found |
shauryr/checkpoint-475000 | 6d7e1fe6c96b72eee76716114a70368cfb35f353 | 2021-05-20T21:17:36.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | shauryr | null | shauryr/checkpoint-475000 | 4 | null | transformers | 18,915 | Entry not found |
shivam/wav2vec2-xls-r-300m-hindi | 8c286c16ef1f835d996c33f3d9e8b8a08dd3cf51 | 2022-01-23T16:37:08.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hi",
"dataset:common_voice",
"transformers",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | shivam | null | shivam/wav2vec2-xls-r-300m-hindi | 4 | 1 | transformers | 18,916 | ---
language:
- hi
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4031
- Wer: 0.6827
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
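For orientation only, the listed hyperparameters map roughly onto `transformers.TrainingArguments` as sketched below; this is a hedged reconstruction, not the original training script, and the output directory is a hypothetical placeholder:
```python
from transformers import TrainingArguments

# Approximate mapping of the hyperparameters listed above (illustrative, not the exact script used).
training_args = TrainingArguments(
    output_dir="./wav2vec2-xls-r-300m-hindi",  # hypothetical path
    learning_rate=7.5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # gives the effective total train batch size of 32
    warmup_steps=2000,
    num_train_epochs=100,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,  # "Native AMP" mixed precision
)
```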
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 5.3156 | 3.4 | 500 | 4.5583 | 1.0 |
| 3.3329 | 6.8 | 1000 | 3.4274 | 1.0001 |
| 2.1275 | 10.2 | 1500 | 1.7221 | 0.8763 |
| 1.5737 | 13.6 | 2000 | 1.4188 | 0.8143 |
| 1.3835 | 17.01 | 2500 | 1.2251 | 0.7447 |
| 1.3247 | 20.41 | 3000 | 1.2827 | 0.7394 |
| 1.231 | 23.81 | 3500 | 1.2216 | 0.7074 |
| 1.1819 | 27.21 | 4000 | 1.2210 | 0.6863 |
| 1.1546 | 30.61 | 4500 | 1.3233 | 0.7308 |
| 1.0902 | 34.01 | 5000 | 1.3251 | 0.7010 |
| 1.0749 | 37.41 | 5500 | 1.3274 | 0.7235 |
| 1.0412 | 40.81 | 6000 | 1.2942 | 0.6856 |
| 1.0064 | 44.22 | 6500 | 1.2581 | 0.6732 |
| 1.0006 | 47.62 | 7000 | 1.2767 | 0.6885 |
| 0.9518 | 51.02 | 7500 | 1.2966 | 0.6925 |
| 0.9514 | 54.42 | 8000 | 1.2981 | 0.7067 |
| 0.9241 | 57.82 | 8500 | 1.3835 | 0.7124 |
| 0.9059 | 61.22 | 9000 | 1.3318 | 0.7083 |
| 0.8906 | 64.62 | 9500 | 1.3640 | 0.6962 |
| 0.8468 | 68.03 | 10000 | 1.4727 | 0.6982 |
| 0.8631 | 71.43 | 10500 | 1.3401 | 0.6809 |
| 0.8154 | 74.83 | 11000 | 1.4124 | 0.6955 |
| 0.7953 | 78.23 | 11500 | 1.4245 | 0.6950 |
| 0.818 | 81.63 | 12000 | 1.3944 | 0.6995 |
| 0.7772 | 85.03 | 12500 | 1.3735 | 0.6785 |
| 0.7857 | 88.43 | 13000 | 1.3696 | 0.6808 |
| 0.7705 | 91.84 | 13500 | 1.4101 | 0.6870 |
| 0.7537 | 95.24 | 14000 | 1.4178 | 0.6832 |
| 0.7734 | 98.64 | 14500 | 1.4027 | 0.6831 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu113
- Datasets 1.18.1.dev0
- Tokenizers 0.11.0
|
shivam/xls-r-300m-marathi | 0f36c3c0f95447e201c33748025bf7f0a617180a | 2022-03-23T18:29:32.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mr",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | shivam | null | shivam/xls-r-300m-marathi | 4 | null | transformers | 18,917 | ---
language:
- mr
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- mr
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: ''
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Common Voice Corpus 8.0
type: mozilla-foundation/common_voice_8_0
args: mr
metrics:
- name: Test WER
type: wer
value: 38.27
- name: Test CER
type: cer
value: 8.91
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MR dataset.
It achieves the following results on the mozilla-foundation/common_voice_8_0 mr test set:
- Without LM
+ WER: 48.53
+ CER: 10.63
- With LM
+ WER: 38.27
+ CER: 8.91
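As an illustration added here (not from the original card), a minimal transcription sketch using the ASR pipeline; the audio path is a placeholder, and 16 kHz mono input is assumed as usual for wav2vec 2.0 models:
```python
from transformers import pipeline

# Transcribe a local Marathi recording (placeholder path); wav2vec 2.0 checkpoints expect 16 kHz audio.
asr = pipeline("automatic-speech-recognition", model="shivam/xls-r-300m-marathi")

transcription = asr("marathi_sample_16khz.wav")
print(transcription["text"])
```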
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 400.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 4.2706 | 22.73 | 500 | 4.0174 | 1.0 |
| 3.2492 | 45.45 | 1000 | 3.2309 | 0.9908 |
| 1.9709 | 68.18 | 1500 | 1.0651 | 0.8440 |
| 1.4088 | 90.91 | 2000 | 0.5765 | 0.6550 |
| 1.1326 | 113.64 | 2500 | 0.4842 | 0.5760 |
| 0.9709 | 136.36 | 3000 | 0.4785 | 0.6013 |
| 0.8433 | 159.09 | 3500 | 0.5048 | 0.5419 |
| 0.7404 | 181.82 | 4000 | 0.5052 | 0.5339 |
| 0.6589 | 204.55 | 4500 | 0.5237 | 0.5897 |
| 0.5831 | 227.27 | 5000 | 0.5166 | 0.5447 |
| 0.5375 | 250.0 | 5500 | 0.5292 | 0.5487 |
| 0.4784 | 272.73 | 6000 | 0.5480 | 0.5596 |
| 0.4421 | 295.45 | 6500 | 0.5682 | 0.5467 |
| 0.4047 | 318.18 | 7000 | 0.5681 | 0.5447 |
| 0.3779 | 340.91 | 7500 | 0.5783 | 0.5347 |
| 0.3525 | 363.64 | 8000 | 0.5856 | 0.5367 |
| 0.3393 | 386.36 | 8500 | 0.5960 | 0.5359 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu113
- Datasets 1.18.1.dev0
- Tokenizers 0.11.0
|
shiyue/roberta-large-realsumm-by-systems-fold3 | 9734c877e76ad9d0bc296f766bb32b1e6bb50055 | 2021-09-23T19:41:28.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | shiyue | null | shiyue/roberta-large-realsumm-by-systems-fold3 | 4 | null | transformers | 18,918 | Entry not found |
shiyue/roberta-large-realsumm-by-systems-fold4 | 2fa00737ac8658e0c44243a6065790e2c0cfab5f | 2021-09-23T19:44:16.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | shiyue | null | shiyue/roberta-large-realsumm-by-systems-fold4 | 4 | null | transformers | 18,919 | Entry not found |
shokiokita/distilbert-base-uncased-finetuned-mrpc | 66329831f7fc6045927b2b58592fcf4b58c03ff4 | 2021-10-12T05:56:42.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | shokiokita | null | shokiokita/distilbert-base-uncased-finetuned-mrpc | 4 | null | transformers | 18,920 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.7328431372549019
- name: F1
type: f1
value: 0.8310077519379845
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-mrpc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5579
- Accuracy: 0.7328
- F1: 0.8310
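For illustration (not part of the generated card), a minimal sketch of scoring a sentence pair with this checkpoint; the label order (index 1 = "equivalent") follows the usual GLUE/MRPC convention and is an assumption here, as are the example sentences:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "shokiokita/distilbert-base-uncased-finetuned-mrpc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# MRPC is a sentence-pair task: encode both sentences together.
inputs = tokenizer(
    "The company posted strong quarterly results.",
    "Quarterly earnings for the company were strong.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)

print(f"P(paraphrase) = {probs[0, 1].item():.3f}")  # assumes index 1 = "equivalent"
```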
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 23 | 0.5797 | 0.7010 | 0.8195 |
| No log | 2.0 | 46 | 0.5647 | 0.7083 | 0.8242 |
| No log | 3.0 | 69 | 0.5677 | 0.7181 | 0.8276 |
| No log | 4.0 | 92 | 0.5495 | 0.7328 | 0.8300 |
| No log | 5.0 | 115 | 0.5579 | 0.7328 | 0.8310 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
shoubhik/wav2vec2-xls-r-300m-hindi_v3 | e62e3237b8458531bb375c8b7902da2e0e2bbff2 | 2022-02-07T10:09:56.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | shoubhik | null | shoubhik/wav2vec2-xls-r-300m-hindi_v3 | 4 | null | transformers | 18,921 | Entry not found |
simonmun/COHA1910s | 7e6a339b922b12532e813d332ad6383ef7be1329 | 2021-05-20T21:40:47.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | simonmun | null | simonmun/COHA1910s | 4 | null | transformers | 18,922 | Entry not found |
simonmun/COHA1970s | 8c87edb1809922de6abf5d5c8cbe848743aeda63 | 2021-05-20T21:47:23.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | simonmun | null | simonmun/COHA1970s | 4 | null | transformers | 18,923 | Entry not found |
simonmun/COHA2000s | 0693e79611506c5e5bf23ab4bb1d509417e560c8 | 2021-05-20T21:49:54.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | simonmun | null | simonmun/COHA2000s | 4 | null | transformers | 18,924 | Entry not found |
simonmun/Ey_SentenceClassification | 751acf89341e493b065abbbd83b559174c746c20 | 2021-05-20T05:56:16.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | simonmun | null | simonmun/Ey_SentenceClassification | 4 | null | transformers | 18,925 | Entry not found |
sismetanin/rubert-ru-sentiment-krnd | 2af64002e5d966538eaf3049bfc979c425197408 | 2021-05-20T06:06:54.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | sismetanin | null | sismetanin/rubert-ru-sentiment-krnd | 4 | null | transformers | 18,926 | Entry not found |
sismetanin/rubert-ru-sentiment-rutweetcorp | 164dcc73f41d6f9a21e52eedc706ca450c473340 | 2021-05-20T06:12:50.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | sismetanin | null | sismetanin/rubert-ru-sentiment-rutweetcorp | 4 | null | transformers | 18,927 | Entry not found |
sismetanin/rubert_conversational-ru-sentiment-rusentiment | 531a332f232ac77fefb78e4834b8f6276d535cb2 | 2021-05-20T06:22:35.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"ru",
"transformers",
"sentiment analysis",
"Russian"
] | text-classification | false | sismetanin | null | sismetanin/rubert_conversational-ru-sentiment-rusentiment | 4 | null | transformers | 18,928 | ---
language:
- ru
tags:
- sentiment analysis
- Russian
---
## RuBERT-Conversational-ru-sentiment-RuSentiment
RuBERT-Conversational-ru-sentiment-RuSentiment is a [RuBERT-Conversational](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational) model fine-tuned on [RuSentiment dataset](https://github.com/text-machine-lab/rusentiment) of general-domain Russian-language posts from the largest Russian social network, VKontakte.
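As a usage illustration added here (not from the original card), a minimal classification sketch; the class names come from whatever `id2label` mapping ships with the checkpoint, and the Russian example sentence is made up:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "sismetanin/rubert_conversational-ru-sentiment-rusentiment"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Отличный сервис, всем рекомендую!", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]

# Report the probability of each class using the checkpoint's own label mapping.
for idx, p in enumerate(probs):
    print(model.config.id2label[idx], round(p.item(), 3))
```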
<table>
<thead>
<tr>
<th rowspan="4">Model</th>
<th rowspan="4">Score<br></th>
<th rowspan="4">Rank</th>
<th colspan="12">Dataset</th>
</tr>
<tr>
<td colspan="6">SentiRuEval-2016<br></td>
<td colspan="2" rowspan="2">RuSentiment</td>
<td rowspan="2">KRND</td>
<td rowspan="2">LINIS Crowd</td>
<td rowspan="2">RuTweetCorp</td>
<td rowspan="2">RuReviews</td>
</tr>
<tr>
<td colspan="3">TC</td>
<td colspan="3">Banks</td>
</tr>
<tr>
<td>micro F1</td>
<td>macro F1</td>
<td>F1</td>
<td>micro F1</td>
<td>macro F1</td>
<td>F1</td>
<td>weighted</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
</tr>
</thead>
<tbody>
<tr>
<td>SOTA</td>
<td>n/s</td>
<td></td>
<td>76.71</td>
<td>66.40</td>
<td>70.68</td>
<td>67.51</td>
<td>69.53</td>
<td>74.06</td>
<td>78.50</td>
<td>n/s</td>
<td>73.63</td>
<td>60.51</td>
<td>83.68</td>
<td>77.44</td>
</tr>
<tr>
<td>XLM-RoBERTa-Large</td>
<td>76.37</td>
<td>1</td>
<td>82.26</td>
<td>76.36</td>
<td>79.42</td>
<td>76.35</td>
<td>76.08</td>
<td>80.89</td>
<td>78.31</td>
<td>75.27</td>
<td>75.17</td>
<td>60.03</td>
<td>88.91</td>
<td>78.81</td>
</tr>
<tr>
<td>SBERT-Large</td>
<td>75.43</td>
<td>2</td>
<td>78.40</td>
<td>71.36</td>
<td>75.14</td>
<td>72.39</td>
<td>71.87</td>
<td>77.72</td>
<td>78.58</td>
<td>75.85</td>
<td>74.20</td>
<td>60.64</td>
<td>88.66</td>
<td>77.41</td>
</tr>
<tr>
<td>MBARTRuSumGazeta</td>
<td>74.70</td>
<td>3</td>
<td>76.06</td>
<td>68.95</td>
<td>73.04</td>
<td>72.34</td>
<td>71.93</td>
<td>77.83</td>
<td>76.71</td>
<td>73.56</td>
<td>74.18</td>
<td>60.54</td>
<td>87.22</td>
<td>77.51</td>
</tr>
<tr>
<td>Conversational RuBERT</td>
<td>74.44</td>
<td>4</td>
<td>76.69</td>
<td>69.09</td>
<td>73.11</td>
<td>69.44</td>
<td>68.68</td>
<td>75.56</td>
<td>77.31</td>
<td>74.40</td>
<td>73.10</td>
<td>59.95</td>
<td>87.86</td>
<td>77.78</td>
</tr>
<tr>
<td>LaBSE</td>
<td>74.11</td>
<td>5</td>
<td>77.00</td>
<td>69.19</td>
<td>73.55</td>
<td>70.34</td>
<td>69.83</td>
<td>76.38</td>
<td>74.94</td>
<td>70.84</td>
<td>73.20</td>
<td>59.52</td>
<td>87.89</td>
<td>78.47</td>
</tr>
<tr>
<td>XLM-RoBERTa-Base</td>
<td>73.60</td>
<td>6</td>
<td>76.35</td>
<td>69.37</td>
<td>73.42</td>
<td>68.45</td>
<td>67.45</td>
<td>74.05</td>
<td>74.26</td>
<td>70.44</td>
<td>71.40</td>
<td>60.19</td>
<td>87.90</td>
<td>78.28</td>
</tr>
<tr>
<td>RuBERT</td>
<td>73.45</td>
<td>7</td>
<td>74.03</td>
<td>66.14</td>
<td>70.75</td>
<td>66.46</td>
<td>66.40</td>
<td>73.37</td>
<td>75.49</td>
<td>71.86</td>
<td>72.15</td>
<td>60.55</td>
<td>86.99</td>
<td>77.41</td>
</tr>
<tr>
<td>MBART-50-Large-Many-to-Many</td>
<td>73.15</td>
<td>8</td>
<td>75.38</td>
<td>67.81</td>
<td>72.26</td>
<td>67.13</td>
<td>66.97</td>
<td>73.85</td>
<td>74.78</td>
<td>70.98</td>
<td>71.98</td>
<td>59.20</td>
<td>87.05</td>
<td>77.24</td>
</tr>
<tr>
<td>SlavicBERT</td>
<td>71.96</td>
<td>9</td>
<td>71.45</td>
<td>63.03</td>
<td>68.44</td>
<td>64.32</td>
<td>63.99</td>
<td>71.31</td>
<td>72.13</td>
<td>67.57</td>
<td>72.54</td>
<td>58.70</td>
<td>86.43</td>
<td>77.16</td>
</tr>
<tr>
<td>EnRuDR-BERT</td>
<td>71.51</td>
<td>10</td>
<td>72.56</td>
<td>64.74</td>
<td>69.07</td>
<td>61.44</td>
<td>60.21</td>
<td>68.34</td>
<td>74.19</td>
<td>69.94</td>
<td>69.33</td>
<td>56.55</td>
<td>87.12</td>
<td>77.95</td>
</tr>
<tr>
<td>RuDR-BERT</td>
<td>71.14</td>
<td>11</td>
<td>72.79</td>
<td>64.23</td>
<td>68.36</td>
<td>61.86</td>
<td>60.92</td>
<td>68.48</td>
<td>74.65</td>
<td>70.63</td>
<td>68.74</td>
<td>54.45</td>
<td>87.04</td>
<td>77.91</td>
</tr>
<tr>
<td>MBART-50-Large</td>
<td>69.46</td>
<td>12</td>
<td>70.91</td>
<td>62.67</td>
<td>67.24</td>
<td>61.12</td>
<td>60.25</td>
<td>68.41</td>
<td>72.88</td>
<td>68.63</td>
<td>70.52</td>
<td>46.39</td>
<td>86.48</td>
<td>77.52</td>
</tr>
</tbody>
</table>
The table shows per-task scores and a macro-average of those scores to determine a model’s position on the leaderboard. For datasets with multiple evaluation metrics (e.g., macro F1 and weighted F1 for RuSentiment), we use an unweighted average of the metrics as the score for the task when computing the overall macro-average. The same strategy for comparing models’ results was applied in the GLUE benchmark.
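To make the scoring rule concrete, here is a small worked sketch with hypothetical numbers (they are not taken from the table above): metrics within a dataset are averaged without weights first, and the leaderboard score is the plain mean of the resulting per-task scores.
```python
# Hypothetical per-dataset metric values for one model (illustrative only).
per_dataset_metrics = {
    "RuSentiment": [75.0, 71.0],  # two metrics for this dataset -> averaged without weights first
    "RuReviews": [78.0],
    "KRND": [73.0],
}

# Step 1: unweighted average of the metrics inside each dataset.
per_task_scores = {name: sum(vals) / len(vals) for name, vals in per_dataset_metrics.items()}

# Step 2: macro-average across tasks gives the leaderboard score.
overall_score = sum(per_task_scores.values()) / len(per_task_scores)
print(per_task_scores, round(overall_score, 2))
```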
## Citation
If you find this repository helpful, feel free to cite our publication:
```
@article{Smetanin2021Deep,
author = {Sergey Smetanin and Mikhail Komarov},
title = {Deep transfer learning baselines for sentiment analysis in Russian},
journal = {Information Processing & Management},
volume = {58},
number = {3},
pages = {102484},
year = {2021},
issn = {0306-4573},
doi = {10.1016/j.ipm.2020.102484}
}
```
Dataset:
```
@inproceedings{rogers2018rusentiment,
title={RuSentiment: An enriched sentiment analysis dataset for social media in Russian},
author={Rogers, Anna and Romanov, Alexey and Rumshisky, Anna and Volkova, Svitlana and Gronas, Mikhail and Gribov, Alex},
booktitle={Proceedings of the 27th international conference on computational linguistics},
pages={755--763},
year={2018}
}
``` |
sismetanin/rubert_conversational-ru-sentiment-rutweetcorp | 94ac4947eb3d456170c012f290279b348ccfcfb3 | 2021-05-20T06:24:01.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | sismetanin | null | sismetanin/rubert_conversational-ru-sentiment-rutweetcorp | 4 | null | transformers | 18,929 | Entry not found |
sismetanin/xlm_roberta_large-ru-sentiment-sentirueval2016 | 30733a0704e7fc7caf0c43188f08293a75fc97dc | 2021-02-25T02:52:29.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | sismetanin | null | sismetanin/xlm_roberta_large-ru-sentiment-sentirueval2016 | 4 | null | transformers | 18,930 | Entry not found |
snoop2head/KoGPT-Joong-2 | becc490d8dd962199497ca7bf49d170ac9188355 | 2021-11-18T06:23:07.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | snoop2head | null | snoop2head/KoGPT-Joong-2 | 4 | null | transformers | 18,931 | # KoGPT-Joong-2
[Github Source](https://github.com/snoop2head/KoGPT-Joong-2)
### KoGPT-Joong-2's Acrostic Poem Generation Examples (N행시 예시)
```
[연세대(1)]
연민이라는 것은 양날의 검과 같다
세기의 악연일수도..
대가는 혹독할것이다 연기의 끝은 상처다
[연세대(2)]
연약한 마음으로 강한 척하지 말고 강한 마음을 먹자
세 마디 말보다 한마디 말이 더 진정성 있어 보인다.
대시 하지 마라.
```
```
[자탄풍]
자그마하게
탄식의 강을 건너고
풍경의 나무를 넘어가네
```
### KoGPT-Joong-2's Phrase Generation Examples
```
[너는 나의]
- 너는 나의 거짓말. 나는 너의 참말. 너를 잊었다는 나와 나를 잊었다는 너의 차이.
- 너는 나의 옷자락이고 머릿결이고 꿈결이고 나를 헤집던 사정없는 풍속이었다
```
```
[그대 왜 내 꿈에]
- 그대 왜 내 꿈에 나오지 않는 걸까요, 내 꿈 속에서도 그대 사라지면 어쩌나요
- 그대 왜 내 꿈에 불시착했는가.
```
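A minimal generation sketch (added for illustration; it assumes the checkpoint loads with the standard `AutoTokenizer`/`AutoModelForCausalLM` classes, and the sampling settings below are illustrative rather than taken from the repository):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "snoop2head/KoGPT-Joong-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Prompt with the opening words of a phrase and sample a continuation.
inputs = tokenizer("너는 나의", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_length=40, do_sample=True, top_p=0.92, temperature=0.9)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```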
### Dataset finetuned on
- [Lyrics dataset](_clones/char-rnn-tensorflow/data/lyricskor/input.txt)
- [Geulstagram dataset](https://github.com/Keracorn/geulstagram)
### Dependencies Installation
```bash
pip install -r requirements.txt
```
### References
- [KoGPT2-Transformers Hugging Face usage example](https://github.com/taeminlee/KoGPT2-Transformers)
- [A GPT-2 model that generates novels with SKT-AI's KoGPT2 and PyTorch](https://github.com/shbictai/narrativeKoGPT2)
- [AI essay-writer blog post](https://jeinalog.tistory.com/entry/AI-x-Bookathon-%EC%9D%B8%EA%B3%B5%EC%A7%80%EB%8A%A5%EC%9D%84-%EC%88%98%ED%95%84-%EC%9E%91%EA%B0%80%EB%A1%9C-%ED%95%99%EC%8A%B5%EC%8B%9C%EC%BC%9C%EB%B3%B4%EC%9E%90) | [AI essay-writer code](https://github.dev/jeina7/GPT2-essay-writer)
|
socialmediaie/TRAC2020_ALL_C_bert-base-multilingual-uncased | b6ac93a12b7e0af9e448b3509b2db2e80b8133fe | 2021-05-20T06:54:45.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | socialmediaie | null | socialmediaie/TRAC2020_ALL_C_bert-base-multilingual-uncased | 4 | null | transformers | 18,932 | # Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020
Models and predictions for submission to TRAC - 2020 Second Workshop on Trolling, Aggression and Cyberbullying
Our approach is described in our paper titled:
> Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020).
The source code for training this model and more details can be found on our code repository: https://github.com/socialmediaie/TRAC2020
NOTE: These models are retrained for uploading here after our submission so the evaluation measures may be slightly different from the ones reported in the paper.
If you plan to use the dataset please cite the following resources:
* Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020).
* Mishra, Shubhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. “Trained Models for Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020.” University of Illinois at Urbana-Champaign. https://doi.org/10.13012/B2IDB-8882752_V1.
```
@inproceedings{Mishra2020TRAC,
author = {Mishra, Sudhanshu and Prasad, Shivangi and Mishra, Shubhanshu},
booktitle = {Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020)},
title = {{Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}},
year = {2020}
}
@data{illinoisdatabankIDB-8882752,
author = {Mishra, Shubhanshu and Prasad, Shivangi and Mishra, Shubhanshu},
doi = {10.13012/B2IDB-8882752_V1},
publisher = {University of Illinois at Urbana-Champaign},
title = {{Trained models for Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}},
url = {https://doi.org/10.13012/B2IDB-8882752{\_}V1},
year = {2020}
}
```
## Usage
The models can be used via the following code:
```python
from transformers import AutoModel, AutoTokenizer, AutoModelForSequenceClassification
import torch
from pathlib import Path
from scipy.special import softmax
import numpy as np
import pandas as pd
TASK_LABEL_IDS = {
"Sub-task A": ["OAG", "NAG", "CAG"],
"Sub-task B": ["GEN", "NGEN"],
"Sub-task C": ["OAG-GEN", "OAG-NGEN", "NAG-GEN", "NAG-NGEN", "CAG-GEN", "CAG-NGEN"]
}
model_version="databank" # other option is hugging face library
if model_version == "databank":
# Make sure you have downloaded the required model file from https://databank.illinois.edu/datasets/IDB-8882752
# Unzip the file at some model_path (we are using: "databank_model")
model_path = next(Path("databank_model").glob("./*/output/*/model"))
# Assuming you get the following type of structure inside "databank_model"
# 'databank_model/ALL/Sub-task C/output/bert-base-multilingual-uncased/model'
lang, task, _, base_model, _ = model_path.parts
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(model_path)
else:
lang, task, base_model = "ALL", "Sub-task C", "bert-base-multilingual-uncased"
base_model = f"socialmediaie/{lang}_{lang.split()[-1]}_{base_model}"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(base_model)
# For doing inference set model in eval mode
model.eval()
task_labels = TASK_LABEL_IDS[task]
sentence = "This is a good cat and this is a bad dog."
processed_sentence = f"{tokenizer.cls_token} {sentence}"
tokens = tokenizer.tokenize(processed_sentence)  # tokenize the text with the prepended CLS token
indexed_tokens = tokenizer.convert_tokens_to_ids(tokens)
tokens_tensor = torch.tensor([indexed_tokens])
with torch.no_grad():
logits, = model(tokens_tensor, labels=None)
logits
preds = logits.detach().cpu().numpy()
preds_probs = softmax(preds, axis=1)
preds = np.argmax(preds_probs, axis=1)
preds_labels = np.array(task_labels)[preds]
print(dict(zip(task_labels, preds_probs[0])), preds_labels)
"""You should get an output as follows:
({'CAG-GEN': 0.06762535,
'CAG-NGEN': 0.03244293,
'NAG-GEN': 0.6897794,
'NAG-NGEN': 0.15498641,
'OAG-GEN': 0.034373745,
'OAG-NGEN': 0.020792078},
array(['NAG-GEN'], dtype='<U8'))
"""
``` |
socialmediaie/TRAC2020_ENG_C_bert-base-uncased | 80987a558132a288b0f8f8d6aded201563de12eb | 2021-05-20T06:57:39.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | socialmediaie | null | socialmediaie/TRAC2020_ENG_C_bert-base-uncased | 4 | null | transformers | 18,933 | # Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020
Models and predictions for submission to TRAC - 2020 Second Workshop on Trolling, Aggression and Cyberbullying.
Our trained models as well as evaluation metrics during training are available at: https://databank.illinois.edu/datasets/IDB-8882752#
We also make a few of our models available in HuggingFace's models repository at https://huggingface.co/socialmediaie/; these models can be further fine-tuned on your dataset of choice.
Our approach is described in our paper titled:
> Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020).
The source code for training this model and more details can be found on our code repository: https://github.com/socialmediaie/TRAC2020
NOTE: These models are retrained for uploading here after our submission so the evaluation measures may be slightly different from the ones reported in the paper.
If you plan to use the dataset please cite the following resources:
* Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020).
* Mishra, Shubhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. “Trained Models for Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020.” University of Illinois at Urbana-Champaign. https://doi.org/10.13012/B2IDB-8882752_V1.
```
@inproceedings{Mishra2020TRAC,
author = {Mishra, Sudhanshu and Prasad, Shivangi and Mishra, Shubhanshu},
booktitle = {Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020)},
title = {{Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}},
year = {2020}
}
@data{illinoisdatabankIDB-8882752,
author = {Mishra, Shubhanshu and Prasad, Shivangi and Mishra, Shubhanshu},
doi = {10.13012/B2IDB-8882752_V1},
publisher = {University of Illinois at Urbana-Champaign},
title = {{Trained models for Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}},
url = {https://doi.org/10.13012/B2IDB-8882752{\_}V1},
year = {2020}
}
```
## Usage
The models can be used via the following code:
```python
from transformers import AutoModel, AutoTokenizer, AutoModelForSequenceClassification
import torch
from pathlib import Path
from scipy.special import softmax
import numpy as np
import pandas as pd
TASK_LABEL_IDS = {
"Sub-task A": ["OAG", "NAG", "CAG"],
"Sub-task B": ["GEN", "NGEN"],
"Sub-task C": ["OAG-GEN", "OAG-NGEN", "NAG-GEN", "NAG-NGEN", "CAG-GEN", "CAG-NGEN"]
}
model_version="databank" # other option is hugging face library
if model_version == "databank":
# Make sure you have downloaded the required model file from https://databank.illinois.edu/datasets/IDB-8882752
# Unzip the file at some model_path (we are using: "databank_model")
model_path = next(Path("databank_model").glob("./*/output/*/model"))
# Assuming you get the following type of structure inside "databank_model"
# 'databank_model/ALL/Sub-task C/output/bert-base-multilingual-uncased/model'
lang, task, _, base_model, _ = model_path.parts
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(model_path)
else:
lang, task, base_model = "ALL", "Sub-task C", "bert-base-multilingual-uncased"
base_model = f"socialmediaie/TRAC2020_{lang}_{lang.split()[-1]}_{base_model}"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(base_model)
# For doing inference set model in eval mode
model.eval()
# If you want to further fine-tune the model you can reset the model to model.train()
task_labels = TASK_LABEL_IDS[task]
sentence = "This is a good cat and this is a bad dog."
processed_sentence = f"{tokenizer.cls_token} {sentence}"
tokens = tokenizer.tokenize(processed_sentence)  # tokenize the text with the prepended CLS token
indexed_tokens = tokenizer.convert_tokens_to_ids(tokens)
tokens_tensor = torch.tensor([indexed_tokens])
with torch.no_grad():
logits, = model(tokens_tensor, labels=None)
logits
preds = logits.detach().cpu().numpy()
preds_probs = softmax(preds, axis=1)
preds = np.argmax(preds_probs, axis=1)
preds_labels = np.array(task_labels)[preds]
print(dict(zip(task_labels, preds_probs[0])), preds_labels)
"""You should get an output as follows:
({'CAG-GEN': 0.06762535,
'CAG-NGEN': 0.03244293,
'NAG-GEN': 0.6897794,
'NAG-NGEN': 0.15498641,
'OAG-GEN': 0.034373745,
'OAG-NGEN': 0.020792078},
array(['NAG-GEN'], dtype='<U8'))
"""
``` |
socialmediaie/TRAC2020_IBEN_C_bert-base-multilingual-uncased | 40a2289958672bbdb1f90e3956b5ff20a74a916d | 2021-05-20T07:06:16.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | socialmediaie | null | socialmediaie/TRAC2020_IBEN_C_bert-base-multilingual-uncased | 4 | null | transformers | 18,934 | # Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020
Models and predictions for submission to TRAC - 2020 Second Workshop on Trolling, Aggression and Cyberbullying.
Our trained models as well as evaluation metrics during training are available at: https://databank.illinois.edu/datasets/IDB-8882752#
We also make a few of our models available in HuggingFace's models repository at https://huggingface.co/socialmediaie/; these models can be further fine-tuned on your dataset of choice.
Our approach is described in our paper titled:
> Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020).
The source code for training this model and more details can be found on our code repository: https://github.com/socialmediaie/TRAC2020
NOTE: These models are retrained for uploading here after our submission so the evaluation measures may be slightly different from the ones reported in the paper.
If you plan to use the dataset please cite the following resources:
* Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020).
* Mishra, Shubhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. “Trained Models for Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020.” University of Illinois at Urbana-Champaign. https://doi.org/10.13012/B2IDB-8882752_V1.
```
@inproceedings{Mishra2020TRAC,
author = {Mishra, Sudhanshu and Prasad, Shivangi and Mishra, Shubhanshu},
booktitle = {Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020)},
title = {{Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}},
year = {2020}
}
@data{illinoisdatabankIDB-8882752,
author = {Mishra, Shubhanshu and Prasad, Shivangi and Mishra, Shubhanshu},
doi = {10.13012/B2IDB-8882752_V1},
publisher = {University of Illinois at Urbana-Champaign},
title = {{Trained models for Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}},
url = {https://doi.org/10.13012/B2IDB-8882752{\_}V1},
year = {2020}
}
```
## Usage
The models can be used via the following code:
```python
from transformers import AutoModel, AutoTokenizer, AutoModelForSequenceClassification
import torch
from pathlib import Path
from scipy.special import softmax
import numpy as np
import pandas as pd
TASK_LABEL_IDS = {
"Sub-task A": ["OAG", "NAG", "CAG"],
"Sub-task B": ["GEN", "NGEN"],
"Sub-task C": ["OAG-GEN", "OAG-NGEN", "NAG-GEN", "NAG-NGEN", "CAG-GEN", "CAG-NGEN"]
}
model_version="databank" # other option is hugging face library
if model_version == "databank":
# Make sure you have downloaded the required model file from https://databank.illinois.edu/datasets/IDB-8882752
# Unzip the file at some model_path (we are using: "databank_model")
model_path = next(Path("databank_model").glob("./*/output/*/model"))
# Assuming you get the following type of structure inside "databank_model"
# 'databank_model/ALL/Sub-task C/output/bert-base-multilingual-uncased/model'
lang, task, _, base_model, _ = model_path.parts
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(model_path)
else:
lang, task, base_model = "ALL", "Sub-task C", "bert-base-multilingual-uncased"
base_model = f"socialmediaie/TRAC2020_{lang}_{lang.split()[-1]}_{base_model}"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(base_model)
# For doing inference set model in eval mode
model.eval()
# If you want to further fine-tune the model you can reset the model to model.train()
task_labels = TASK_LABEL_IDS[task]
sentence = "This is a good cat and this is a bad dog."
processed_sentence = f"{tokenizer.cls_token} {sentence}"
tokens = tokenizer.tokenize(processed_sentence)  # tokenize the text with the prepended CLS token
indexed_tokens = tokenizer.convert_tokens_to_ids(tokens)
tokens_tensor = torch.tensor([indexed_tokens])
with torch.no_grad():
logits, = model(tokens_tensor, labels=None)
logits
preds = logits.detach().cpu().numpy()
preds_probs = softmax(preds, axis=1)
preds = np.argmax(preds_probs, axis=1)
preds_labels = np.array(task_labels)[preds]
print(dict(zip(task_labels, preds_probs[0])), preds_labels)
"""You should get an output as follows:
({'CAG-GEN': 0.06762535,
'CAG-NGEN': 0.03244293,
'NAG-GEN': 0.6897794,
'NAG-NGEN': 0.15498641,
'OAG-GEN': 0.034373745,
'OAG-NGEN': 0.020792078},
array(['NAG-GEN'], dtype='<U8'))
"""
``` |
soham950/timelines_classifier | 34be02cd0c945d4f35a73198bd32464733a0d8cc | 2021-05-20T07:07:42.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | soham950 | null | soham950/timelines_classifier | 4 | null | transformers | 18,935 | Entry not found |
speech-seq2seq/wav2vec2-2-bert-large-no-adapter-frozen-enc | 0f63a2fdc492a0c02fbe6611c9df932c0a2106cc | 2022-02-15T00:30:50.000Z | [
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"dataset:librispeech_asr",
"transformers",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | speech-seq2seq | null | speech-seq2seq/wav2vec2-2-bert-large-no-adapter-frozen-enc | 4 | null | transformers | 18,936 | ---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 11.7664
- Wer: 2.0133
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.171 | 0.28 | 500 | 8.6956 | 2.0055 |
| 5.307 | 0.56 | 1000 | 8.5958 | 2.0096 |
| 5.1449 | 0.84 | 1500 | 10.4208 | 2.0115 |
| 6.1351 | 1.12 | 2000 | 10.2950 | 2.0059 |
| 6.2997 | 1.4 | 2500 | 10.6762 | 2.0115 |
| 6.1394 | 1.68 | 3000 | 10.9190 | 2.0110 |
| 6.1868 | 1.96 | 3500 | 11.0166 | 2.0112 |
| 5.9647 | 2.24 | 4000 | 11.4154 | 2.0141 |
| 6.2202 | 2.52 | 4500 | 11.5837 | 2.0152 |
| 5.9612 | 2.8 | 5000 | 11.7664 | 2.0133 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
speech-seq2seq/wav2vec2-2-roberta-large-no-adapter-frozen-enc | d482347985acc2f406f3f5d90e75221706c230be | 2022-02-17T03:21:25.000Z | [
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"dataset:librispeech_asr",
"transformers",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | speech-seq2seq | null | speech-seq2seq/wav2vec2-2-roberta-large-no-adapter-frozen-enc | 4 | null | transformers | 18,937 | ---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 20.5959
- Wer: 1.0008
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.4796 | 0.28 | 500 | 10.7690 | 1.0 |
| 6.2294 | 0.56 | 1000 | 10.5096 | 1.0 |
| 5.7859 | 0.84 | 1500 | 13.7547 | 1.0017 |
| 6.0219 | 1.12 | 2000 | 15.4966 | 1.0007 |
| 5.9142 | 1.4 | 2500 | 18.5919 | 1.0 |
| 5.6761 | 1.68 | 3000 | 16.9601 | 1.0 |
| 5.73 | 1.96 | 3500 | 18.9857 | 1.0004 |
| 4.9793 | 2.24 | 4000 | 18.3202 | 1.0007 |
| 5.2332 | 2.52 | 4500 | 19.5416 | 1.0008 |
| 4.9792 | 2.8 | 5000 | 20.5959 | 1.0008 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
sripadh8/Distil_albert_student_albert | 536db2b4b429d14c3a93315f43b859ce0d2520d1 | 2021-05-21T16:12:48.000Z | [
"pytorch",
"albert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | sripadh8 | null | sripadh8/Distil_albert_student_albert | 4 | null | transformers | 18,938 | Entry not found |
srosy/distilbert-base-uncased-finetuned-ner | 15200a5561ef1e7fb445b9f97767d874f6f4643a | 2021-07-11T15:29:20.000Z | [
"pytorch",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | srosy | null | srosy/distilbert-base-uncased-finetuned-ner | 4 | null | transformers | 18,939 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model_index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metric:
name: Accuracy
type: accuracy
value: 0.9844313470062116
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0590
- Precision: 0.9266
- Recall: 0.9381
- F1: 0.9323
- Accuracy: 0.9844
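As a usage illustration (this block is not part of the generated card), a minimal sketch with the token-classification pipeline; `aggregation_strategy="simple"` merges word-piece tokens into whole entity spans, and the example sentence is made up:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="srosy/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into full entity spans
)

for entity in ner("Hugging Face is based in New York City."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```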
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0616 | 1.0 | 878 | 0.0604 | 0.9195 | 0.9370 | 0.9282 | 0.9833 |
| 0.0328 | 2.0 | 1756 | 0.0588 | 0.9258 | 0.9375 | 0.9316 | 0.9841 |
| 0.0246 | 3.0 | 2634 | 0.0590 | 0.9266 | 0.9381 | 0.9323 | 0.9844 |
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1
- Datasets 1.9.0
- Tokenizers 0.10.3
|
sshleifer/student-pegasus-xsum-12-12 | 9602995a7ef99c120d357cf735ce3129dce420d8 | 2020-09-11T04:01:55.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sshleifer | null | sshleifer/student-pegasus-xsum-12-12 | 4 | null | transformers | 18,940 | Entry not found |
sshleifer/student_cnn_12_9 | 922c73a2cf16e10a28864bce39f618c25ffd8df2 | 2021-06-14T08:39:43.000Z | [
"pytorch",
"jax",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sshleifer | null | sshleifer/student_cnn_12_9 | 4 | null | transformers | 18,941 | Entry not found |
sshleifer/student_cnn_9_12 | 29288e6c064e3d6dc3cc1c94c31f3853639b38e8 | 2021-06-14T09:22:48.000Z | [
"pytorch",
"jax",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sshleifer | null | sshleifer/student_cnn_9_12 | 4 | null | transformers | 18,942 | Entry not found |
st1992/paraphrase-MiniLM-L12-tagalog-v2 | cecad20774abd5349e060250c1244b339d0a5f0d | 2022-01-24T05:48:32.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | st1992 | null | st1992/paraphrase-MiniLM-L12-tagalog-v2 | 4 | null | transformers | 18,943 |
# st1992/paraphrase-MiniLM-L12-tagalog-v2
paraphrase-MiniLM-L12-v2 finetuned on Tagalog language: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers) : same as other sentence-transformer models
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('st1992/paraphrase-MiniLM-L12-tagalog-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['hindi po', 'tulog na']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('st1992/paraphrase-MiniLM-L12-tagalog-v2')
model = AutoModel.from_pretrained('st1992/paraphrase-MiniLM-L12-tagalog-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
``` |
stefan-it/bort-full | 6f8071b5417f1027757964f28263225f2c422b05 | 2020-12-16T13:06:42.000Z | [
"pytorch",
"bort",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | stefan-it | null | stefan-it/bort-full | 4 | null | transformers | 18,944 | Entry not found |
stevhliu/t5-small-finetuned-billsum-ca_test | 98bda004599e71abcd5cb70098a09562be6ea04c | 2022-06-29T20:05:37.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:billsum",
"transformers",
"summarization",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | stevhliu | null | stevhliu/t5-small-finetuned-billsum-ca_test | 4 | null | transformers | 18,945 | ---
license: apache-2.0
datasets:
- billsum
tags:
- summarization
- t5
widget:
- text: "The people of the State of California do enact as follows: SECTION 1. The\
\ Legislature hereby finds and declares as follows: (a) Many areas of the state\
\ are disproportionately impacted by drought because they are heavily dependent\
\ or completely reliant on groundwater from basins that are in overdraft and in\
\ which the water table declines year after year or from basins that are contaminated.\
\ (b) There are a number of state grant and loan programs that provide financial\
\ assistance to communities to address drinking water and wastewater needs. Unfortunately,\
\ there is no program in place to provide similar assistance to individual homeowners\
\ who are reliant on their own groundwater wells and who may not be able to afford\
\ conventional private loans to undertake vital water supply, water quality, and\
\ wastewater improvements. (c) The program created by this act is intended to\
\ bridge that gap by providing low-interest loans, grants, or both, to individual\
\ homeowners to undertake actions necessary to provide safer, cleaner, and more\
\ reliable drinking water and wastewater treatment. These actions may include,\
\ but are not limited to, digging deeper wells, improving existing wells and related\
\ equipment, addressing drinking water contaminants in the homeowner\u2019s water,\
\ or connecting to a local water or wastewater system. SEC. 2. Chapter 6.6 (commencing\
\ with Section 13486) is added to Division 7 of the Water Code, to read: CHAPTER\
\ 6.6. Water and Wastewater Loan and Grant Program 13486. (a) The board shall\
\ establish a program in accordance with this chapter to provide low-interest\
\ loans and grants to local agencies for low-interest loans and grants to eligible\
\ applicants for any of the following purposes:"
example_title: Water use
- text: "The people of the State of California do enact as follows: SECTION 1. Section\
\ 2196 of the Elections Code is amended to read: 2196. (a) (1) Notwithstanding\
\ any other provision of law, a person who is qualified to register to vote and\
\ who has a valid California driver\u2019s license or state identification card\
\ may submit an affidavit of voter registration electronically on the Internet\
\ Web site of the Secretary of State. (2) An affidavit submitted pursuant to this\
\ section is effective upon receipt of the affidavit by the Secretary of State\
\ if the affidavit is received on or before the last day to register for an election\
\ to be held in the precinct of the person submitting the affidavit. (3) The affiant\
\ shall affirmatively attest to the truth of the information provided in the affidavit.\
\ (4) For voter registration purposes, the applicant shall affirmatively assent\
\ to the use of his or her signature from his or her driver\u2019s license or\
\ state identification card. (5) For each electronic affidavit, the Secretary\
\ of State shall obtain an electronic copy of the applicant\u2019s signature from\
\ his or her driver\u2019s license or state identification card directly from\
\ the Department of Motor Vehicles. (6) The Secretary of State shall require a\
\ person who submits an affidavit pursuant to this section to submit all of the\
\ following: (A) The number from his or her California driver\u2019s license or\
\ state identification card. (B) His or her date of birth. (C) The last four digits\
\ of his or her social security number. (D) Any other information the Secretary\
\ of State deems necessary to establish the identity of the affiant. (7) Upon\
\ submission of an affidavit pursuant to this section, the electronic voter registration\
\ system shall provide for immediate verification of both of the following:"
example_title: Election
metrics:
- rouge
model-index:
- name: t5-small-finetuned-billsum-ca_test
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
args: default
metrics:
- name: Rouge1
type: rouge
value: 12.6315
- task:
type: summarization
name: Summarization
dataset:
name: billsum
type: billsum
config: default
split: test
metrics:
- name: ROUGE-1
type: rouge
value: 12.1368
verified: true
- name: ROUGE-2
type: rouge
value: 4.6017
verified: true
- name: ROUGE-L
type: rouge
value: 10.0767
verified: true
- name: ROUGE-LSUM
type: rouge
value: 10.6892
verified: true
- name: loss
type: loss
value: 2.897707462310791
verified: true
- name: gen_len
type: gen_len
value: 19.0
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-billsum-ca_test
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3376
- Rouge1: 12.6315
- Rouge2: 6.9839
- Rougel: 10.9983
- Rougelsum: 11.9383
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
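As a minimal illustration of inference (a sketch: the model ID comes from this card, while the input text and length settings are placeholders):
```python
from transformers import pipeline

# Model ID from this card; max_length/min_length are illustrative choices.
summarizer = pipeline("summarization", model="stevhliu/t5-small-finetuned-billsum-ca_test")

bill_text = "The people of the State of California do enact as follows: ..."  # placeholder excerpt
print(summarizer(bill_text, max_length=64, min_length=10, truncation=True)[0]["summary_text"])
```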
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 495 | 2.4805 | 9.9389 | 4.1239 | 8.3979 | 9.1599 | 19.0 |
| 3.1564 | 2.0 | 990 | 2.3833 | 12.1026 | 6.5196 | 10.5123 | 11.4527 | 19.0 |
| 2.66 | 3.0 | 1485 | 2.3496 | 12.5389 | 6.8686 | 10.8798 | 11.8636 | 19.0 |
| 2.5671 | 4.0 | 1980 | 2.3376 | 12.6315 | 6.9839 | 10.9983 | 11.9383 | 19.0 |
### Framework versions
- Transformers 4.12.2
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
sunguk/sunguk-bert | 34162f671ea21f20203c711c64af16af99abda9b | 2021-03-19T08:37:20.000Z | [
"pytorch",
"transformers"
] | null | false | sunguk | null | sunguk/sunguk-bert | 4 | null | transformers | 18,946 | Entry not found |
sunitha/roberta-customds-finetune | b4dade54543a92360b82a3998637b1154a908728 | 2022-02-10T09:37:34.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | sunitha | null | sunitha/roberta-customds-finetune | 4 | null | transformers | 18,947 | Entry not found |
tals/albert-base-mnli | 8b229c50236cbb6edd4655f456b0b54a8ce841e4 | 2022-06-24T01:35:18.000Z | [
"pytorch",
"albert",
"text-classification",
"python",
"dataset:fever",
"dataset:glue",
"dataset:multi_nli",
"dataset:tals/vitaminc",
"transformers"
] | text-classification | false | tals | null | tals/albert-base-mnli | 4 | null | transformers | 18,948 | ---
language: python
datasets:
- fever
- glue
- multi_nli
- tals/vitaminc
---
# Details
Model used in [Get Your Vitamin C! Robust Fact Verification with Contrastive Evidence](https://aclanthology.org/2021.naacl-main.52/) (Schuster et al., NAACL '21).
For more details see: https://github.com/TalSchuster/VitaminC
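A minimal loading sketch (the premise/hypothesis pair below is illustrative; check `model.config.id2label` for the actual label order rather than assuming it):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("tals/albert-base-mnli")
model = AutoModelForSequenceClassification.from_pretrained("tals/albert-base-mnli")

# Example premise/hypothesis pair (illustrative, not from the paper's data)
inputs = tokenizer("A man is playing a guitar.", "A person is making music.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs, model.config.id2label)
```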
When using this model, please cite the paper.
# BibTeX entry and citation info
```bibtex
@inproceedings{schuster-etal-2021-get,
title = "Get Your Vitamin {C}! Robust Fact Verification with Contrastive Evidence",
author = "Schuster, Tal and
Fisch, Adam and
Barzilay, Regina",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.naacl-main.52",
doi = "10.18653/v1/2021.naacl-main.52",
pages = "624--643",
abstract = "Typical fact verification models use retrieved written evidence to verify claims. Evidence sources, however, often change over time as more information is gathered and revised. In order to adapt, models must be sensitive to subtle differences in supporting evidence. We present VitaminC, a benchmark infused with challenging cases that require fact verification models to discern and adjust to slight factual changes. We collect over 100,000 Wikipedia revisions that modify an underlying fact, and leverage these revisions, together with additional synthetically constructed ones, to create a total of over 400,000 claim-evidence pairs. Unlike previous resources, the examples in VitaminC are contrastive, i.e., they contain evidence pairs that are nearly identical in language and content, with the exception that one supports a given claim while the other does not. We show that training using this design increases robustness{---}improving accuracy by 10{\%} on adversarial fact verification and 6{\%} on adversarial natural language inference (NLI). Moreover, the structure of VitaminC leads us to define additional tasks for fact-checking resources: tagging relevant words in the evidence for verifying the claim, identifying factual revisions, and providing automatic edits via factually consistent text generation.",
}
```
|
tanay/xlm-fine-tuned | 3bc9684201ac82a3064f3d958ec641117cb65cd8 | 2021-03-22T05:13:25.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | tanay | null | tanay/xlm-fine-tuned | 4 | null | transformers | 18,949 | Entry not found |
taoroalin/classifier_12aug_50k_labels | 8cf662004cb833e637a1bfd323c5bb3eaeba34a2 | 2021-09-21T02:29:15.000Z | [
"pytorch",
"deberta",
"text-classification",
"transformers"
] | text-classification | false | taoroalin | null | taoroalin/classifier_12aug_50k_labels | 4 | null | transformers | 18,950 | Entry not found |
tareknaous/bart-empathetic-dialogues | fde7cae7a997a281bc075b26366f3469bf4bdf07 | 2022-02-21T08:53:08.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tareknaous | null | tareknaous/bart-empathetic-dialogues | 4 | null | transformers | 18,951 | Entry not found |
tareknaous/roberta2gpt2-daily-dialog | 157af777abd7231db89733d2dfdf0a7415dfe8e7 | 2022-02-21T08:48:35.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tareknaous | null | tareknaous/roberta2gpt2-daily-dialog | 4 | null | transformers | 18,952 | Entry not found |
tareknaous/t5-daily-dialog-vM | d1489f003bb8e0729e2621c88fbe157eae0d81b9 | 2022-02-21T16:27:49.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tareknaous | null | tareknaous/t5-daily-dialog-vM | 4 | null | transformers | 18,953 | Entry not found |
textattack/albert-base-v2-rotten_tomatoes | 56ff2358f60e32880c477d45cfd2253117293088 | 2020-06-25T20:00:46.000Z | [
"pytorch",
"tensorboard",
"albert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | textattack | null | textattack/albert-base-v2-rotten_tomatoes | 4 | null | transformers | 18,954 | ## albert-base-v2 fine-tuned with TextAttack on the rotten_tomatoes dataset
This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack
and the rotten_tomatoes dataset loaded using the `nlp` library. The model was fine-tuned
for 10 epochs with a batch size of 128, a learning
rate of 2e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.8855534709193246, as measured by the
eval set accuracy, found after 1 epoch.
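A usage sketch, under the assumption that the uploaded checkpoint exposes the sequence-classification head described above (if it only ships masked-LM weights, `from_pretrained` will warn that the classification head is freshly initialized):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("textattack/albert-base-v2-rotten_tomatoes")
model = AutoModelForSequenceClassification.from_pretrained("textattack/albert-base-v2-rotten_tomatoes")

# Illustrative movie-review sentence (not from the dataset)
inputs = tokenizer("A touching, beautifully shot film.", return_tensors="pt")
with torch.no_grad():
    print(model(**inputs).logits.softmax(dim=-1), model.config.id2label)
```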
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/xlnet-base-cased-RTE | ebc01c0851efcec15a0caeadb4f58db4a81a91da | 2020-07-06T16:32:05.000Z | [
"pytorch",
"xlnet",
"text-generation",
"transformers"
] | text-generation | false | textattack | null | textattack/xlnet-base-cased-RTE | 4 | null | transformers | 18,955 | ## TextAttack Model Card
This `xlnet-base-cased` model was fine-tuned for sequence classification using TextAttack
and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 16, a learning
rate of 2e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.7111913357400722, as measured by the
eval set accuracy, found after 3 epochs.
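Since RTE is a sentence-pair task, inputs are encoded as (premise, hypothesis) pairs; a sketch under the assumption that the checkpoint carries the sequence-classification head described above:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("textattack/xlnet-base-cased-RTE")
model = AutoModelForSequenceClassification.from_pretrained("textattack/xlnet-base-cased-RTE")

# Illustrative premise/hypothesis pair
inputs = tokenizer("The cat sat on the mat.", "A cat is sitting on a mat.", return_tensors="pt")
with torch.no_grad():
    print(model(**inputs).logits.softmax(dim=-1), model.config.id2label)
```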
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
thomaszz/distilbert-base-uncased-finetuned-ner | 0acf4190eb596bc4a54d215108527d3114477b72 | 2021-10-29T09:51:09.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | thomaszz | null | thomaszz/distilbert-base-uncased-finetuned-ner | 4 | null | transformers | 18,956 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9244616234124793
- name: Recall
type: recall
value: 0.9364582168027744
- name: F1
type: f1
value: 0.9304212515282871
- name: Accuracy
type: accuracy
value: 0.9833987322668276
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0623
- Precision: 0.9245
- Recall: 0.9365
- F1: 0.9304
- Accuracy: 0.9834
## Model description
More information needed
## Intended uses & limitations
More information needed
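A minimal inference sketch (the sentence and the `aggregation_strategy` choice are illustrative; entity label names come from the model's own config):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="thomaszz/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # group word pieces into whole entities
)
print(ner("Hugging Face is based in New York City."))
```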
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2377 | 1.0 | 878 | 0.0711 | 0.9176 | 0.9254 | 0.9215 | 0.9813 |
| 0.0514 | 2.0 | 1756 | 0.0637 | 0.9213 | 0.9346 | 0.9279 | 0.9831 |
| 0.031 | 3.0 | 2634 | 0.0623 | 0.9245 | 0.9365 | 0.9304 | 0.9834 |
### Framework versions
- Transformers 4.12.0
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
thomwolf/vqgan_imagenet_f16_1024 | b0c07d95af30b5ba5857d43711d43bf42f5e89b4 | 2021-06-08T21:16:25.000Z | [
"pytorch",
"vqgan_model",
"transformers"
] | null | false | thomwolf | null | thomwolf/vqgan_imagenet_f16_1024 | 4 | null | transformers | 18,957 | Entry not found |
thyagosme/wav2vec2-base-demo-colab | bcdf199eacf67716be9af4c3faa0b59fe6c3cfb7 | 2022-02-13T02:14:29.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | thyagosme | null | thyagosme/wav2vec2-base-demo-colab | 4 | null | transformers | 18,958 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4657
- Wer: 0.3422
## Model description
More information needed
## Intended uses & limitations
More information needed
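A transcription sketch, assuming the repo ships the processor saved by the demo notebook and that `sample.wav` is a placeholder path to 16 kHz mono audio:
```python
import torch
import soundfile as sf
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("thyagosme/wav2vec2-base-demo-colab")
model = Wav2Vec2ForCTC.from_pretrained("thyagosme/wav2vec2-base-demo-colab")

speech, sample_rate = sf.read("sample.wav")  # placeholder file; must be 16 kHz mono
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```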
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4477 | 4.0 | 500 | 1.3352 | 0.9039 |
| 0.5972 | 8.0 | 1000 | 0.4752 | 0.4509 |
| 0.2224 | 12.0 | 1500 | 0.4604 | 0.4052 |
| 0.1308 | 16.0 | 2000 | 0.4542 | 0.3866 |
| 0.0889 | 20.0 | 2500 | 0.4730 | 0.3589 |
| 0.0628 | 24.0 | 3000 | 0.4984 | 0.3657 |
| 0.0479 | 28.0 | 3500 | 0.4657 | 0.3422 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
tobiaslee/bert-2l-768h-uncased | 82f115e1381b8858b93ba2af3819f99545f79092 | 2021-09-11T03:10:34.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | tobiaslee | null | tobiaslee/bert-2l-768h-uncased | 4 | null | transformers | 18,959 | # BERT-uncased-2L-768H
This is a converted PyTorch checkpoint of a 2-layer (L=2), 768-hidden (H=768) BERT model trained from scratch.
See [Google BERT](https://github.com/google-research/bert) for details.
|
transformersbook/bert-base-uncased-issues-128 | 235bf67eef05b0d346243bee7ef27a1200c542e0 | 2022-02-05T16:57:43.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | transformersbook | null | transformersbook/bert-base-uncased-issues-128 | 4 | null | transformers | 18,960 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-issues-128
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-issues-128
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GitHub issues dataset. The model is used in Chapter 9: Dealing with Few to No Labels in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/09_few-to-no-labels.ipynb).
It achieves the following results on the evaluation set:
- Loss: 1.2520
## Model description
More information needed
## Intended uses & limitations
More information needed
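A fill-mask sketch (the GitHub-issue-style prompt below is illustrative):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="transformersbook/bert-base-uncased-issues-128")
for pred in fill_mask("The model fails to [MASK] when I run the training script."):
    print(pred["token_str"], round(pred["score"], 3))
```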
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0949 | 1.0 | 291 | 1.7072 |
| 1.649 | 2.0 | 582 | 1.4409 |
| 1.4835 | 3.0 | 873 | 1.4099 |
| 1.3938 | 4.0 | 1164 | 1.3858 |
| 1.3326 | 5.0 | 1455 | 1.2004 |
| 1.2949 | 6.0 | 1746 | 1.2955 |
| 1.2451 | 7.0 | 2037 | 1.2682 |
| 1.1992 | 8.0 | 2328 | 1.1938 |
| 1.1784 | 9.0 | 2619 | 1.1686 |
| 1.1397 | 10.0 | 2910 | 1.2050 |
| 1.1293 | 11.0 | 3201 | 1.2058 |
| 1.1006 | 12.0 | 3492 | 1.1680 |
| 1.0835 | 13.0 | 3783 | 1.2414 |
| 1.0757 | 14.0 | 4074 | 1.1522 |
| 1.062 | 15.0 | 4365 | 1.1176 |
| 1.0535 | 16.0 | 4656 | 1.2520 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu102
- Datasets 1.13.0
- Tokenizers 0.10.3
|
ttajun/bert_nm100k_posneg01 | cf0ef43056320ba7fa1330844a54bced1a2ecc94 | 2021-12-22T02:34:54.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | ttajun | null | ttajun/bert_nm100k_posneg01 | 4 | null | transformers | 18,961 | Entry not found |
tucan9389/kcbert-base-finetuned-squad | 7e5ae2ab60737ee1a8b6858b6d24d81c6144bf9c | 2021-11-18T02:26:45.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:klue",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | tucan9389 | null | tucan9389/kcbert-base-finetuned-squad | 4 | null | transformers | 18,962 | ---
tags:
- generated_from_trainer
datasets:
- klue
model-index:
- name: kcbert-base-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kcbert-base-finetuned-squad
This model is a fine-tuned version of [beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6736
## Model description
More information needed
## Intended uses & limitations
More information needed
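A question-answering sketch (the Korean question/context pair is an illustrative example, not taken from KLUE):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="tucan9389/kcbert-base-finetuned-squad")
print(qa(question="대한민국의 수도는 어디인가요?", context="대한민국의 수도는 서울이며 최대 도시이다."))
```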
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2221 | 1.0 | 4245 | 1.2784 |
| 0.7673 | 2.0 | 8490 | 1.4099 |
| 0.4479 | 3.0 | 12735 | 1.6736 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
ubamba98/wav2vec2-xls-r-1b-ro | ec31f5386b508c5cadbe042dee6766ad70ed6385 | 2022-03-23T18:29:42.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ro",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ubamba98 | null | ubamba98/wav2vec2-xls-r-1b-ro | 4 | null | transformers | 18,963 | ---
language:
- ro
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: wav2vec2-xls-r-1b-ro
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7.0
type: mozilla-foundation/common_voice_7_0
args: ro
metrics:
- name: Test WER
type: wer
value: 99.99
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: ro
metrics:
- name: Test WER
type: wer
value: 99.98
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: ro
metrics:
- name: Test WER
type: wer
value: 99.99
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-1b-ro
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - RO dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1113
- Wer: 0.4770
- Cer: 0.0306
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 0.7844 | 1.67 | 1500 | 0.3412 | 0.8600 | 0.0940 |
| 0.7272 | 3.34 | 3000 | 0.1926 | 0.6409 | 0.0527 |
| 0.6924 | 5.02 | 4500 | 0.1413 | 0.5722 | 0.0401 |
| 0.6327 | 6.69 | 6000 | 0.1252 | 0.5366 | 0.0371 |
| 0.6363 | 8.36 | 7500 | 0.1235 | 0.5741 | 0.0389 |
| 0.6238 | 10.03 | 9000 | 0.1180 | 0.5542 | 0.0362 |
| 0.6018 | 11.71 | 10500 | 0.1192 | 0.5694 | 0.0369 |
| 0.583 | 13.38 | 12000 | 0.1216 | 0.5772 | 0.0385 |
| 0.5643 | 15.05 | 13500 | 0.1195 | 0.5419 | 0.0371 |
| 0.5399 | 16.72 | 15000 | 0.1240 | 0.5224 | 0.0370 |
| 0.5529 | 18.39 | 16500 | 0.1174 | 0.5555 | 0.0367 |
| 0.5246 | 20.07 | 18000 | 0.1097 | 0.5047 | 0.0339 |
| 0.4936 | 21.74 | 19500 | 0.1225 | 0.5189 | 0.0382 |
| 0.4629 | 23.41 | 21000 | 0.1142 | 0.5047 | 0.0344 |
| 0.4463 | 25.08 | 22500 | 0.1168 | 0.4887 | 0.0339 |
| 0.4671 | 26.76 | 24000 | 0.1119 | 0.5073 | 0.0338 |
| 0.4359 | 28.43 | 25500 | 0.1206 | 0.5479 | 0.0363 |
| 0.4225 | 30.1 | 27000 | 0.1122 | 0.5170 | 0.0345 |
| 0.4038 | 31.77 | 28500 | 0.1159 | 0.5032 | 0.0343 |
| 0.4271 | 33.44 | 30000 | 0.1116 | 0.5126 | 0.0339 |
| 0.3867 | 35.12 | 31500 | 0.1101 | 0.4937 | 0.0327 |
| 0.3674 | 36.79 | 33000 | 0.1142 | 0.4940 | 0.0330 |
| 0.3607 | 38.46 | 34500 | 0.1106 | 0.5145 | 0.0327 |
| 0.3651 | 40.13 | 36000 | 0.1172 | 0.4921 | 0.0317 |
| 0.3268 | 41.81 | 37500 | 0.1093 | 0.4830 | 0.0310 |
| 0.3345 | 43.48 | 39000 | 0.1131 | 0.4760 | 0.0314 |
| 0.3236 | 45.15 | 40500 | 0.1132 | 0.4864 | 0.0317 |
| 0.312 | 46.82 | 42000 | 0.1124 | 0.4861 | 0.0315 |
| 0.3106 | 48.49 | 43500 | 0.1116 | 0.4745 | 0.0306 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
ubamba98/wav2vec2-xls-r-300m-CV8-ro | db8fade442d050a4d304c9f5b1dbda999431bc69 | 2022-03-23T18:29:44.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ro",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ubamba98 | null | ubamba98/wav2vec2-xls-r-300m-CV8-ro | 4 | null | transformers | 18,964 | ---
language:
- ro
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-xls-r-300m-CV8-ro
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-CV8-ro
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - RO dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1578
- Wer: 0.6040
- Cer: 0.0475
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 2.9736 | 3.62 | 500 | 2.9508 | 1.0 | 1.0 |
| 1.3293 | 7.25 | 1000 | 0.3330 | 0.8407 | 0.0862 |
| 0.956 | 10.87 | 1500 | 0.2042 | 0.6872 | 0.0602 |
| 0.9509 | 14.49 | 2000 | 0.2184 | 0.7088 | 0.0652 |
| 0.9272 | 18.12 | 2500 | 0.2312 | 0.7211 | 0.0703 |
| 0.8561 | 21.74 | 3000 | 0.2158 | 0.6838 | 0.0631 |
| 0.8258 | 25.36 | 3500 | 0.1970 | 0.6844 | 0.0601 |
| 0.7993 | 28.98 | 4000 | 0.1895 | 0.6698 | 0.0577 |
| 0.7525 | 32.61 | 4500 | 0.1845 | 0.6453 | 0.0550 |
| 0.7211 | 36.23 | 5000 | 0.1781 | 0.6274 | 0.0531 |
| 0.677 | 39.85 | 5500 | 0.1732 | 0.6188 | 0.0514 |
| 0.6517 | 43.48 | 6000 | 0.1691 | 0.6177 | 0.0503 |
| 0.6326 | 47.1 | 6500 | 0.1619 | 0.6045 | 0.0479 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
uclanlp/plbart-multi_task-weak | e58915b41a7e7c2ce7defd5b9d63891feb7bc845 | 2022-03-02T07:38:33.000Z | [
"pytorch",
"plbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | uclanlp | null | uclanlp/plbart-multi_task-weak | 4 | null | transformers | 18,965 | Entry not found |
uclanlp/plbart-ruby-en_XX | 61f0c8960d9b641f8f25d70adcd527aa049ac043 | 2021-11-09T17:10:05.000Z | [
"pytorch",
"plbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | uclanlp | null | uclanlp/plbart-ruby-en_XX | 4 | null | transformers | 18,966 | Entry not found |
uclanlp/plbart-single_task-all-generation | d688d8961b303af1bafc8f8e991e304d5681773a | 2022-03-02T07:29:15.000Z | [
"pytorch",
"plbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | uclanlp | null | uclanlp/plbart-single_task-all-generation | 4 | null | transformers | 18,967 | Entry not found |
uclanlp/plbart-single_task-weak-generation | de9cbb23b92932386904b9418de2d88fc24b886d | 2022-03-02T07:25:34.000Z | [
"pytorch",
"plbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | uclanlp | null | uclanlp/plbart-single_task-weak-generation | 4 | null | transformers | 18,968 | Entry not found |
uclanlp/visualbert-vcr-pre | 1183a34c9fb03cef2cf97a037f47d84ecc36facc | 2021-05-31T11:29:46.000Z | [
"pytorch",
"visual_bert",
"pretraining",
"transformers"
] | null | false | uclanlp | null | uclanlp/visualbert-vcr-pre | 4 | null | transformers | 18,969 | Entry not found |
uer/chinese_roberta_L-8_H-128 | 15dfe5d8fc174bf2dd45c41ed1cee629dbf23b22 | 2022-07-15T08:13:50.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"zh",
"dataset:CLUECorpusSmall",
"arxiv:1909.05658",
"arxiv:1908.08962",
"transformers",
"autotrain_compatible"
] | fill-mask | false | uer | null | uer/chinese_roberta_L-8_H-128 | 4 | null | transformers | 18,970 | ---
language: zh
datasets: CLUECorpusSmall
widget:
- text: "北京是[MASK]国的首都。"
---
# Chinese RoBERTa Miniatures
## Model description
This is the set of 24 Chinese RoBERTa models pre-trained by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658).
[Turc et al.](https://arxiv.org/abs/1908.08962) have shown that the standard BERT recipe is effective on a wide range of model sizes. Following their paper, we released the 24 Chinese RoBERTa models. In order to facilitate users to reproduce the results, we used the publicly available corpus and provided all training details.
You can download the 24 Chinese RoBERTa miniatures either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), or via HuggingFace from the links below:
| | H=128 | H=256 | H=512 | H=768 |
| -------- | :-----------------------: | :-----------------------: | :-------------------------: | :-------------------------: |
| **L=2** | [**2/128 (Tiny)**][2_128] | [2/256][2_256] | [2/512][2_512] | [2/768][2_768] |
| **L=4** | [4/128][4_128] | [**4/256 (Mini)**][4_256] | [**4/512 (Small)**][4_512] | [4/768][4_768] |
| **L=6** | [6/128][6_128] | [6/256][6_256] | [6/512][6_512] | [6/768][6_768] |
| **L=8** | [8/128][8_128] | [8/256][8_256] | [**8/512 (Medium)**][8_512] | [8/768][8_768] |
| **L=10** | [10/128][10_128] | [10/256][10_256] | [10/512][10_512] | [10/768][10_768] |
| **L=12** | [12/128][12_128] | [12/256][12_256] | [12/512][12_512] | [**12/768 (Base)**][12_768] |
Here are scores on the development set of six Chinese tasks:
| Model | Score | douban | chnsenticorp | lcqmc | tnews(CLUE) | iflytek(CLUE) | ocnli(CLUE) |
| -------------- | :---: | :----: | :----------: | :---: | :---------: | :-----------: | :---------: |
| RoBERTa-Tiny | 72.3 | 83.0 | 91.4 | 81.8 | 62.0 | 55.0 | 60.3 |
| RoBERTa-Mini | 75.7 | 84.8 | 93.7 | 86.1 | 63.9 | 58.3 | 67.4 |
| RoBERTa-Small | 76.8 | 86.5 | 93.4 | 86.5 | 65.1 | 59.4 | 69.7 |
| RoBERTa-Medium | 77.8 | 87.6 | 94.8 | 88.1 | 65.6 | 59.5 | 71.2 |
| RoBERTa-Base | 79.5 | 89.1 | 95.2 | 89.2 | 67.0 | 60.9 | 75.5 |
For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained with the sequence length of 128:
- epochs: 3, 5, 8
- batch sizes: 32, 64
- learning rates: 3e-5, 1e-4, 3e-4
## How to use
You can use this model directly with a pipeline for masked language modeling (take the case of RoBERTa-Medium):
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='uer/chinese_roberta_L-8_H-512')
>>> unmasker("中国的首都是[MASK]京。")
[
{'sequence': '[CLS] 中 国 的 首 都 是 北 京 。 [SEP]',
'score': 0.8701988458633423,
'token': 1266,
'token_str': '北'},
{'sequence': '[CLS] 中 国 的 首 都 是 南 京 。 [SEP]',
'score': 0.1194809079170227,
'token': 1298,
'token_str': '南'},
{'sequence': '[CLS] 中 国 的 首 都 是 东 京 。 [SEP]',
'score': 0.0037803512532263994,
'token': 691,
'token_str': '东'},
{'sequence': '[CLS] 中 国 的 首 都 是 普 京 。 [SEP]',
'score': 0.0017127094324678183,
'token': 3249,
'token_str': '普'},
{'sequence': '[CLS] 中 国 的 首 都 是 望 京 。 [SEP]',
'score': 0.001687526935711503,
'token': 3307,
'token_str': '望'}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('uer/chinese_roberta_L-8_H-512')
model = BertModel.from_pretrained("uer/chinese_roberta_L-8_H-512")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('uer/chinese_roberta_L-8_H-512')
model = TFBertModel.from_pretrained("uer/chinese_roberta_L-8_H-512")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data. We found that models pre-trained on CLUECorpusSmall outperform those pre-trained on CLUECorpus2020, although CLUECorpus2020 is much larger than CLUECorpusSmall.
## Training procedure
Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters on different model sizes.
Taking the case of RoBERTa-Medium
Stage1:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_seq128_dataset.pt \
--processes_num 32 --seq_length 128 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_seq128_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_roberta_medium_seq128_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
--learning_rate 1e-4 --batch_size 64 \
--data_processor mlm --target mlm
```
Stage2:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_seq512_dataset.pt \
--processes_num 32 --seq_length 512 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_seq512_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--pretrained_model_path models/cluecorpussmall_roberta_medium_seq128_model.bin-1000000 \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
--learning_rate 5e-5 --batch_size 16 \
--data_processor mlm --target mlm
```
Finally, we convert the pre-trained model into Huggingface's format:
```
python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin-250000 \
--output_model_path pytorch_model.bin \
--layers_num 8 --type mlm
```
### BibTeX entry and citation info
```
@article{devlin2018bert,
title={Bert: Pre-training of deep bidirectional transformers for language understanding},
author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1810.04805},
year={2018}
}
@article{liu2019roberta,
title={Roberta: A robustly optimized bert pretraining approach},
author={Liu, Yinhan and Ott, Myle and Goyal, Naman and Du, Jingfei and Joshi, Mandar and Chen, Danqi and Levy, Omer and Lewis, Mike and Zettlemoyer, Luke and Stoyanov, Veselin},
journal={arXiv preprint arXiv:1907.11692},
year={2019}
}
@article{turc2019,
title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models},
author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1908.08962v2 },
year={2019}
}
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
```
[2_128]:https://huggingface.co/uer/chinese_roberta_L-2_H-128
[2_256]:https://huggingface.co/uer/chinese_roberta_L-2_H-256
[2_512]:https://huggingface.co/uer/chinese_roberta_L-2_H-512
[2_768]:https://huggingface.co/uer/chinese_roberta_L-2_H-768
[4_128]:https://huggingface.co/uer/chinese_roberta_L-4_H-128
[4_256]:https://huggingface.co/uer/chinese_roberta_L-4_H-256
[4_512]:https://huggingface.co/uer/chinese_roberta_L-4_H-512
[4_768]:https://huggingface.co/uer/chinese_roberta_L-4_H-768
[6_128]:https://huggingface.co/uer/chinese_roberta_L-6_H-128
[6_256]:https://huggingface.co/uer/chinese_roberta_L-6_H-256
[6_512]:https://huggingface.co/uer/chinese_roberta_L-6_H-512
[6_768]:https://huggingface.co/uer/chinese_roberta_L-6_H-768
[8_128]:https://huggingface.co/uer/chinese_roberta_L-8_H-128
[8_256]:https://huggingface.co/uer/chinese_roberta_L-8_H-256
[8_512]:https://huggingface.co/uer/chinese_roberta_L-8_H-512
[8_768]:https://huggingface.co/uer/chinese_roberta_L-8_H-768
[10_128]:https://huggingface.co/uer/chinese_roberta_L-10_H-128
[10_256]:https://huggingface.co/uer/chinese_roberta_L-10_H-256
[10_512]:https://huggingface.co/uer/chinese_roberta_L-10_H-512
[10_768]:https://huggingface.co/uer/chinese_roberta_L-10_H-768
[12_128]:https://huggingface.co/uer/chinese_roberta_L-12_H-128
[12_256]:https://huggingface.co/uer/chinese_roberta_L-12_H-256
[12_512]:https://huggingface.co/uer/chinese_roberta_L-12_H-512
[12_768]:https://huggingface.co/uer/chinese_roberta_L-12_H-768 |
uf-aice-lab/SafeMathBot | 83b750465f2ce07768ce27e6fa7b393f35aafb6b | 2022-02-11T20:15:31.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"generation",
"math learning",
"education",
"license:mit"
] | text-generation | false | uf-aice-lab | null | uf-aice-lab/SafeMathBot | 4 | null | transformers | 18,971 | ---
language:
- en
tags:
- generation
- math learning
- education
license: mit
metrics:
- PerspectiveAPI
widget:
- text: "<bos><speaker1>Hello! My name is CL. Nice meeting y'all!<speaker2>[SAFE]"
example_title: "Safe Response"
- text: "<bos><speaker1>Hello! My name is CL. Nice meeting y'all!<speaker2>[UNSAFE]"
example_title: "Unsafe Response"
---
# SafeMathBot for NLP tasks in math learning environments
This model is a fine-tuned version of GPT2-xl, trained on 8 Nvidia RTX 1080Ti GPUs and enhanced with conversation safety policies (e.g., threat, profanity, identity attack) using 3,000,000 math discussion posts by students and facilitators on Algebra Nation (https://www.mathnation.com/). SafeMathBot consists of 48 layers and over 1.5 billion parameters, consuming up to 6 gigabytes of disk space. Researchers can experiment with and finetune the model to help construct math conversational AI that can effectively avoid unsafe response generation. It was trained to allow researchers to control the safety of generated responses using the tags `[SAFE]` and `[UNSAFE]`.
### Here is how to use it with texts in HuggingFace
```python
# A list of special tokens the model was trained with
special_tokens_dict = {
'additional_special_tokens': [
'[SAFE]','[UNSAFE]', '[OK]', '[SELF_M]','[SELF_F]', '[SELF_N]',
'[PARTNER_M]', '[PARTNER_F]', '[PARTNER_N]',
'[ABOUT_M]', '[ABOUT_F]', '[ABOUT_N]', '<speaker1>', '<speaker2>'
],
'bos_token': '<bos>',
'eos_token': '<eos>',
}
from transformers import AutoTokenizer, AutoModelForCausalLM
math_bot_tokenizer = AutoTokenizer.from_pretrained('uf-aice-lab/SafeMathBot')
safe_math_bot = AutoModelForCausalLM.from_pretrained('uf-aice-lab/SafeMathBot')
text = "Replace me by any text you'd like."
encoded_input = math_bot_tokenizer(text, return_tensors='pt')
output = safe_math_bot(**encoded_input)
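# A hedged generation sketch: the prompt format follows the widget examples in this card,
# while the decoding settings below are illustrative assumptions, not the original setup.
prompt = "<bos><speaker1>Hello! My name is CL. Nice meeting y'all!<speaker2>[SAFE]"
input_ids = math_bot_tokenizer.encode(prompt, return_tensors='pt')
reply_ids = safe_math_bot.generate(
    input_ids, max_length=100, do_sample=True, top_p=0.9,
    pad_token_id=math_bot_tokenizer.eos_token_id,
)
print(math_bot_tokenizer.decode(reply_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))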
``` |
unicamp-dl/mMiniLM-L6-v2-en-msmarco | da8ace7c6264e4da63dcc3b31700b8c4760e9299 | 2022-01-05T21:30:07.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"pt",
"dataset:msmarco",
"arxiv:2108.13897",
"transformers",
"msmarco",
"miniLM",
"tensorflow",
"en",
"license:mit"
] | text-classification | false | unicamp-dl | null | unicamp-dl/mMiniLM-L6-v2-en-msmarco | 4 | null | transformers | 18,972 | ---
language: pt
license: mit
tags:
- msmarco
- miniLM
- pytorch
- tensorflow
- en
datasets:
- msmarco
widget:
- text: "Texto de exemplo em português"
inference: false
---
# mMiniLM-L6 Reranker finetuned on English MS MARCO
## Introduction
mMiniLM-L6-v2-en-msmarco is a multilingual miniLM-based model fine-tuned on the English MS MARCO passage dataset. Further information about the dataset or the translation method can be found in our [**mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset**](https://arxiv.org/abs/2108.13897) paper and in the [mMARCO](https://github.com/unicamp-dl/mMARCO) repository.
## Usage
```python
from transformers import AutoTokenizer, AutoModel
model_name = 'unicamp-dl/mMiniLM-L6-v2-en-msmarco'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
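The snippet above loads the bare encoder; for reranking, a query–passage pair is typically scored with a sequence-classification head. A sketch under that assumption (the pair below is illustrative):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = 'unicamp-dl/mMiniLM-L6-v2-en-msmarco'
tokenizer = AutoTokenizer.from_pretrained(model_name)
reranker = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer('how many calories in an egg',
                   'An egg contains roughly 70 to 80 calories.',
                   return_tensors='pt', truncation=True)
with torch.no_grad():
    print(reranker(**inputs).logits)  # relevance logits; check config.id2label for interpretation
```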
# Citation
If you use mMiniLM-L6-v2-en-msmarco, please cite:
@misc{bonifacio2021mmarco,
title={mMARCO: A Multilingual Version of the MS MARCO Passage Ranking Dataset},
author={Luiz Henrique Bonifacio and Vitor Jeronymo and Hugo Queiroz Abonizio and Israel Campiotti and Marzieh Fadaee and Roberto Lotufo and Rodrigo Nogueira},
year={2021},
eprint={2108.13897},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
|
unicamp-dl/mMiniLM-L6-v2-pt-msmarco-v1 | 60c56025e861380d6bd71c057f5baafd360dd12d | 2022-01-05T21:29:37.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"pt",
"dataset:msmarco",
"arxiv:2108.13897",
"transformers",
"msmarco",
"miniLM",
"tensorflow",
"pt-br",
"license:mit"
] | text-classification | false | unicamp-dl | null | unicamp-dl/mMiniLM-L6-v2-pt-msmarco-v1 | 4 | null | transformers | 18,973 | ---
language: pt
license: mit
tags:
- msmarco
- miniLM
- pytorch
- tensorflow
- pt
- pt-br
datasets:
- msmarco
widget:
- text: "Texto de exemplo em português"
inference: false
---
# mMiniLM-L6-v2 Reranker finetuned on mMARCO
## Introduction
mMiniLM-L6-v2-pt-msmarco-v1 is a multilingual miniLM-based model finetuned on a Portuguese-translated version of the MS MARCO passage dataset. In version v1, the Portuguese dataset was translated using the [Helsinki](https://huggingface.co/Helsinki-NLP) NMT model. Further information about the dataset or the translation method can be found in our [**mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset**](https://arxiv.org/abs/2108.13897) paper and in the [mMARCO](https://github.com/unicamp-dl/mMARCO) repository.
## Usage
```python
from transformers import AutoTokenizer, AutoModel
model_name = 'unicamp-dl/mMiniLM-L6-v2-pt-msmarco-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
# Citation
If you use mMiniLM-L6-v2-pt-msmarco-v1, please cite:
@misc{bonifacio2021mmarco,
title={mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset},
author={Luiz Henrique Bonifacio and Vitor Jeronymo and Hugo Queiroz Abonizio and Israel Campiotti and Marzieh Fadaee and Roberto Lotufo and Rodrigo Nogueira},
year={2021},
eprint={2108.13897},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
|
valhalla/awesome-model | 96fb117d7d8fe5b15409db0093354dc4728317ba | 2022-02-01T16:26:26.000Z | [
"pytorch",
"awesome",
"transformers"
] | null | false | valhalla | null | valhalla/awesome-model | 4 | null | transformers | 18,974 | Entry not found |
valurank/distilroberta-mbfc-bias | 3b59ac1064696b63560c4ec081a9861e3edec32b | 2022-06-08T20:34:29.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:other",
"model-index"
] | text-classification | false | valurank | null | valurank/distilroberta-mbfc-bias | 4 | null | transformers | 18,975 | ---
license: other
tags:
- generated_from_trainer
model-index:
- name: distilroberta-mbfc-bias
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-mbfc-bias
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the Proppy dataset, using political bias from mediabiasfactcheck.com as labels.
It achieves the following results on the evaluation set:
- Loss: 1.4130
- Acc: 0.6348
## Training and evaluation data
The training data used is the [proppy corpus](https://zenodo.org/record/3271522). Articles are labeled for political bias using the political bias of the source publication, as scored by mediabiasfactcheck.com. See [Proppy: Organizing the News Based on Their Propagandistic Content](https://propaganda.qcri.org/papers/elsarticle-template.pdf) for details.
To create a more balanced training set, common labels are downsampled to have a maximum of 2000 articles. The resulting label distribution in the training data is as follows:
```
extremeright 689
leastbiased 2000
left 783
leftcenter 2000
right 1260
rightcenter 1418
unknown 2000
```
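A minimal inference sketch (the headline is an illustrative input; the returned label names come from the model's own config):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="valurank/distilroberta-mbfc-bias")
print(classifier("The senator's new tax plan drew sharp criticism from both parties."))
```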
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 12345
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 16
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.9493 | 1.0 | 514 | 1.2765 | 0.4730 |
| 0.7376 | 2.0 | 1028 | 1.0003 | 0.5812 |
| 0.6702 | 3.0 | 1542 | 1.1294 | 0.5631 |
| 0.6161 | 4.0 | 2056 | 1.0439 | 0.6058 |
| 0.4934 | 5.0 | 2570 | 1.1196 | 0.6028 |
| 0.4558 | 6.0 | 3084 | 1.0993 | 0.5977 |
| 0.4717 | 7.0 | 3598 | 1.0308 | 0.6373 |
| 0.3961 | 8.0 | 4112 | 1.1291 | 0.6234 |
| 0.3829 | 9.0 | 4626 | 1.1554 | 0.6316 |
| 0.3442 | 10.0 | 5140 | 1.1548 | 0.6465 |
| 0.2505 | 11.0 | 5654 | 1.3605 | 0.6169 |
| 0.2105 | 12.0 | 6168 | 1.3310 | 0.6297 |
| 0.262 | 13.0 | 6682 | 1.2706 | 0.6383 |
| 0.2031 | 14.0 | 7196 | 1.3658 | 0.6378 |
| 0.2021 | 15.0 | 7710 | 1.4130 | 0.6348 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.7.1
- Datasets 1.11.0
- Tokenizers 0.10.3
|
vasudevgupta/bigbird-pegasus-large-bigpatent | 092182bbb12156e953b50aedf27320d0a755d716 | 2021-05-04T11:12:37.000Z | [
"pytorch",
"bigbird_pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | vasudevgupta | null | vasudevgupta/bigbird-pegasus-large-bigpatent | 4 | null | transformers | 18,976 | Moved here: https://huggingface.co/google/bigbird-pegasus-large-bigpatent |
vasudevgupta/biggan-mapping-model | 8094516857b1d1eadf0897af55c6ebe82edc863a | 2021-10-31T16:43:04.000Z | [
"pytorch",
"transformers"
] | null | false | vasudevgupta | null | vasudevgupta/biggan-mapping-model | 4 | null | transformers | 18,977 | Entry not found |
verloop/Hinglish-DistilBert-Class | 28df970f806c69128f4e33ca308eafcb696cd7f2 | 2021-05-20T08:59:21.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | verloop | null | verloop/Hinglish-DistilBert-Class | 4 | null | transformers | 18,978 | Entry not found |
vesteinn/IceBERT | ff91b3d4261a480eb38b4f5c16356d9083a31625 | 2021-12-17T07:40:29.000Z | [
"pytorch",
"roberta",
"fill-mask",
"is",
"transformers",
"icelandic",
"masked-lm",
"license:agpl-3.0",
"autotrain_compatible"
] | fill-mask | false | vesteinn | null | vesteinn/IceBERT | 4 | null | transformers | 18,979 | ---
language: is
widget:
- text: Má bjóða þér <mask> í kvöld?
- text: Forseti <mask> er ágæt.
- text: Súpan var <mask> á bragðið.
tags:
- roberta
- icelandic
- masked-lm
- pytorch
license: agpl-3.0
---
# IceBERT
IceBERT was trained with fairseq using the RoBERTa-base architecture. The training data used is shown in the table below.
| Dataset | Size | Tokens |
|------------------------------------------------------|---------|--------|
| Icelandic Gigaword Corpus v20.05 (IGC) | 8.2 GB | 1,388M |
| Icelandic Common Crawl Corpus (IC3) | 4.9 GB | 824M |
| Greynir News articles | 456 MB | 76M |
| Icelandic Sagas | 9 MB | 1.7M |
| Open Icelandic e-books (Rafbókavefurinn) | 14 MB | 2.6M |
| Data from the medical library of Landspitali | 33 MB | 5.2M |
| Student theses from Icelandic universities (Skemman) | 2.2 GB | 367M |
| Total | 15.8 GB | 2,664M |
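A fill-mask sketch using one of the widget examples above:
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="vesteinn/IceBERT")
for pred in unmasker("Má bjóða þér <mask> í kvöld?"):
    print(pred["token_str"], round(pred["score"], 3))
```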
|
vietnguyen39/Albert_vi_QA | 6e82f73674c869b4ad170d7150fe9b733b2885cc | 2021-11-07T01:49:17.000Z | [
"pytorch"
] | null | false | vietnguyen39 | null | vietnguyen39/Albert_vi_QA | 4 | null | null | 18,980 | Entry not found |
vijayv500/DialoGPT-small-Big-Bang-Theory-Series-Transcripts | 040bc5487be7217252bbb68cb85fc1759180f536 | 2021-10-29T07:39:27.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational",
"license:mit"
] | conversational | false | vijayv500 | null | vijayv500/DialoGPT-small-Big-Bang-Theory-Series-Transcripts | 4 | null | transformers | 18,981 | ---
tags:
- conversational
license: mit
---
## I fine-tuned DialoGPT-small model on "The Big Bang Theory" TV Series dataset from Kaggle (https://www.kaggle.com/mitramir5/the-big-bang-theory-series-transcript)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("vijayv500/DialoGPT-small-Big-Bang-Theory-Series-Transcripts")
model = AutoModelForCausalLM.from_pretrained("vijayv500/DialoGPT-small-Big-Bang-Theory-Series-Transcripts")
# Let's chat for 5 lines
for step in range(5):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
# generated a response while limiting the total chat history to 1000 tokens,
chat_history_ids = model.generate(
bot_input_ids, max_length=200,
pad_token_id=tokenizer.eos_token_id,
no_repeat_ngram_size=3,
do_sample=True,
top_k=100,
top_p=0.7,
temperature = 0.8
)
# pretty print last ouput tokens from bot
print("TBBT Bot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
``` |
vinhood/wineberto-italian-cased | 43d86243ad20ae121749805677435a4a34431adc | 2022-01-10T08:26:52.000Z | [
"pytorch",
"bert",
"fill-mask",
"it",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | vinhood | null | vinhood/wineberto-italian-cased | 4 | null | transformers | 18,982 | ---
language: it
license: mit
widget:
- text: "Con del pesce bisogna bere un bicchiere di vino [MASK]."
- text: "Con la carne c'è bisogno del vino [MASK]."
- text: "A tavola non può mancare del buon [MASK]."
---
# WineBERTo 🍷🥂
**wineberto-italian-cased** is a BERT model obtained by MLM adaptive-tuning [**bert-base-italian-xxl-cased**](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on Italian drink recipes and wine descriptions, approximately 77k sentences (3.3M words).
**Author:** Cristiano De Nobili ([@denocris](https://twitter.com/denocris) on Twitter, [LinkedIn](https://www.linkedin.com/in/cristiano-de-nobili/)) for [VINHOOD](https://www.vinhood.com/en/).
<p>
<img src="https://drive.google.com/uc?export=view&id=1dco9I9uzevP2V6oku1salIYcovUAeqWE" width="400"> </br>
</p>
# Perplexity
Test set: 14k sentences about wine.
| Model | Perplexity |
| ------ | ------ |
| wineberto-italian-cased | **2.29** |
| bert-base-italian-xxl-cased | 4.60 |
# Usage
```python
from transformers import AutoModel, AutoTokenizer
model_name = "vinhood/wineberto-italian-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
``` |
vionwinnie/albert-goodnotes-reddit | 666c505b6e56f0a0cae7ccda8b70886d81d9aaa6 | 2021-07-03T22:07:07.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | false | vionwinnie | null | vionwinnie/albert-goodnotes-reddit | 4 | null | transformers | 18,983 | Entry not found |
vishnun/distilgpt2-finetuned-distilgpt2-med_articles | c1946ffc55744cd6ddc45dd105ad371b5992e803 | 2021-08-19T10:23:17.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] | text-generation | false | vishnun | null | vishnun/distilgpt2-finetuned-distilgpt2-med_articles | 4 | null | transformers | 18,984 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
model_index:
- name: distilgpt2-finetuned-distilgpt2-med_articles
results:
- task:
name: Causal Language Modeling
type: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-distilgpt2-med_articles
This model is a fine-tuned version of [vishnun/distilgpt2-finetuned-distilgpt2-med_articles](https://huggingface.co/vishnun/distilgpt2-finetuned-distilgpt2-med_articles) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3171
## Model description
More information needed
## Intended uses & limitations
More information needed
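A generation sketch (the prompt and sampling settings are illustrative assumptions):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="vishnun/distilgpt2-finetuned-distilgpt2-med_articles")
out = generator("Recent studies on heart disease show", max_length=60, do_sample=True, top_p=0.95)
print(out[0]["generated_text"])
```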
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 65 | 3.3417 |
| No log | 2.0 | 130 | 3.3300 |
| No log | 3.0 | 195 | 3.3231 |
| No log | 4.0 | 260 | 3.3172 |
| No log | 5.0 | 325 | 3.3171 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
vitvit/XLMRFineTuneonEnglishNERFrozenBase | 7ecabb3e1c74ed4019eb6f081fc1ce4edd6c6e2a | 2021-08-31T10:40:00.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:mit",
"autotrain_compatible"
] | token-classification | false | vitvit | null | vitvit/XLMRFineTuneonEnglishNERFrozenBase | 4 | null | transformers | 18,985 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model_index:
- name: xlm-roberta-base-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metric:
name: Accuracy
type: accuracy
value: 0.90090188725725
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-ner
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4181
- Precision: 0.6464
- Recall: 0.4904
- F1: 0.5577
- Accuracy: 0.9009
- Note: the model likely just needs more training time
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.9474 | 1.0 | 2809 | 0.9105 | 0.0 | 0.0 | 0.0 | 0.7879 |
| 0.7728 | 2.0 | 5618 | 0.8002 | 0.0 | 0.0 | 0.0 | 0.7879 |
| 0.7209 | 3.0 | 8427 | 0.7329 | 0.1818 | 0.0002 | 0.0004 | 0.7881 |
| 0.6666 | 4.0 | 11236 | 0.6824 | 0.27 | 0.0050 | 0.0099 | 0.7903 |
| 0.6372 | 5.0 | 14045 | 0.6416 | 0.3302 | 0.0261 | 0.0484 | 0.7988 |
| 0.5982 | 6.0 | 16854 | 0.6084 | 0.4188 | 0.0686 | 0.1179 | 0.8128 |
| 0.5812 | 7.0 | 19663 | 0.5800 | 0.4799 | 0.1152 | 0.1858 | 0.8266 |
| 0.5684 | 8.0 | 22472 | 0.5569 | 0.5255 | 0.1647 | 0.2508 | 0.8380 |
| 0.5389 | 9.0 | 25281 | 0.5375 | 0.5564 | 0.2128 | 0.3078 | 0.8482 |
| 0.5307 | 10.0 | 28090 | 0.5205 | 0.5749 | 0.2550 | 0.3533 | 0.8567 |
| 0.5106 | 11.0 | 30899 | 0.5064 | 0.5916 | 0.2916 | 0.3906 | 0.8636 |
| 0.4921 | 12.0 | 33708 | 0.4938 | 0.6033 | 0.3236 | 0.4212 | 0.8698 |
| 0.4967 | 13.0 | 36517 | 0.4825 | 0.6106 | 0.3544 | 0.4485 | 0.8758 |
| 0.4707 | 14.0 | 39326 | 0.4733 | 0.6199 | 0.3753 | 0.4676 | 0.8798 |
| 0.4704 | 15.0 | 42135 | 0.4654 | 0.6246 | 0.3927 | 0.4823 | 0.8830 |
| 0.4654 | 16.0 | 44944 | 0.4574 | 0.6285 | 0.4159 | 0.5006 | 0.8871 |
| 0.4314 | 17.0 | 47753 | 0.4514 | 0.6321 | 0.4240 | 0.5075 | 0.8887 |
| 0.47 | 18.0 | 50562 | 0.4459 | 0.6358 | 0.4380 | 0.5187 | 0.8911 |
| 0.4486 | 19.0 | 53371 | 0.4410 | 0.6399 | 0.4480 | 0.5271 | 0.8929 |
| 0.4411 | 20.0 | 56180 | 0.4367 | 0.6413 | 0.4561 | 0.5331 | 0.8944 |
| 0.4333 | 21.0 | 58989 | 0.4328 | 0.6411 | 0.4644 | 0.5386 | 0.8959 |
| 0.4402 | 22.0 | 61798 | 0.4295 | 0.6425 | 0.4687 | 0.5420 | 0.8968 |
| 0.4287 | 23.0 | 64607 | 0.4268 | 0.6442 | 0.4735 | 0.5458 | 0.8978 |
| 0.4336 | 24.0 | 67416 | 0.4245 | 0.6441 | 0.4771 | 0.5482 | 0.8985 |
| 0.4243 | 25.0 | 70225 | 0.4224 | 0.6454 | 0.4817 | 0.5517 | 0.8993 |
| 0.4153 | 26.0 | 73034 | 0.4209 | 0.6469 | 0.4846 | 0.5541 | 0.8998 |
| 0.4286 | 27.0 | 75843 | 0.4197 | 0.6467 | 0.4865 | 0.5553 | 0.9002 |
| 0.436 | 28.0 | 78652 | 0.4188 | 0.6466 | 0.4887 | 0.5566 | 0.9006 |
| 0.427 | 29.0 | 81461 | 0.4183 | 0.6465 | 0.4900 | 0.5575 | 0.9008 |
| 0.4317 | 30.0 | 84270 | 0.4181 | 0.6464 | 0.4904 | 0.5577 | 0.9009 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
vitvit/XLMRFineTuneonEnglishNERFrozenBase30epochs | 546708101059d37f3e46c401a8347c2d9c9b51b8 | 2021-09-01T05:16:40.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:mit",
"autotrain_compatible"
] | token-classification | false | vitvit | null | vitvit/XLMRFineTuneonEnglishNERFrozenBase30epochs | 4 | null | transformers | 18,986 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model_index:
- name: xlm-roberta-base-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metric:
name: Accuracy
type: accuracy
value: 0.9463931352557172
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-ner
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2327
- Precision: 0.7363
- Recall: 0.7265
- F1: 0.7314
- Accuracy: 0.9464
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 300
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.9632 | 1.0 | 2809 | 0.9072 | 0.0 | 0.0 | 0.0 | 0.7879 |
| 0.7652 | 2.0 | 5618 | 0.7899 | 0.0 | 0.0 | 0.0 | 0.7880 |
| 0.7118 | 3.0 | 8427 | 0.7207 | 0.1429 | 0.0004 | 0.0007 | 0.7883 |
| 0.6548 | 4.0 | 11236 | 0.6674 | 0.2934 | 0.0107 | 0.0206 | 0.7929 |
| 0.622 | 5.0 | 14045 | 0.6234 | 0.3741 | 0.0460 | 0.0819 | 0.8064 |
| 0.5802 | 6.0 | 16854 | 0.5871 | 0.4617 | 0.1024 | 0.1677 | 0.8230 |
| 0.5605 | 7.0 | 19663 | 0.5557 | 0.5237 | 0.1611 | 0.2464 | 0.8380 |
| 0.5445 | 8.0 | 22472 | 0.5297 | 0.5631 | 0.2264 | 0.3230 | 0.8514 |
| 0.5138 | 9.0 | 25281 | 0.5075 | 0.5925 | 0.2896 | 0.3890 | 0.8634 |
| 0.5029 | 10.0 | 28090 | 0.4879 | 0.6077 | 0.3405 | 0.4364 | 0.8730 |
| 0.4813 | 11.0 | 30899 | 0.4716 | 0.6194 | 0.3822 | 0.4727 | 0.8807 |
| 0.4606 | 12.0 | 33708 | 0.4564 | 0.6306 | 0.4184 | 0.5030 | 0.8873 |
| 0.4616 | 13.0 | 36517 | 0.4431 | 0.6396 | 0.4482 | 0.5271 | 0.8929 |
| 0.4366 | 14.0 | 39326 | 0.4315 | 0.6441 | 0.4681 | 0.5422 | 0.8968 |
| 0.4334 | 15.0 | 42135 | 0.4218 | 0.6489 | 0.4830 | 0.5538 | 0.8995 |
| 0.4259 | 16.0 | 44944 | 0.4112 | 0.6485 | 0.5070 | 0.5691 | 0.9040 |
| 0.3912 | 17.0 | 47753 | 0.4031 | 0.6526 | 0.5159 | 0.5763 | 0.9058 |
| 0.4274 | 18.0 | 50562 | 0.3955 | 0.6557 | 0.5294 | 0.5858 | 0.9083 |
| 0.4034 | 19.0 | 53371 | 0.3885 | 0.6608 | 0.5407 | 0.5948 | 0.9106 |
| 0.3952 | 20.0 | 56180 | 0.3819 | 0.6620 | 0.5523 | 0.6022 | 0.9126 |
| 0.3862 | 21.0 | 58989 | 0.3755 | 0.6622 | 0.5652 | 0.6099 | 0.9148 |
| 0.3887 | 22.0 | 61798 | 0.3698 | 0.6662 | 0.5725 | 0.6158 | 0.9163 |
| 0.3764 | 23.0 | 64607 | 0.3648 | 0.6671 | 0.5788 | 0.6198 | 0.9176 |
| 0.3791 | 24.0 | 67416 | 0.3599 | 0.6686 | 0.5838 | 0.6234 | 0.9185 |
| 0.3684 | 25.0 | 70225 | 0.3551 | 0.6684 | 0.5926 | 0.6282 | 0.9201 |
| 0.3573 | 26.0 | 73034 | 0.3515 | 0.6717 | 0.5954 | 0.6312 | 0.9210 |
| 0.367 | 27.0 | 75843 | 0.3470 | 0.6711 | 0.6022 | 0.6348 | 0.9221 |
| 0.3714 | 28.0 | 78652 | 0.3433 | 0.6735 | 0.6085 | 0.6393 | 0.9233 |
| 0.3594 | 29.0 | 81461 | 0.3400 | 0.6738 | 0.6109 | 0.6408 | 0.9239 |
| 0.3626 | 30.0 | 84270 | 0.3366 | 0.6765 | 0.6159 | 0.6448 | 0.9249 |
| 0.3519 | 31.0 | 87079 | 0.3334 | 0.6765 | 0.6183 | 0.6461 | 0.9254 |
| 0.3591 | 32.0 | 89888 | 0.3305 | 0.6767 | 0.6220 | 0.6482 | 0.9263 |
| 0.3424 | 33.0 | 92697 | 0.3279 | 0.6785 | 0.6243 | 0.6502 | 0.9268 |
| 0.3514 | 34.0 | 95506 | 0.3249 | 0.6794 | 0.6290 | 0.6532 | 0.9277 |
| 0.3463 | 35.0 | 98315 | 0.3226 | 0.6806 | 0.6302 | 0.6544 | 0.9279 |
| 0.3516 | 36.0 | 101124 | 0.3200 | 0.6812 | 0.6327 | 0.6561 | 0.9283 |
| 0.3307 | 37.0 | 103933 | 0.3178 | 0.6809 | 0.6355 | 0.6574 | 0.9290 |
| 0.343 | 38.0 | 106742 | 0.3155 | 0.6830 | 0.6384 | 0.6599 | 0.9294 |
| 0.3415 | 39.0 | 109551 | 0.3134 | 0.6842 | 0.6416 | 0.6622 | 0.9299 |
| 0.3307 | 40.0 | 112360 | 0.3112 | 0.6843 | 0.6437 | 0.6634 | 0.9305 |
| 0.3428 | 41.0 | 115169 | 0.3093 | 0.6839 | 0.6455 | 0.6641 | 0.9310 |
| 0.3348 | 42.0 | 117978 | 0.3074 | 0.6849 | 0.6474 | 0.6656 | 0.9312 |
| 0.3282 | 43.0 | 120787 | 0.3057 | 0.6848 | 0.6486 | 0.6662 | 0.9316 |
| 0.3346 | 44.0 | 123596 | 0.3040 | 0.6860 | 0.6497 | 0.6674 | 0.9318 |
| 0.3349 | 45.0 | 126405 | 0.3023 | 0.6867 | 0.6524 | 0.6691 | 0.9324 |
| 0.3323 | 46.0 | 129214 | 0.3010 | 0.6895 | 0.6522 | 0.6703 | 0.9326 |
| 0.3258 | 47.0 | 132023 | 0.2992 | 0.6901 | 0.6549 | 0.6720 | 0.9331 |
| 0.3276 | 48.0 | 134832 | 0.2978 | 0.6915 | 0.6563 | 0.6734 | 0.9334 |
| 0.3345 | 49.0 | 137641 | 0.2962 | 0.6916 | 0.6585 | 0.6746 | 0.9337 |
| 0.3138 | 50.0 | 140450 | 0.2949 | 0.6926 | 0.6594 | 0.6756 | 0.9339 |
| 0.3285 | 51.0 | 143259 | 0.2935 | 0.6928 | 0.6601 | 0.6760 | 0.9340 |
| 0.3135 | 52.0 | 146068 | 0.2925 | 0.6931 | 0.6610 | 0.6767 | 0.9343 |
| 0.3206 | 53.0 | 148877 | 0.2914 | 0.6955 | 0.6624 | 0.6785 | 0.9346 |
| 0.3105 | 54.0 | 151686 | 0.2899 | 0.6953 | 0.6638 | 0.6792 | 0.9348 |
| 0.3045 | 55.0 | 154495 | 0.2887 | 0.6968 | 0.6651 | 0.6806 | 0.9352 |
| 0.3082 | 56.0 | 157304 | 0.2875 | 0.6985 | 0.6680 | 0.6829 | 0.9356 |
| 0.3229 | 57.0 | 160113 | 0.2865 | 0.6998 | 0.6688 | 0.6840 | 0.9357 |
| 0.3113 | 58.0 | 162922 | 0.2855 | 0.7012 | 0.6704 | 0.6855 | 0.9361 |
| 0.3047 | 59.0 | 165731 | 0.2843 | 0.7010 | 0.6713 | 0.6858 | 0.9362 |
| 0.3028 | 60.0 | 168540 | 0.2833 | 0.7024 | 0.6728 | 0.6873 | 0.9365 |
| 0.3082 | 61.0 | 171349 | 0.2826 | 0.7050 | 0.6740 | 0.6892 | 0.9367 |
| 0.3054 | 62.0 | 174158 | 0.2814 | 0.7038 | 0.6752 | 0.6892 | 0.9369 |
| 0.3124 | 63.0 | 176967 | 0.2804 | 0.7043 | 0.6770 | 0.6904 | 0.9372 |
| 0.3138 | 64.0 | 179776 | 0.2796 | 0.7034 | 0.6783 | 0.6906 | 0.9374 |
| 0.3022 | 65.0 | 182585 | 0.2787 | 0.7045 | 0.6785 | 0.6912 | 0.9374 |
| 0.3142 | 66.0 | 185394 | 0.2778 | 0.7059 | 0.6792 | 0.6923 | 0.9377 |
| 0.3043 | 67.0 | 188203 | 0.2770 | 0.7074 | 0.6808 | 0.6938 | 0.9381 |
| 0.3053 | 68.0 | 191012 | 0.2762 | 0.7069 | 0.6813 | 0.6938 | 0.9381 |
| 0.3147 | 69.0 | 193821 | 0.2752 | 0.7065 | 0.6832 | 0.6947 | 0.9383 |
| 0.2998 | 70.0 | 196630 | 0.2746 | 0.7086 | 0.6831 | 0.6956 | 0.9384 |
| 0.2951 | 71.0 | 199439 | 0.2739 | 0.7087 | 0.6831 | 0.6957 | 0.9385 |
| 0.3087 | 72.0 | 202248 | 0.2733 | 0.7089 | 0.6838 | 0.6961 | 0.9385 |
| 0.3059 | 73.0 | 205057 | 0.2723 | 0.7087 | 0.6867 | 0.6976 | 0.9389 |
| 0.2983 | 74.0 | 207866 | 0.2717 | 0.7080 | 0.6860 | 0.6968 | 0.9389 |
| 0.2994 | 75.0 | 210675 | 0.2710 | 0.7094 | 0.6867 | 0.6978 | 0.9390 |
| 0.3056 | 76.0 | 213484 | 0.2706 | 0.7090 | 0.6854 | 0.6970 | 0.9389 |
| 0.3118 | 77.0 | 216293 | 0.2698 | 0.7099 | 0.6869 | 0.6982 | 0.9391 |
| 0.296 | 78.0 | 219102 | 0.2691 | 0.7093 | 0.6886 | 0.6988 | 0.9393 |
| 0.3111 | 79.0 | 221911 | 0.2687 | 0.7111 | 0.6885 | 0.6996 | 0.9395 |
| 0.2961 | 80.0 | 224720 | 0.2678 | 0.7103 | 0.6895 | 0.6997 | 0.9397 |
| 0.3043 | 81.0 | 227529 | 0.2674 | 0.7111 | 0.6899 | 0.7003 | 0.9399 |
| 0.2924 | 82.0 | 230338 | 0.2667 | 0.7125 | 0.6920 | 0.7021 | 0.9401 |
| 0.2947 | 83.0 | 233147 | 0.2660 | 0.7107 | 0.6920 | 0.7012 | 0.9402 |
| 0.3035 | 84.0 | 235956 | 0.2656 | 0.7126 | 0.6922 | 0.7023 | 0.9402 |
| 0.3034 | 85.0 | 238765 | 0.2648 | 0.7133 | 0.6937 | 0.7034 | 0.9404 |
| 0.297 | 86.0 | 241574 | 0.2645 | 0.7143 | 0.6946 | 0.7043 | 0.9406 |
| 0.2943 | 87.0 | 244383 | 0.2639 | 0.7145 | 0.6955 | 0.7049 | 0.9407 |
| 0.2929 | 88.0 | 247192 | 0.2636 | 0.7125 | 0.6940 | 0.7031 | 0.9406 |
| 0.2974 | 89.0 | 250001 | 0.2628 | 0.7149 | 0.6975 | 0.7061 | 0.9410 |
| 0.2917 | 90.0 | 252810 | 0.2626 | 0.7143 | 0.6949 | 0.7045 | 0.9408 |
| 0.3031 | 91.0 | 255619 | 0.2620 | 0.7147 | 0.6958 | 0.7051 | 0.9409 |
| 0.3053 | 92.0 | 258428 | 0.2612 | 0.7149 | 0.6977 | 0.7062 | 0.9411 |
| 0.2921 | 93.0 | 261237 | 0.2610 | 0.7164 | 0.6969 | 0.7065 | 0.9411 |
| 0.2934 | 94.0 | 264046 | 0.2606 | 0.7160 | 0.6969 | 0.7063 | 0.9412 |
| 0.2863 | 95.0 | 266855 | 0.2601 | 0.7160 | 0.6973 | 0.7066 | 0.9412 |
| 0.2918 | 96.0 | 269664 | 0.2595 | 0.7167 | 0.6986 | 0.7076 | 0.9413 |
| 0.2926 | 97.0 | 272473 | 0.2591 | 0.7171 | 0.7004 | 0.7086 | 0.9415 |
| 0.2844 | 98.0 | 275282 | 0.2588 | 0.7171 | 0.6997 | 0.7083 | 0.9414 |
| 0.2924 | 99.0 | 278091 | 0.2585 | 0.7175 | 0.6986 | 0.7080 | 0.9414 |
| 0.2931 | 100.0 | 280900 | 0.2580 | 0.7178 | 0.6997 | 0.7086 | 0.9415 |
| 0.289 | 101.0 | 283709 | 0.2575 | 0.7184 | 0.7013 | 0.7098 | 0.9417 |
| 0.2892 | 102.0 | 286518 | 0.2570 | 0.7178 | 0.7024 | 0.7100 | 0.9418 |
| 0.285 | 103.0 | 289327 | 0.2567 | 0.7184 | 0.7013 | 0.7098 | 0.9417 |
| 0.2809 | 104.0 | 292136 | 0.2565 | 0.7192 | 0.7013 | 0.7102 | 0.9418 |
| 0.2802 | 105.0 | 294945 | 0.2561 | 0.7198 | 0.7014 | 0.7105 | 0.9420 |
| 0.2878 | 106.0 | 297754 | 0.2556 | 0.7192 | 0.7022 | 0.7106 | 0.9419 |
| 0.2853 | 107.0 | 300563 | 0.2554 | 0.7201 | 0.7017 | 0.7108 | 0.9420 |
| 0.2871 | 108.0 | 303372 | 0.2549 | 0.7203 | 0.7038 | 0.7119 | 0.9422 |
| 0.2904 | 109.0 | 306181 | 0.2545 | 0.7205 | 0.7043 | 0.7123 | 0.9422 |
| 0.2848 | 110.0 | 308990 | 0.2543 | 0.7203 | 0.7031 | 0.7116 | 0.9423 |
| 0.2933 | 111.0 | 311799 | 0.2538 | 0.7198 | 0.7046 | 0.7121 | 0.9423 |
| 0.2885 | 112.0 | 314608 | 0.2534 | 0.7198 | 0.7056 | 0.7126 | 0.9425 |
| 0.2813 | 113.0 | 317417 | 0.2532 | 0.7205 | 0.7058 | 0.7131 | 0.9425 |
| 0.2858 | 114.0 | 320226 | 0.2528 | 0.7202 | 0.7067 | 0.7134 | 0.9426 |
| 0.2871 | 115.0 | 323035 | 0.2525 | 0.7216 | 0.7075 | 0.7145 | 0.9427 |
| 0.2725 | 116.0 | 325844 | 0.2522 | 0.7220 | 0.7065 | 0.7142 | 0.9428 |
| 0.2887 | 117.0 | 328653 | 0.2519 | 0.7222 | 0.7068 | 0.7144 | 0.9428 |
| 0.2773 | 118.0 | 331462 | 0.2514 | 0.7211 | 0.7079 | 0.7145 | 0.9428 |
| 0.2831 | 119.0 | 334271 | 0.2513 | 0.7227 | 0.7078 | 0.7152 | 0.9429 |
| 0.2924 | 120.0 | 337080 | 0.2508 | 0.7239 | 0.7091 | 0.7164 | 0.9431 |
| 0.2944 | 121.0 | 339889 | 0.2507 | 0.7244 | 0.7090 | 0.7166 | 0.9431 |
| 0.2887 | 122.0 | 342698 | 0.2506 | 0.7248 | 0.7088 | 0.7167 | 0.9431 |
| 0.2826 | 123.0 | 345507 | 0.2501 | 0.7247 | 0.7100 | 0.7173 | 0.9432 |
| 0.2795 | 124.0 | 348316 | 0.2500 | 0.7247 | 0.7090 | 0.7167 | 0.9431 |
| 0.2855 | 125.0 | 351125 | 0.2496 | 0.7259 | 0.7104 | 0.7180 | 0.9433 |
| 0.2797 | 126.0 | 353934 | 0.2494 | 0.7244 | 0.7101 | 0.7171 | 0.9433 |
| 0.2804 | 127.0 | 356743 | 0.2491 | 0.7247 | 0.7097 | 0.7171 | 0.9433 |
| 0.286 | 128.0 | 359552 | 0.2488 | 0.7238 | 0.7096 | 0.7166 | 0.9433 |
| 0.2785 | 129.0 | 362361 | 0.2487 | 0.7237 | 0.7091 | 0.7163 | 0.9432 |
| 0.284 | 130.0 | 365170 | 0.2484 | 0.7238 | 0.7104 | 0.7170 | 0.9434 |
| 0.2757 | 131.0 | 367979 | 0.2480 | 0.725 | 0.7117 | 0.7183 | 0.9436 |
| 0.286 | 132.0 | 370788 | 0.2477 | 0.7248 | 0.7117 | 0.7182 | 0.9436 |
| 0.2874 | 133.0 | 373597 | 0.2476 | 0.7249 | 0.7115 | 0.7181 | 0.9436 |
| 0.2796 | 134.0 | 376406 | 0.2474 | 0.7249 | 0.7119 | 0.7183 | 0.9437 |
| 0.2851 | 135.0 | 379215 | 0.2471 | 0.7247 | 0.7115 | 0.7180 | 0.9437 |
| 0.2833 | 136.0 | 382024 | 0.2469 | 0.7255 | 0.7124 | 0.7189 | 0.9438 |
| 0.2859 | 137.0 | 384833 | 0.2466 | 0.7261 | 0.7126 | 0.7193 | 0.9438 |
| 0.2903 | 138.0 | 387642 | 0.2464 | 0.7261 | 0.7131 | 0.7195 | 0.9439 |
| 0.2836 | 139.0 | 390451 | 0.2462 | 0.7262 | 0.7129 | 0.7195 | 0.9439 |
| 0.282 | 140.0 | 393260 | 0.2461 | 0.7258 | 0.7119 | 0.7188 | 0.9438 |
| 0.2886 | 141.0 | 396069 | 0.2459 | 0.7254 | 0.7127 | 0.7190 | 0.9439 |
| 0.2759 | 142.0 | 398878 | 0.2457 | 0.7259 | 0.7130 | 0.7194 | 0.9439 |
| 0.2701 | 143.0 | 401687 | 0.2455 | 0.7267 | 0.7132 | 0.7199 | 0.9439 |
| 0.2872 | 144.0 | 404496 | 0.2452 | 0.7262 | 0.7135 | 0.7198 | 0.9440 |
| 0.2797 | 145.0 | 407305 | 0.2451 | 0.7264 | 0.7135 | 0.7199 | 0.9440 |
| 0.2798 | 146.0 | 410114 | 0.2449 | 0.7256 | 0.7130 | 0.7192 | 0.9440 |
| 0.2677 | 147.0 | 412923 | 0.2446 | 0.7264 | 0.7139 | 0.7201 | 0.9441 |
| 0.2713 | 148.0 | 415732 | 0.2445 | 0.7264 | 0.7129 | 0.7196 | 0.9440 |
| 0.2736 | 149.0 | 418541 | 0.2442 | 0.7268 | 0.7141 | 0.7204 | 0.9442 |
| 0.2807 | 150.0 | 421350 | 0.2440 | 0.7270 | 0.7143 | 0.7206 | 0.9442 |
| 0.2777 | 151.0 | 424159 | 0.2437 | 0.7269 | 0.7151 | 0.7210 | 0.9443 |
| 0.2703 | 152.0 | 426968 | 0.2437 | 0.7279 | 0.7153 | 0.7215 | 0.9444 |
| 0.2701 | 153.0 | 429777 | 0.2434 | 0.7277 | 0.7153 | 0.7214 | 0.9444 |
| 0.2693 | 154.0 | 432586 | 0.2433 | 0.7271 | 0.7148 | 0.7209 | 0.9443 |
| 0.2894 | 155.0 | 435395 | 0.2430 | 0.7275 | 0.7158 | 0.7216 | 0.9445 |
| 0.2855 | 156.0 | 438204 | 0.2430 | 0.7290 | 0.7165 | 0.7227 | 0.9446 |
| 0.2874 | 157.0 | 441013 | 0.2428 | 0.7292 | 0.7178 | 0.7235 | 0.9448 |
| 0.2745 | 158.0 | 443822 | 0.2427 | 0.7296 | 0.7171 | 0.7233 | 0.9448 |
| 0.2842 | 159.0 | 446631 | 0.2424 | 0.7294 | 0.7180 | 0.7236 | 0.9448 |
| 0.281 | 160.0 | 449440 | 0.2423 | 0.7293 | 0.7177 | 0.7234 | 0.9448 |
| 0.2655 | 161.0 | 452249 | 0.2421 | 0.7293 | 0.7183 | 0.7237 | 0.9448 |
| 0.2701 | 162.0 | 455058 | 0.2419 | 0.7287 | 0.7170 | 0.7228 | 0.9447 |
| 0.2787 | 163.0 | 457867 | 0.2418 | 0.7286 | 0.7170 | 0.7227 | 0.9446 |
| 0.2779 | 164.0 | 460676 | 0.2416 | 0.7287 | 0.7174 | 0.7230 | 0.9447 |
| 0.2926 | 165.0 | 463485 | 0.2416 | 0.7299 | 0.7172 | 0.7235 | 0.9447 |
| 0.2751 | 166.0 | 466294 | 0.2413 | 0.7298 | 0.7185 | 0.7241 | 0.9449 |
| 0.2756 | 167.0 | 469103 | 0.2412 | 0.7299 | 0.7185 | 0.7242 | 0.9449 |
| 0.2792 | 168.0 | 471912 | 0.2411 | 0.7301 | 0.7184 | 0.7242 | 0.9449 |
| 0.2722 | 169.0 | 474721 | 0.2409 | 0.7305 | 0.7190 | 0.7247 | 0.9450 |
| 0.2719 | 170.0 | 477530 | 0.2408 | 0.7306 | 0.7184 | 0.7245 | 0.9449 |
| 0.2736 | 171.0 | 480339 | 0.2407 | 0.7307 | 0.7188 | 0.7247 | 0.9450 |
| 0.2805 | 172.0 | 483148 | 0.2404 | 0.7311 | 0.7199 | 0.7255 | 0.9451 |
| 0.2762 | 173.0 | 485957 | 0.2402 | 0.7313 | 0.7205 | 0.7259 | 0.9452 |
| 0.2717 | 174.0 | 488766 | 0.2402 | 0.7316 | 0.7195 | 0.7255 | 0.9451 |
| 0.2657 | 175.0 | 491575 | 0.2400 | 0.7314 | 0.7195 | 0.7254 | 0.9451 |
| 0.276 | 176.0 | 494384 | 0.2398 | 0.7309 | 0.7197 | 0.7253 | 0.9452 |
| 0.2767 | 177.0 | 497193 | 0.2397 | 0.7314 | 0.7202 | 0.7258 | 0.9452 |
| 0.2672 | 178.0 | 500002 | 0.2396 | 0.7309 | 0.7197 | 0.7252 | 0.9451 |
| 0.2727 | 179.0 | 502811 | 0.2395 | 0.7316 | 0.7202 | 0.7258 | 0.9453 |
| 0.2746 | 180.0 | 505620 | 0.2394 | 0.7314 | 0.7202 | 0.7258 | 0.9453 |
| 0.2704 | 181.0 | 508429 | 0.2392 | 0.7312 | 0.7203 | 0.7257 | 0.9453 |
| 0.2927 | 182.0 | 511238 | 0.2392 | 0.7314 | 0.7199 | 0.7256 | 0.9453 |
| 0.2705 | 183.0 | 514047 | 0.2391 | 0.7316 | 0.7199 | 0.7257 | 0.9453 |
| 0.2668 | 184.0 | 516856 | 0.2390 | 0.7318 | 0.7198 | 0.7258 | 0.9453 |
| 0.2562 | 185.0 | 519665 | 0.2388 | 0.7307 | 0.7187 | 0.7246 | 0.9451 |
| 0.2642 | 186.0 | 522474 | 0.2387 | 0.7314 | 0.7197 | 0.7255 | 0.9452 |
| 0.2688 | 187.0 | 525283 | 0.2385 | 0.7316 | 0.7205 | 0.7260 | 0.9453 |
| 0.284 | 188.0 | 528092 | 0.2384 | 0.7313 | 0.7202 | 0.7257 | 0.9453 |
| 0.2656 | 189.0 | 530901 | 0.2383 | 0.7321 | 0.7210 | 0.7265 | 0.9454 |
| 0.2724 | 190.0 | 533710 | 0.2383 | 0.7324 | 0.7215 | 0.7269 | 0.9455 |
| 0.2815 | 191.0 | 536519 | 0.2382 | 0.7322 | 0.7209 | 0.7265 | 0.9454 |
| 0.2847 | 192.0 | 539328 | 0.2380 | 0.7320 | 0.7219 | 0.7269 | 0.9455 |
| 0.2686 | 193.0 | 542137 | 0.2379 | 0.7323 | 0.7223 | 0.7273 | 0.9456 |
| 0.2641 | 194.0 | 544946 | 0.2378 | 0.7326 | 0.7220 | 0.7272 | 0.9455 |
| 0.2871 | 195.0 | 547755 | 0.2377 | 0.7319 | 0.7220 | 0.7269 | 0.9455 |
| 0.2682 | 196.0 | 550564 | 0.2376 | 0.7331 | 0.7222 | 0.7276 | 0.9456 |
| 0.2772 | 197.0 | 553373 | 0.2376 | 0.7328 | 0.7216 | 0.7272 | 0.9456 |
| 0.2781 | 198.0 | 556182 | 0.2375 | 0.7330 | 0.7218 | 0.7273 | 0.9456 |
| 0.2612 | 199.0 | 558991 | 0.2373 | 0.7328 | 0.7223 | 0.7275 | 0.9456 |
| 0.2788 | 200.0 | 561800 | 0.2372 | 0.7332 | 0.7225 | 0.7278 | 0.9457 |
| 0.2797 | 201.0 | 564609 | 0.2371 | 0.7326 | 0.7223 | 0.7274 | 0.9456 |
| 0.2641 | 202.0 | 567418 | 0.2370 | 0.7331 | 0.7226 | 0.7278 | 0.9457 |
| 0.2742 | 203.0 | 570227 | 0.2369 | 0.7334 | 0.7225 | 0.7279 | 0.9457 |
| 0.2622 | 204.0 | 573036 | 0.2369 | 0.7339 | 0.7234 | 0.7286 | 0.9458 |
| 0.2732 | 205.0 | 575845 | 0.2367 | 0.7333 | 0.7232 | 0.7282 | 0.9457 |
| 0.264 | 206.0 | 578654 | 0.2366 | 0.7334 | 0.7230 | 0.7282 | 0.9457 |
| 0.27 | 207.0 | 581463 | 0.2366 | 0.7339 | 0.7231 | 0.7284 | 0.9457 |
| 0.2808 | 208.0 | 584272 | 0.2364 | 0.7331 | 0.7227 | 0.7279 | 0.9457 |
| 0.2881 | 209.0 | 587081 | 0.2364 | 0.7333 | 0.7228 | 0.7280 | 0.9457 |
| 0.2723 | 210.0 | 589890 | 0.2364 | 0.7335 | 0.7232 | 0.7283 | 0.9457 |
| 0.2696 | 211.0 | 592699 | 0.2362 | 0.7332 | 0.7236 | 0.7284 | 0.9458 |
| 0.2729 | 212.0 | 595508 | 0.2362 | 0.7334 | 0.7236 | 0.7284 | 0.9458 |
| 0.265 | 213.0 | 598317 | 0.2361 | 0.7332 | 0.7235 | 0.7283 | 0.9458 |
| 0.2816 | 214.0 | 601126 | 0.2360 | 0.7329 | 0.7236 | 0.7283 | 0.9458 |
| 0.273 | 215.0 | 603935 | 0.2359 | 0.7339 | 0.7241 | 0.7290 | 0.9458 |
| 0.2681 | 216.0 | 606744 | 0.2359 | 0.7338 | 0.7239 | 0.7288 | 0.9458 |
| 0.2648 | 217.0 | 609553 | 0.2358 | 0.7342 | 0.7242 | 0.7292 | 0.9459 |
| 0.269 | 218.0 | 612362 | 0.2357 | 0.7341 | 0.7237 | 0.7289 | 0.9458 |
| 0.277 | 219.0 | 615171 | 0.2357 | 0.7346 | 0.7239 | 0.7292 | 0.9458 |
| 0.266 | 220.0 | 617980 | 0.2356 | 0.7344 | 0.7246 | 0.7295 | 0.9460 |
| 0.2737 | 221.0 | 620789 | 0.2355 | 0.7345 | 0.7249 | 0.7297 | 0.9460 |
| 0.2779 | 222.0 | 623598 | 0.2356 | 0.7345 | 0.7239 | 0.7292 | 0.9459 |
| 0.2834 | 223.0 | 626407 | 0.2354 | 0.7349 | 0.7249 | 0.7298 | 0.9460 |
| 0.273 | 224.0 | 629216 | 0.2354 | 0.7349 | 0.7249 | 0.7299 | 0.9460 |
| 0.2691 | 225.0 | 632025 | 0.2353 | 0.7345 | 0.7251 | 0.7298 | 0.9460 |
| 0.2696 | 226.0 | 634834 | 0.2352 | 0.7345 | 0.7254 | 0.7299 | 0.9461 |
| 0.2643 | 227.0 | 637643 | 0.2352 | 0.7350 | 0.7244 | 0.7296 | 0.9460 |
| 0.2685 | 228.0 | 640452 | 0.2351 | 0.7349 | 0.7250 | 0.7300 | 0.9461 |
| 0.2818 | 229.0 | 643261 | 0.2350 | 0.7346 | 0.7250 | 0.7298 | 0.9461 |
| 0.2848 | 230.0 | 646070 | 0.2349 | 0.7349 | 0.7255 | 0.7302 | 0.9462 |
| 0.2781 | 231.0 | 648879 | 0.2349 | 0.7349 | 0.7258 | 0.7303 | 0.9462 |
| 0.2633 | 232.0 | 651688 | 0.2348 | 0.7350 | 0.7251 | 0.7301 | 0.9461 |
| 0.2694 | 233.0 | 654497 | 0.2348 | 0.7349 | 0.7252 | 0.7300 | 0.9461 |
| 0.2595 | 234.0 | 657306 | 0.2347 | 0.7348 | 0.7251 | 0.7299 | 0.9461 |
| 0.2732 | 235.0 | 660115 | 0.2346 | 0.7347 | 0.7249 | 0.7297 | 0.9461 |
| 0.2728 | 236.0 | 662924 | 0.2346 | 0.7346 | 0.7248 | 0.7296 | 0.9461 |
| 0.2673 | 237.0 | 665733 | 0.2345 | 0.7346 | 0.7249 | 0.7297 | 0.9461 |
| 0.2694 | 238.0 | 668542 | 0.2345 | 0.7351 | 0.7251 | 0.7301 | 0.9461 |
| 0.2721 | 239.0 | 671351 | 0.2345 | 0.7353 | 0.7256 | 0.7304 | 0.9462 |
| 0.264 | 240.0 | 674160 | 0.2344 | 0.7351 | 0.7255 | 0.7303 | 0.9462 |
| 0.267 | 241.0 | 676969 | 0.2343 | 0.7352 | 0.7256 | 0.7304 | 0.9462 |
| 0.2728 | 242.0 | 679778 | 0.2343 | 0.7355 | 0.7256 | 0.7305 | 0.9462 |
| 0.2697 | 243.0 | 682587 | 0.2343 | 0.7354 | 0.7255 | 0.7304 | 0.9462 |
| 0.2688 | 244.0 | 685396 | 0.2342 | 0.7352 | 0.7253 | 0.7302 | 0.9462 |
| 0.2741 | 245.0 | 688205 | 0.2341 | 0.7354 | 0.7259 | 0.7306 | 0.9462 |
| 0.2834 | 246.0 | 691014 | 0.2341 | 0.7353 | 0.7255 | 0.7304 | 0.9462 |
| 0.2706 | 247.0 | 693823 | 0.2341 | 0.7357 | 0.7262 | 0.7309 | 0.9462 |
| 0.2686 | 248.0 | 696632 | 0.2340 | 0.7355 | 0.7258 | 0.7306 | 0.9462 |
| 0.2655 | 249.0 | 699441 | 0.2340 | 0.7350 | 0.7253 | 0.7301 | 0.9462 |
| 0.274 | 250.0 | 702250 | 0.2340 | 0.7350 | 0.7251 | 0.7300 | 0.9462 |
| 0.2728 | 251.0 | 705059 | 0.2339 | 0.7349 | 0.7252 | 0.7300 | 0.9462 |
| 0.2696 | 252.0 | 707868 | 0.2338 | 0.7354 | 0.7259 | 0.7306 | 0.9463 |
| 0.2678 | 253.0 | 710677 | 0.2338 | 0.7350 | 0.7254 | 0.7302 | 0.9462 |
| 0.279 | 254.0 | 713486 | 0.2337 | 0.7350 | 0.7256 | 0.7303 | 0.9462 |
| 0.2523 | 255.0 | 716295 | 0.2337 | 0.7350 | 0.7255 | 0.7302 | 0.9462 |
| 0.2722 | 256.0 | 719104 | 0.2336 | 0.7351 | 0.7257 | 0.7304 | 0.9462 |
| 0.2794 | 257.0 | 721913 | 0.2335 | 0.7348 | 0.7254 | 0.7301 | 0.9462 |
| 0.279 | 258.0 | 724722 | 0.2335 | 0.7345 | 0.7253 | 0.7299 | 0.9462 |
| 0.2676 | 259.0 | 727531 | 0.2335 | 0.7351 | 0.7256 | 0.7303 | 0.9463 |
| 0.261 | 260.0 | 730340 | 0.2335 | 0.7354 | 0.7259 | 0.7306 | 0.9463 |
| 0.2674 | 261.0 | 733149 | 0.2334 | 0.7350 | 0.7258 | 0.7304 | 0.9463 |
| 0.2742 | 262.0 | 735958 | 0.2334 | 0.7351 | 0.7258 | 0.7304 | 0.9463 |
| 0.2592 | 263.0 | 738767 | 0.2333 | 0.7351 | 0.7259 | 0.7305 | 0.9463 |
| 0.2729 | 264.0 | 741576 | 0.2333 | 0.7351 | 0.7259 | 0.7305 | 0.9463 |
| 0.2775 | 265.0 | 744385 | 0.2333 | 0.7355 | 0.7262 | 0.7308 | 0.9463 |
| 0.2695 | 266.0 | 747194 | 0.2333 | 0.7356 | 0.7263 | 0.7309 | 0.9463 |
| 0.2674 | 267.0 | 750003 | 0.2332 | 0.7356 | 0.7263 | 0.7310 | 0.9463 |
| 0.2522 | 268.0 | 752812 | 0.2332 | 0.7354 | 0.7260 | 0.7307 | 0.9463 |
| 0.2621 | 269.0 | 755621 | 0.2332 | 0.7361 | 0.7265 | 0.7313 | 0.9464 |
| 0.2813 | 270.0 | 758430 | 0.2331 | 0.7360 | 0.7265 | 0.7313 | 0.9464 |
| 0.2629 | 271.0 | 761239 | 0.2331 | 0.7360 | 0.7265 | 0.7313 | 0.9464 |
| 0.2762 | 272.0 | 764048 | 0.2331 | 0.7359 | 0.7263 | 0.7311 | 0.9463 |
| 0.2599 | 273.0 | 766857 | 0.2331 | 0.7362 | 0.7264 | 0.7313 | 0.9464 |
| 0.2795 | 274.0 | 769666 | 0.2331 | 0.7362 | 0.7264 | 0.7313 | 0.9464 |
| 0.2628 | 275.0 | 772475 | 0.2330 | 0.7360 | 0.7260 | 0.7309 | 0.9463 |
| 0.2762 | 276.0 | 775284 | 0.2330 | 0.7360 | 0.7261 | 0.7310 | 0.9463 |
| 0.2657 | 277.0 | 778093 | 0.2330 | 0.7361 | 0.7261 | 0.7310 | 0.9463 |
| 0.2673 | 278.0 | 780902 | 0.2330 | 0.7360 | 0.7259 | 0.7309 | 0.9463 |
| 0.2718 | 279.0 | 783711 | 0.2330 | 0.7361 | 0.7261 | 0.7311 | 0.9464 |
| 0.2631 | 280.0 | 786520 | 0.2329 | 0.7356 | 0.7257 | 0.7306 | 0.9463 |
| 0.2744 | 281.0 | 789329 | 0.2329 | 0.7359 | 0.7260 | 0.7309 | 0.9463 |
| 0.2848 | 282.0 | 792138 | 0.2329 | 0.7360 | 0.7261 | 0.7310 | 0.9464 |
| 0.271 | 283.0 | 794947 | 0.2329 | 0.7359 | 0.7262 | 0.7310 | 0.9464 |
| 0.262 | 284.0 | 797756 | 0.2328 | 0.7359 | 0.7262 | 0.7310 | 0.9464 |
| 0.2622 | 285.0 | 800565 | 0.2328 | 0.7359 | 0.7263 | 0.7310 | 0.9464 |
| 0.2679 | 286.0 | 803374 | 0.2328 | 0.7359 | 0.7263 | 0.7310 | 0.9464 |
| 0.2616 | 287.0 | 806183 | 0.2328 | 0.7359 | 0.7263 | 0.7311 | 0.9464 |
| 0.2721 | 288.0 | 808992 | 0.2328 | 0.7360 | 0.7264 | 0.7312 | 0.9464 |
| 0.2693 | 289.0 | 811801 | 0.2328 | 0.7361 | 0.7264 | 0.7312 | 0.9464 |
| 0.2645 | 290.0 | 814610 | 0.2328 | 0.7361 | 0.7264 | 0.7312 | 0.9464 |
| 0.2728 | 291.0 | 817419 | 0.2328 | 0.7361 | 0.7264 | 0.7312 | 0.9464 |
| 0.2637 | 292.0 | 820228 | 0.2327 | 0.7362 | 0.7264 | 0.7313 | 0.9464 |
| 0.2713 | 293.0 | 823037 | 0.2327 | 0.7363 | 0.7265 | 0.7314 | 0.9464 |
| 0.2623 | 294.0 | 825846 | 0.2327 | 0.7363 | 0.7265 | 0.7314 | 0.9464 |
| 0.2667 | 295.0 | 828655 | 0.2327 | 0.7363 | 0.7265 | 0.7314 | 0.9464 |
| 0.2679 | 296.0 | 831464 | 0.2327 | 0.7363 | 0.7265 | 0.7314 | 0.9464 |
| 0.2595 | 297.0 | 834273 | 0.2327 | 0.7363 | 0.7265 | 0.7314 | 0.9464 |
| 0.2609 | 298.0 | 837082 | 0.2327 | 0.7363 | 0.7265 | 0.7314 | 0.9464 |
| 0.2616 | 299.0 | 839891 | 0.2327 | 0.7363 | 0.7265 | 0.7314 | 0.9464 |
| 0.2713 | 300.0 | 842700 | 0.2327 | 0.7363 | 0.7265 | 0.7314 | 0.9464 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
vitvit/xlm-roberta-base-finetuned-ner | b7aaf8726a4a00be1acb9227cf68f5b524ce24bf | 2021-08-31T08:54:58.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:mit",
"autotrain_compatible"
] | token-classification | false | vitvit | null | vitvit/xlm-roberta-base-finetuned-ner | 4 | null | transformers | 18,987 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model_index:
- name: xlm-roberta-base-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metric:
name: Accuracy
type: accuracy
value: 0.9882987313361343
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-ner
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1202
- Precision: 0.9447
- Recall: 0.9536
- F1: 0.9492
- Accuracy: 0.9883
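A minimal inference sketch using the standard `transformers` token-classification pipeline (assuming the tokenizer is bundled with this checkpoint):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="vitvit/xlm-roberta-base-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into full entity spans
)

print(ner("Angela Merkel visited the European Parliament in Brussels."))
```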
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1023 | 1.0 | 2809 | 0.0724 | 0.9338 | 0.9363 | 0.9351 | 0.9850 |
| 0.0596 | 2.0 | 5618 | 0.0760 | 0.9295 | 0.9359 | 0.9327 | 0.9848 |
| 0.0406 | 3.0 | 8427 | 0.0740 | 0.9346 | 0.9410 | 0.9378 | 0.9863 |
| 0.0365 | 4.0 | 11236 | 0.0676 | 0.9368 | 0.9490 | 0.9428 | 0.9870 |
| 0.0279 | 5.0 | 14045 | 0.0737 | 0.9453 | 0.9476 | 0.9464 | 0.9877 |
| 0.0147 | 6.0 | 16854 | 0.0812 | 0.9413 | 0.9515 | 0.9464 | 0.9878 |
| 0.0138 | 7.0 | 19663 | 0.0893 | 0.9425 | 0.9525 | 0.9475 | 0.9876 |
| 0.0158 | 8.0 | 22472 | 0.1066 | 0.9362 | 0.9464 | 0.9412 | 0.9862 |
| 0.0092 | 9.0 | 25281 | 0.1026 | 0.9391 | 0.9511 | 0.9451 | 0.9869 |
| 0.0073 | 10.0 | 28090 | 0.1001 | 0.9442 | 0.9503 | 0.9472 | 0.9879 |
| 0.0069 | 11.0 | 30899 | 0.1103 | 0.9399 | 0.9511 | 0.9455 | 0.9871 |
| 0.0073 | 12.0 | 33708 | 0.1170 | 0.9383 | 0.9481 | 0.9432 | 0.9876 |
| 0.0054 | 13.0 | 36517 | 0.1068 | 0.9407 | 0.9491 | 0.9448 | 0.9875 |
| 0.0048 | 14.0 | 39326 | 0.1096 | 0.9438 | 0.9518 | 0.9477 | 0.9879 |
| 0.0042 | 15.0 | 42135 | 0.1187 | 0.9442 | 0.9523 | 0.9483 | 0.9884 |
| 0.0037 | 16.0 | 44944 | 0.1162 | 0.9384 | 0.9521 | 0.9452 | 0.9875 |
| 0.0039 | 17.0 | 47753 | 0.1046 | 0.9435 | 0.9477 | 0.9456 | 0.9878 |
| 0.0025 | 18.0 | 50562 | 0.1063 | 0.9501 | 0.9549 | 0.9525 | 0.9889 |
| 0.0021 | 19.0 | 53371 | 0.0992 | 0.9533 | 0.9572 | 0.9553 | 0.9895 |
| 0.0019 | 20.0 | 56180 | 0.1216 | 0.9404 | 0.9524 | 0.9464 | 0.9876 |
| 0.0021 | 21.0 | 58989 | 0.1080 | 0.9430 | 0.9478 | 0.9454 | 0.9880 |
| 0.0032 | 22.0 | 61798 | 0.1109 | 0.9436 | 0.9512 | 0.9474 | 0.9881 |
| 0.0115 | 23.0 | 64607 | 0.1161 | 0.9412 | 0.9475 | 0.9443 | 0.9874 |
| 0.001 | 24.0 | 67416 | 0.1216 | 0.9446 | 0.9518 | 0.9481 | 0.9882 |
| 0.0004 | 25.0 | 70225 | 0.1145 | 0.9478 | 0.9527 | 0.9503 | 0.9888 |
| 0.0005 | 26.0 | 73034 | 0.1217 | 0.9479 | 0.9531 | 0.9505 | 0.9887 |
| 0.0007 | 27.0 | 75843 | 0.1199 | 0.9452 | 0.9561 | 0.9506 | 0.9887 |
| 0.0053 | 28.0 | 78652 | 0.1187 | 0.9440 | 0.9510 | 0.9475 | 0.9881 |
| 0.0014 | 29.0 | 81461 | 0.1207 | 0.9461 | 0.9540 | 0.9500 | 0.9884 |
| 0.0023 | 30.0 | 84270 | 0.1202 | 0.9447 | 0.9536 | 0.9492 | 0.9883 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
voidful/asr_hubert_cluster_bart_base | 45970cf5aeed8b997de7f6f66805ec263979d762 | 2021-07-19T12:21:00.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:librispeech",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"asr",
"hubert",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | voidful | null | voidful/asr_hubert_cluster_bart_base | 4 | null | transformers | 18,988 | ---
language: en
datasets:
- librispeech
tags:
- audio
- automatic-speech-recognition
- speech
- asr
- hubert
license: apache-2.0
metrics:
- wer
- cer
---
# voidful/asr_hubert_cluster_bart_base
## Usage
Download the required files (k-means centroids and a sample audio clip):
```shell
wget https://raw.githubusercontent.com/voidful/hubert-cluster-code/main/km_feat_100_layer_20
wget https://cdn-media.huggingface.co/speech_samples/sample1.flac
```
HuBERT k-means quantization code:
```python
import joblib
import torch
from transformers import Wav2Vec2FeatureExtractor, HubertModel
import soundfile as sf


class HubertCode(object):
    """Extract frame-level HuBERT features and map them to k-means cluster ids."""

    def __init__(self, hubert_model, km_path, km_layer):
        self.processor = Wav2Vec2FeatureExtractor.from_pretrained(hubert_model)
        self.model = HubertModel.from_pretrained(hubert_model)
        # Pre-trained k-means model whose cluster ids act as discrete "voice tokens".
        self.km_model = joblib.load(km_path)
        self.km_layer = km_layer
        # Cache the cluster centers (and their squared norms) for fast distance computation.
        self.C_np = self.km_model.cluster_centers_.transpose()
        self.Cnorm_np = (self.C_np ** 2).sum(0, keepdims=True)
        self.C = torch.from_numpy(self.C_np)
        self.Cnorm = torch.from_numpy(self.Cnorm_np)
        if torch.cuda.is_available():
            self.C = self.C.cuda()
            self.Cnorm = self.Cnorm.cuda()
            self.model = self.model.cuda()

    def __call__(self, filepath, sampling_rate=None):
        speech, sr = sf.read(filepath)
        input_values = self.processor(speech, return_tensors="pt", sampling_rate=sr).input_values
        if torch.cuda.is_available():
            input_values = input_values.cuda()
        # Use the hidden states of the chosen HuBERT layer as frame features.
        hidden_states = self.model(input_values, output_hidden_states=True).hidden_states
        x = hidden_states[self.km_layer].squeeze()
        # Squared Euclidean distance of every frame to every cluster center.
        dist = (
            x.pow(2).sum(1, keepdim=True)
            - 2 * torch.matmul(x, self.C)
            + self.Cnorm
        )
        # Return the nearest cluster id per frame.
        return dist.argmin(dim=1).cpu().numpy()
```
Convert an audio file into cluster-code input:
```python
hc = HubertCode("facebook/hubert-large-ll60k", './km_feat_100_layer_20', 20)
voice_ids = hc('./sample1.flac')
```
Load the BART model:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("voidful/asr_hubert_cluster_bart_base")
model = AutoModelForSeq2SeqLM.from_pretrained("voidful/asr_hubert_cluster_bart_base")
```
Generate the transcription:
```python
gen_output = model.generate(input_ids=tokenizer("".join([f":vtok{i}:" for i in voice_ids]),return_tensors='pt').input_ids,max_length=1024)
print(tokenizer.decode(gen_output[0], skip_special_tokens=True))
```
## Result
`going along slushy country roads and speaking to damp audience in drifty school rooms day after day for a fortnight he'll have to put in an appearance at some place of worship on sunday morning and he can come to ask immediately afterwards`
|
vovaf709/bert_classifier | 5881f7431adbc1e1828ecfd802147baf82c56e5e | 2021-12-17T16:32:36.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | vovaf709 | null | vovaf709/bert_classifier | 4 | null | transformers | 18,989 | Entry not found |
w11wo/sundanese-gpt2-base-emotion-classifier | 6a1fcca05096085980f0b9ea9b2bb3a6b4294217 | 2022-02-26T13:15:23.000Z | [
"pytorch",
"tf",
"gpt2",
"text-classification",
"su",
"transformers",
"sundanese-gpt2-base-emotion-classifier",
"license:mit"
] | text-classification | false | w11wo | null | w11wo/sundanese-gpt2-base-emotion-classifier | 4 | null | transformers | 18,990 | ---
language: su
tags:
- sundanese-gpt2-base-emotion-classifier
license: mit
widget:
- text: "Wah, éta gélo, keren pisan!"
---
## Sundanese GPT-2 Base Emotion Classifier
Sundanese GPT-2 Base Emotion Classifier is an emotion-text-classification model based on the [OpenAI GPT-2](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) model. The model was originally the pre-trained [Sundanese GPT-2 Base](https://hf.co/w11wo/sundanese-gpt2-base) model, which is then fine-tuned on the [Sundanese Twitter dataset](https://github.com/virgantara/sundanese-twitter-dataset), consisting of Sundanese tweets.
10% of the dataset is kept for evaluation purposes. After training, the model achieved an evaluation accuracy of 94.84% and F1-macro of 94.75%.
Hugging Face's `Trainer` class from the [Transformers](https://huggingface.co/transformers) library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with other frameworks nonetheless.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
| ---------------------------------------- | ------- | ---------- | ------------------------------- |
| `sundanese-gpt2-base-emotion-classifier` | 124M | GPT-2 Base | Sundanese Twitter dataset |
## Evaluation Results
The model was trained for 10 epochs and the best model was loaded at the end.
| Epoch | Training Loss | Validation Loss | Accuracy | F1 | Precision | Recall |
| ----- | ------------- | --------------- | -------- | -------- | --------- | -------- |
| 1 | 0.819200 | 0.331463 | 0.880952 | 0.878694 | 0.883126 | 0.879304 |
| 2 | 0.140300 | 0.309764 | 0.900794 | 0.899025 | 0.906819 | 0.898632 |
| 3 | 0.018600 | 0.324491 | 0.948413 | 0.947525 | 0.948037 | 0.948153 |
| 4 | 0.004500 | 0.335100 | 0.932540 | 0.931648 | 0.934629 | 0.931617 |
| 5 | 0.000200 | 0.392145 | 0.932540 | 0.932281 | 0.935075 | 0.932527 |
| 6 | 0.000000 | 0.371689 | 0.932540 | 0.931760 | 0.934925 | 0.931840 |
| 7 | 0.000000 | 0.368086 | 0.944444 | 0.943652 | 0.945875 | 0.943843 |
| 8 | 0.000000 | 0.367550 | 0.944444 | 0.943652 | 0.945875 | 0.943843 |
| 9 | 0.000000 | 0.368033 | 0.944444 | 0.943652 | 0.945875 | 0.943843 |
| 10 | 0.000000 | 0.368391 | 0.944444 | 0.943652 | 0.945875 | 0.943843 |
## How to Use
### As Text Classifier
```python
from transformers import pipeline
pretrained_name = "sundanese-gpt2-base-emotion-classifier"
nlp = pipeline(
"sentiment-analysis",
model=pretrained_name,
tokenizer=pretrained_name
)
nlp("Wah, éta gélo, keren pisan!")
```
## Disclaimer
Do consider the biases which come from both the pre-trained GPT-2 model and the Sundanese Twitter dataset, as they may be carried over into the results of this model.
## Author
Sundanese GPT-2 Base Emotion Classifier was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access.
## Citation Information
```bib
@article{rs-907893,
author = {Wongso, Wilson
and Lucky, Henry
and Suhartono, Derwin},
journal = {Journal of Big Data},
year = {2022},
month = {Feb},
day = {26},
abstract = {The Sundanese language has over 32 million speakers worldwide, but the language has reaped little to no benefits from the recent advances in natural language understanding. Like other low-resource languages, the only alternative is to fine-tune existing multilingual models. In this paper, we pre-trained three monolingual Transformer-based language models on Sundanese data. When evaluated on a downstream text classification task, we found that most of our monolingual models outperformed larger multilingual models despite the smaller overall pre-training data. In the subsequent analyses, our models benefited strongly from the Sundanese pre-training corpus size and do not exhibit socially biased behavior. We released our models for other researchers and practitioners to use.},
issn = {2693-5015},
doi = {10.21203/rs.3.rs-907893/v1},
url = {https://doi.org/10.21203/rs.3.rs-907893/v1}
}
``` |
w11wo/wav2vec2-xls-r-300m-zh-HK-v2 | ce79a8ce8e224a16f92bab7891442eac24f46b78 | 2022-03-23T18:27:41.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"zh-HK",
"dataset:common_voice",
"arxiv:2111.09296",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | w11wo | null | w11wo/wav2vec2-xls-r-300m-zh-HK-v2 | 4 | null | transformers | 18,991 | ---
language: zh-HK
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- common_voice
model-index:
- name: Wav2Vec2 XLS-R 300M Cantonese (zh-HK)
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice
type: common_voice
args: zh-HK
metrics:
- name: Test CER
type: cer
value: 31.73
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: zh-HK
metrics:
- name: Test CER
type: cer
value: 23.11
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: zh-HK
metrics:
- name: Test CER
type: cer
value: 23.02
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: zh-HK
metrics:
- name: Test CER
type: cer
value: 56.6
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: zh-HK
metrics:
- name: Test CER
type: cer
value: 55.11
---
# Wav2Vec2 XLS-R 300M Cantonese (zh-HK)
Wav2Vec2 XLS-R 300M Cantonese (zh-HK) is an automatic speech recognition model based on the [XLS-R](https://arxiv.org/abs/2111.09296) architecture. This model is a fine-tuned version of [Wav2Vec2-XLS-R-300M](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the `zh-HK` subset of the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
This model was trained using HuggingFace's PyTorch framework and is part of the [Robust Speech Challenge Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614) organized by HuggingFace. All training was done on a Tesla V100, sponsored by OVH.
All necessary scripts used for training could be found in the [Files and versions](https://huggingface.co/w11wo/wav2vec2-xls-r-300m-zh-HK-v2/tree/main) tab, as well as the [Training metrics](https://huggingface.co/w11wo/wav2vec2-xls-r-300m-zh-HK-v2/tensorboard) logged via Tensorboard.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
| ------------------------------ | ------- | ----- | ------------------------------- |
| `wav2vec2-xls-r-300m-zh-HK-v2` | 300M | XLS-R | `Common Voice zh-HK` Dataset |
## Evaluation Results
The model achieves the following results on evaluation:
| Dataset | Loss | CER |
| -------------------------------- | ------ | ------ |
| `Common Voice` | 0.8089 | 31.73% |
| `Common Voice 7` | N/A | 23.11% |
| `Common Voice 8` | N/A | 23.02% |
| `Robust Speech Event - Dev Data` | N/A | 56.60% |
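A minimal transcription sketch (assuming mono 16 kHz input audio and that this repository ships a matching processor; the file path is a placeholder):
```python
import torch
import soundfile as sf
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "w11wo/wav2vec2-xls-r-300m-zh-HK-v2"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, sampling_rate = sf.read("sample.wav")  # placeholder path, 16 kHz mono audio
inputs = processor(speech, sampling_rate=sampling_rate, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```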
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- `learning_rate`: 0.0001
- `train_batch_size`: 8
- `eval_batch_size`: 8
- `seed`: 42
- `gradient_accumulation_steps`: 4
- `total_train_batch_size`: 32
- `optimizer`: Adam with `betas=(0.9, 0.999)` and `epsilon=1e-08`
- `lr_scheduler_type`: linear
- `lr_scheduler_warmup_steps`: 2000
- `num_epochs`: 100.0
- `mixed_precision_training`: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
| :-----------: | :---: | :---: | :-------------: | :----: | :----: |
| 69.8341 | 1.34 | 500 | 80.0722 | 1.0 | 1.0 |
| 6.6418 | 2.68 | 1000 | 6.6346 | 1.0 | 1.0 |
| 6.2419 | 4.02 | 1500 | 6.2909 | 1.0 | 1.0 |
| 6.0813 | 5.36 | 2000 | 6.1150 | 1.0 | 1.0 |
| 5.9677 | 6.7 | 2500 | 6.0301 | 1.1386 | 1.0028 |
| 5.9296 | 8.04 | 3000 | 5.8975 | 1.2113 | 1.0058 |
| 5.6434 | 9.38 | 3500 | 5.5404 | 2.1624 | 1.0171 |
| 5.1974 | 10.72 | 4000 | 4.5440 | 2.1702 | 0.9366 |
| 4.3601 | 12.06 | 4500 | 3.3839 | 2.2464 | 0.8998 |
| 3.9321 | 13.4 | 5000 | 2.8785 | 2.3097 | 0.8400 |
| 3.6462 | 14.74 | 5500 | 2.5108 | 1.9623 | 0.6663 |
| 3.5156 | 16.09 | 6000 | 2.2790 | 1.6479 | 0.5706 |
| 3.32 | 17.43 | 6500 | 2.1450 | 1.8337 | 0.6244 |
| 3.1918 | 18.77 | 7000 | 1.8536 | 1.9394 | 0.6017 |
| 3.1139 | 20.11 | 7500 | 1.7205 | 1.9112 | 0.5638 |
| 2.8995 | 21.45 | 8000 | 1.5478 | 1.0624 | 0.3250 |
| 2.7572 | 22.79 | 8500 | 1.4068 | 1.1412 | 0.3367 |
| 2.6881 | 24.13 | 9000 | 1.3312 | 2.0100 | 0.5683 |
| 2.5993 | 25.47 | 9500 | 1.2553 | 2.0039 | 0.6450 |
| 2.5304 | 26.81 | 10000 | 1.2422 | 2.0394 | 0.5789 |
| 2.4352 | 28.15 | 10500 | 1.1582 | 1.9970 | 0.5507 |
| 2.3795 | 29.49 | 11000 | 1.1160 | 1.8255 | 0.4844 |
| 2.3287 | 30.83 | 11500 | 1.0775 | 1.4123 | 0.3780 |
| 2.2622 | 32.17 | 12000 | 1.0704 | 1.7445 | 0.4894 |
| 2.2225 | 33.51 | 12500 | 1.0272 | 1.7237 | 0.5058 |
| 2.1843 | 34.85 | 13000 | 0.9756 | 1.8042 | 0.5028 |
| 2.1 | 36.19 | 13500 | 0.9527 | 1.8909 | 0.6055 |
| 2.0741 | 37.53 | 14000 | 0.9418 | 1.9026 | 0.5880 |
| 2.0179 | 38.87 | 14500 | 0.9363 | 1.7977 | 0.5246 |
| 2.0615 | 40.21 | 15000 | 0.9635 | 1.8112 | 0.5599 |
| 1.9448 | 41.55 | 15500 | 0.9249 | 1.7250 | 0.4914 |
| 1.8966 | 42.89 | 16000 | 0.9023 | 1.5829 | 0.4319 |
| 1.8662 | 44.24 | 16500 | 0.9002 | 1.4833 | 0.4230 |
| 1.8136 | 45.58 | 17000 | 0.9076 | 1.1828 | 0.2987 |
| 1.7908 | 46.92 | 17500 | 0.8774 | 1.5773 | 0.4258 |
| 1.7354 | 48.26 | 18000 | 0.8727 | 1.5037 | 0.4024 |
| 1.6739 | 49.6 | 18500 | 0.8636 | 1.1239 | 0.2789 |
| 1.6457 | 50.94 | 19000 | 0.8516 | 1.2269 | 0.3104 |
| 1.5847 | 52.28 | 19500 | 0.8399 | 1.3309 | 0.3360 |
| 1.5971 | 53.62 | 20000 | 0.8441 | 1.3153 | 0.3335 |
| 1.602 | 54.96 | 20500 | 0.8590 | 1.2932 | 0.3433 |
| 1.5063 | 56.3 | 21000 | 0.8334 | 1.1312 | 0.2875 |
| 1.4631 | 57.64 | 21500 | 0.8474 | 1.1698 | 0.2999 |
| 1.4997 | 58.98 | 22000 | 0.8638 | 1.4279 | 0.3854 |
| 1.4301 | 60.32 | 22500 | 0.8550 | 1.2737 | 0.3300 |
| 1.3798 | 61.66 | 23000 | 0.8266 | 1.1802 | 0.2934 |
| 1.3454 | 63.0 | 23500 | 0.8235 | 1.3816 | 0.3711 |
| 1.3678 | 64.34 | 24000 | 0.8550 | 1.6427 | 0.5035 |
| 1.3761 | 65.68 | 24500 | 0.8510 | 1.6709 | 0.4907 |
| 1.2668 | 67.02 | 25000 | 0.8515 | 1.5842 | 0.4505 |
| 1.2835 | 68.36 | 25500 | 0.8283 | 1.5353 | 0.4221 |
| 1.2961 | 69.7 | 26000 | 0.8339 | 1.5743 | 0.4369 |
| 1.2656 | 71.05 | 26500 | 0.8331 | 1.5331 | 0.4217 |
| 1.2556 | 72.39 | 27000 | 0.8242 | 1.4708 | 0.4109 |
| 1.2043 | 73.73 | 27500 | 0.8245 | 1.4469 | 0.4031 |
| 1.2722 | 75.07 | 28000 | 0.8202 | 1.4924 | 0.4096 |
| 1.202 | 76.41 | 28500 | 0.8290 | 1.3807 | 0.3719 |
| 1.1679 | 77.75 | 29000 | 0.8195 | 1.4097 | 0.3749 |
| 1.1967 | 79.09 | 29500 | 0.8059 | 1.2074 | 0.3077 |
| 1.1241 | 80.43 | 30000 | 0.8137 | 1.2451 | 0.3270 |
| 1.1414 | 81.77 | 30500 | 0.8117 | 1.2031 | 0.3121 |
| 1.132 | 83.11 | 31000 | 0.8234 | 1.4266 | 0.3901 |
| 1.0982 | 84.45 | 31500 | 0.8064 | 1.3712 | 0.3607 |
| 1.0797 | 85.79 | 32000 | 0.8167 | 1.3356 | 0.3562 |
| 1.0119 | 87.13 | 32500 | 0.8215 | 1.2754 | 0.3268 |
| 1.0216 | 88.47 | 33000 | 0.8163 | 1.2512 | 0.3184 |
| 1.0375 | 89.81 | 33500 | 0.8137 | 1.2685 | 0.3290 |
| 0.9794 | 91.15 | 34000 | 0.8220 | 1.2724 | 0.3255 |
| 1.0207 | 92.49 | 34500 | 0.8165 | 1.2906 | 0.3361 |
| 1.0169 | 93.83 | 35000 | 0.8153 | 1.2819 | 0.3305 |
| 1.0127 | 95.17 | 35500 | 0.8187 | 1.2832 | 0.3252 |
| 0.9978 | 96.51 | 36000 | 0.8111 | 1.2612 | 0.3210 |
| 0.9923 | 97.85 | 36500 | 0.8076 | 1.2278 | 0.3122 |
| 1.0451 | 99.2 | 37000 | 0.8086 | 1.2451 | 0.3156 |
## Disclaimer
Do consider the biases which came from pre-training datasets that may be carried over into the results of this model.
## Authors
Wav2Vec2 XLS-R 300M Cantonese (zh-HK) was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on OVH Cloud.
## Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
|
wangsheng/autonlp-poi_train-31237266 | b32f7c00fdad728eb927bdf29c644be73bd607a7 | 2021-11-10T14:09:14.000Z | [
"pytorch",
"bert",
"text-classification",
"unk",
"dataset:wangsheng/autonlp-data-poi_train",
"transformers",
"autonlp",
"co2_eq_emissions"
] | text-classification | false | wangsheng | null | wangsheng/autonlp-poi_train-31237266 | 4 | null | transformers | 18,992 | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- wangsheng/autonlp-data-poi_train
co2_eq_emissions: 390.39411176775826
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 31237266
- CO2 Emissions (in grams): 390.39411176775826
## Validation Metrics
- Loss: 0.1643059253692627
- Accuracy: 0.9379398019660155
- Precision: 0.7467491278147795
- Recall: 0.7158710854363028
- AUC: 0.9631629384458238
- F1: 0.7309841664079478
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/wangsheng/autonlp-poi_train-31237266
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("wangsheng/autonlp-poi_train-31237266", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("wangsheng/autonlp-poi_train-31237266", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
wangyuwei/bert_finetuning_test | 125a1a3cac0b079f998615c21098f0c6c578d8de | 2021-05-20T09:06:29.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | wangyuwei | null | wangyuwei/bert_finetuning_test | 4 | null | transformers | 18,993 | Entry not found |
wgpubs/session-4-imdb-model | f8a8b2fd701b5031f36fd42ef3cd710280a875da | 2021-08-01T17:31:32.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | wgpubs | null | wgpubs/session-4-imdb-model | 4 | 1 | transformers | 18,994 | Entry not found |
woosukji/kogpt2-resume | e2606bad32492e753e82aa9315df0ea6c695b85a | 2021-10-16T11:34:11.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | woosukji | null | woosukji/kogpt2-resume | 4 | null | transformers | 18,995 | Entry not found |
wzhouad/prix-lm | 1f85d7fa0b42f9adc84fc1933672322845e4dca1 | 2021-11-16T21:41:03.000Z | [
"pytorch",
"xlm-roberta",
"text-generation",
"transformers"
] | text-generation | false | wzhouad | null | wzhouad/prix-lm | 4 | null | transformers | 18,996 | Entry not found |
xysmalobia/test-trainer | 1b8003cd65857415e1914d800b132cd1e945d302 | 2021-11-14T00:52:38.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | xysmalobia | null | xysmalobia/test-trainer | 4 | null | transformers | 18,997 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: test-trainer
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8504901960784313
- name: F1
type: f1
value: 0.893542757417103
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-trainer
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5802
- Accuracy: 0.8505
- F1: 0.8935
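A minimal inference sketch for sentence-pair (MRPC paraphrase) classification; the sentence pair is illustrative and the label order is assumed from the GLUE MRPC convention (0 = not equivalent, 1 = equivalent):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "xysmalobia/test-trainer"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer(
    "The company said revenue rose 10 percent.",
    "Revenue increased by ten percent, the company said.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)

print(probs)  # assumed order: [not_equivalent, equivalent]
```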
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
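These hyperparameters map roughly onto the following `TrainingArguments` (a sketch of an equivalent configuration, not the exact script used):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="test-trainer",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=3.0,
    lr_scheduler_type="linear",  # the Adam betas/epsilon above are the library defaults
)
```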
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 459 | 0.4443 | 0.8039 | 0.8485 |
| 0.5584 | 2.0 | 918 | 0.3841 | 0.8431 | 0.8810 |
| 0.3941 | 3.0 | 1377 | 0.5802 | 0.8505 | 0.8935 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
ybybybybybybyb/autonlp-revanalysis-6711455 | d57c446a8e31685bdcea600200552048c9e43969 | 2021-08-04T04:38:05.000Z | [
"pytorch",
"funnel",
"text-classification",
"ko",
"dataset:ybybybybybybyb/autonlp-data-revanalysis",
"transformers",
"autonlp"
] | text-classification | false | ybybybybybybyb | null | ybybybybybybyb/autonlp-revanalysis-6711455 | 4 | null | transformers | 18,998 | ---
tags: autonlp
language: ko
widget:
- text: "I love AutoNLP 🤗"
datasets:
- ybybybybybybyb/autonlp-data-revanalysis
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 6711455
## Validation Metrics
- Loss: 0.8241586089134216
- Accuracy: 0.7835820895522388
- Macro F1: 0.5297383029341792
- Micro F1: 0.783582089552239
- Weighted F1: 0.7130091019920225
- Macro Precision: 0.48787061994609165
- Micro Precision: 0.7835820895522388
- Weighted Precision: 0.6541416904694856
- Macro Recall: 0.5795454545454546
- Micro Recall: 0.7835820895522388
- Weighted Recall: 0.7835820895522388
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/ybybybybybybyb/autonlp-revanalysis-6711455
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ybybybybybybyb/autonlp-revanalysis-6711455", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("ybybybybybybyb/autonlp-revanalysis-6711455", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
yobi/klue-roberta-base-sts | e7ace3c45c785b10ccd43a16be02f1b2e464a68c | 2021-07-06T11:36:08.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | yobi | null | yobi/klue-roberta-base-sts | 4 | null | sentence-transformers | 18,999 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
## Usage
```
from sentence_transformers import SentenceTransformer, models
embedding_model = models.Transformer("yobi/klue-roberta-base-sts")
pooling_model = models.Pooling(
embedding_model.get_word_embedding_dimension(),
pooling_mode_mean_tokens=True,
)
model = SentenceTransformer(modules=[embedding_model, pooling_model])
model.encode("안녕하세요.", convert_to_tensor=True)
```
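Since this checkpoint targets semantic textual similarity, a natural follow-up is scoring a sentence pair (the pair below is illustrative):
```
from sentence_transformers import util

embeddings = model.encode(
    ["오늘 날씨가 좋네요.", "오늘은 날씨가 맑습니다."],  # illustrative sentence pair
    convert_to_tensor=True,
)
score = util.pytorch_cos_sim(embeddings[0], embeddings[1])
print(float(score))  # cosine similarity in [-1, 1]
```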
|