modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
toasterboy/TESDFEEEE | d4ddfadf3b3d5cd91926ac5b320e497fbc4467fd | 2021-12-24T15:14:23.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | toasterboy | null | toasterboy/TESDFEEEE | 2 | null | transformers | 24,800 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: TESDFEEEE
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TESDFEEEE
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
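The list above follows the standard Hugging Face `Trainer` setup (the Adam betas and epsilon shown are the library defaults). As an illustration only, since the original training script is not part of this card, the same settings expressed as `TrainingArguments` would look roughly like the sketch below; `output_dir` is a placeholder.
```python
from transformers import TrainingArguments

# Sketch only: maps the hyperparameters listed above onto TrainingArguments.
# output_dir is a placeholder; unlisted arguments keep their library defaults
# (including Adam betas=(0.9, 0.999) and epsilon=1e-8).
training_args = TrainingArguments(
    output_dir="TESDFEEEE",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```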
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 421 | 0.3940 | 0.8306 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
toasthans/Facebook_Mit_HPS | c62bfd18defbceead9817540c50a2faf14ec8b3b | 2021-12-23T17:47:19.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | toasthans | null | toasthans/Facebook_Mit_HPS | 2 | null | transformers | 24,801 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Facebook_Mit_HPS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Facebook_Mit_HPS
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3681
- Accuracy: 0.9281
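No usage snippet is included in the card; the following is a minimal inference sketch and not part of the original card. The German example sentence is arbitrary, and labels will appear as `LABEL_0`/`LABEL_1` unless `id2label` was set during training.
```python
from transformers import pipeline

# Illustrative inference sketch, not part of the original card.
classifier = pipeline("text-classification", model="toasthans/Facebook_Mit_HPS")
print(classifier("Ein kurzer deutscher Beispielsatz."))
```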
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3.906763521176542e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 30
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 292 | 0.2394 | 0.9238 |
| 0.2248 | 2.0 | 584 | 0.3112 | 0.9178 |
| 0.2248 | 3.0 | 876 | 0.3681 | 0.9281 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
toasthans/Facebook_Mit_HPS_5_Epoch | 288ac05c84db281d23e33d6b7f8c95f17b457a41 | 2021-12-23T08:27:55.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | toasthans | null | toasthans/Facebook_Mit_HPS_5_Epoch | 2 | null | transformers | 24,802 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Facebook_Mit_HPS_5_Epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Facebook_Mit_HPS_5_Epoch
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4774
- Accuracy: 0.9315
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.546392051994155e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 5
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 292 | 0.2181 | 0.9264 |
| 0.2411 | 2.0 | 584 | 0.2571 | 0.9289 |
| 0.2411 | 3.0 | 876 | 0.5712 | 0.8947 |
| 0.0558 | 4.0 | 1168 | 0.4675 | 0.9332 |
| 0.0558 | 5.0 | 1460 | 0.4774 | 0.9315 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
toasthans/Facebook_Ohne_HPS | 64991cbe31894f319307b0ad14411eb5d012117e | 2021-12-23T15:11:55.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | toasthans | null | toasthans/Facebook_Ohne_HPS | 2 | null | transformers | 24,803 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Facebook_Ohne_HPS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Facebook_Ohne_HPS
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4648
- Accuracy: 0.9255
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 292 | 0.2030 | 0.9272 |
| 0.2315 | 2.0 | 584 | 0.2811 | 0.9272 |
| 0.2315 | 3.0 | 876 | 0.5461 | 0.8955 |
| 0.0566 | 4.0 | 1168 | 0.4648 | 0.9255 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
toasthans/Twitter_Mit_HPSearch | 6a6bf30e104f90ffd247a8428a7e9a3f6c1dbf84 | 2021-12-24T15:52:45.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | toasthans | null | toasthans/Twitter_Mit_HPSearch | 2 | null | transformers | 24,804 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Twitter_Mit_HPSearch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Twitter_Mit_HPSearch
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8389
- Accuracy: 0.8442
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.9771872814096894e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 23
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 421 | 0.3838 | 0.8353 |
| 0.4401 | 2.0 | 842 | 0.4340 | 0.8424 |
| 0.2042 | 3.0 | 1263 | 0.6857 | 0.8508 |
| 0.0774 | 4.0 | 1684 | 0.8389 | 0.8442 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
toasthans/Twitter_Ohne_HPSearch | 0160947ce276247f7527683f319a235a6020ebca | 2021-12-24T10:20:23.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | toasthans | null | toasthans/Twitter_Ohne_HPSearch | 2 | null | transformers | 24,805 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Twitter_Ohne_HPSearch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Twitter_Ohne_HPSearch
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0262
- Accuracy: 0.8300
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 421 | 0.4296 | 0.8181 |
| 0.4451 | 2.0 | 842 | 0.4889 | 0.8240 |
| 0.1761 | 3.0 | 1263 | 0.9503 | 0.8103 |
| 0.0486 | 4.0 | 1684 | 1.0262 | 0.8300 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
toastynews/electra-hongkongese-small-discriminator | 019ac789367735fc9832309fb1d72146a8a254e1 | 2020-07-07T17:55:30.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"yue",
"transformers",
"license:apache-2.0"
] | null | false | toastynews | null | toastynews/electra-hongkongese-small-discriminator | 2 | null | transformers | 24,806 | ---
language: yue
license: apache-2.0
metrics:
- DRCD
- openrice-senti
- lihkg-cat
- wordshk-sem
---
# ELECTRA Hongkongese Small
## Model description
ELECTRA trained exclusively with data from Hong Kong. A significant amount of Hongkongese/Cantonese/Yue is included in the training data.
## Intended uses & limitations
This model is an alternative to Chinese models. It may offer better performance for tasks catering to the language usage of Hong Kongers. Yue Wikipedia is used, which is much smaller than Chinese Wikipedia; this model will lack the breadth of knowledge compared to other Chinese models.
#### How to use
This is the small model trained from the official repo. Further finetuning will be needed for use on downstream tasks. Other model sizes are also available.
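No code is included in the card, so the snippet below is only a loading sketch for downstream fine-tuning; the sequence-classification head and `num_labels=2` are illustrative placeholders and remain randomly initialized until fine-tuned.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Loading sketch only (not from the original card): attach a randomly
# initialized classification head to the discriminator for fine-tuning.
model_name = "toastynews/electra-hongkongese-small-discriminator"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
```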
#### Limitations and bias
The training data consists of mostly news articles and blogs. There is probably a bias towards formal language usage.
## Training data
The following is the list of data sources. The total number of characters is about 507M.
| Data | % |
| ------------------------------------------------- | --: |
| News Articles / Blogs | 58% |
| Yue Wikipedia / EVCHK | 18% |
| Restaurant Reviews | 12% |
| Forum Threads | 12% |
| Online Fiction | 1% |
The following is the distribution of different languages within the corpus.
| Language | % |
| ------------------------------------------------- | --: |
| Standard Chinese | 62% |
| Hongkongese | 30% |
| English | 8% |
## Training procedure
Model was trained on a single TPUv3 from the official repo with the default parameters.
| Parameter | Value |
| ------------------------------------------------ | ----: |
| Batch Size | 384 |
| Max Sequence Size | 512 |
| Generator Hidden Size | 1.0 |
| Vocab Size | 30000 |
*Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC)*
## Eval results
Average evaluation task results over 10 runs. Comparison using the original repo model and code. Chinese models are available from [Joint Laboratory of HIT and iFLYTEK Research (HFL)](https://huggingface.co/hfl)
| Model | DRCD (EM/F1) | openrice-senti | lihkg-cat | wordshk-sem |
|:-----------:|:------------:|:--------------:|:---------:|:-----------:|
| Chinese | 78.5 / 85.6 | 77.9 | 63.7 | 79.2 |
| Hongkongese | 76.7 / 84.4 | 79.0 | 62.6 | 80.0 |
|
tobiaslee/roberta-large-qa-suffix-defteval-t6-st1 | b70a5c4a045eb095a8aea13c2d0c9e8834f330de | 2021-06-27T08:25:17.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | tobiaslee | null | tobiaslee/roberta-large-qa-suffix-defteval-t6-st1 | 2 | null | transformers | 24,807 | Entry not found |
tomascufaro/wav2vec2-large-xls-r-300m-spanish-custom | 1acdc923fdc16018b71b045af920ec23ac4abf80 | 2022-01-27T15:27:27.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | tomascufaro | null | tomascufaro/wav2vec2-large-xls-r-300m-spanish-custom | 2 | null | transformers | 24,808 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-spanish-custom
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-spanish-custom
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4426
- Wer: 0.2117
## Model description
More information needed
## Intended uses & limitations
More information needed
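Since this section is otherwise empty, here is a minimal inference sketch that is not taken from the original card; `audio.wav` is a placeholder for a 16 kHz Spanish speech recording.
```python
from transformers import pipeline

# Illustrative inference sketch, not part of the original card.
asr = pipeline(
    "automatic-speech-recognition",
    model="tomascufaro/wav2vec2-large-xls-r-300m-spanish-custom",
)
print(asr("audio.wav")["text"])  # "audio.wav": placeholder for 16 kHz Spanish speech
```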
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.2307 | 0.4 | 400 | 1.4431 | 0.9299 |
| 0.7066 | 0.79 | 800 | 0.5928 | 0.4836 |
| 0.4397 | 1.19 | 1200 | 0.4341 | 0.3730 |
| 0.3889 | 1.58 | 1600 | 0.4063 | 0.3499 |
| 0.3607 | 1.98 | 2000 | 0.3834 | 0.3235 |
| 0.2866 | 2.37 | 2400 | 0.3885 | 0.3163 |
| 0.2833 | 2.77 | 2800 | 0.3765 | 0.3140 |
| 0.2692 | 3.17 | 3200 | 0.3849 | 0.3132 |
| 0.2435 | 3.56 | 3600 | 0.3779 | 0.2984 |
| 0.2404 | 3.96 | 4000 | 0.3756 | 0.2934 |
| 0.2153 | 4.35 | 4400 | 0.3770 | 0.3075 |
| 0.2087 | 4.75 | 4800 | 0.3819 | 0.3022 |
| 0.1999 | 5.14 | 5200 | 0.3756 | 0.2959 |
| 0.1838 | 5.54 | 5600 | 0.3827 | 0.2858 |
| 0.1892 | 5.93 | 6000 | 0.3714 | 0.2999 |
| 0.1655 | 6.33 | 6400 | 0.3814 | 0.2812 |
| 0.1649 | 6.73 | 6800 | 0.3685 | 0.2727 |
| 0.1668 | 7.12 | 7200 | 0.3832 | 0.2825 |
| 0.1487 | 7.52 | 7600 | 0.3848 | 0.2788 |
| 0.152 | 7.91 | 8000 | 0.3810 | 0.2787 |
| 0.143 | 8.31 | 8400 | 0.3885 | 0.2856 |
| 0.1353 | 8.7 | 8800 | 0.4103 | 0.2827 |
| 0.1386 | 9.1 | 9200 | 0.4142 | 0.2874 |
| 0.1222 | 9.5 | 9600 | 0.3983 | 0.2830 |
| 0.1288 | 9.89 | 10000 | 0.4179 | 0.2781 |
| 0.1199 | 10.29 | 10400 | 0.4035 | 0.2789 |
| 0.1196 | 10.68 | 10800 | 0.4043 | 0.2746 |
| 0.1169 | 11.08 | 11200 | 0.4105 | 0.2753 |
| 0.1076 | 11.47 | 11600 | 0.4298 | 0.2686 |
| 0.1124 | 11.87 | 12000 | 0.4025 | 0.2704 |
| 0.1043 | 12.26 | 12400 | 0.4209 | 0.2659 |
| 0.0976 | 12.66 | 12800 | 0.4070 | 0.2672 |
| 0.1012 | 13.06 | 13200 | 0.4161 | 0.2720 |
| 0.0872 | 13.45 | 13600 | 0.4245 | 0.2697 |
| 0.0933 | 13.85 | 14000 | 0.4295 | 0.2684 |
| 0.0881 | 14.24 | 14400 | 0.4011 | 0.2650 |
| 0.0848 | 14.64 | 14800 | 0.3991 | 0.2675 |
| 0.0852 | 15.03 | 15200 | 0.4166 | 0.2617 |
| 0.0825 | 15.43 | 15600 | 0.4188 | 0.2639 |
| 0.081 | 15.83 | 16000 | 0.4181 | 0.2547 |
| 0.0753 | 16.22 | 16400 | 0.4103 | 0.2560 |
| 0.0747 | 16.62 | 16800 | 0.4017 | 0.2498 |
| 0.0761 | 17.01 | 17200 | 0.4159 | 0.2563 |
| 0.0711 | 17.41 | 17600 | 0.4112 | 0.2603 |
| 0.0698 | 17.8 | 18000 | 0.4335 | 0.2529 |
| 0.073 | 18.2 | 18400 | 0.4120 | 0.2512 |
| 0.0665 | 18.6 | 18800 | 0.4335 | 0.2496 |
| 0.0657 | 18.99 | 19200 | 0.4143 | 0.2468 |
| 0.0617 | 19.39 | 19600 | 0.4339 | 0.2435 |
| 0.06 | 19.78 | 20000 | 0.4179 | 0.2438 |
| 0.0613 | 20.18 | 20400 | 0.4251 | 0.2393 |
| 0.0583 | 20.57 | 20800 | 0.4347 | 0.2422 |
| 0.0562 | 20.97 | 21200 | 0.4246 | 0.2377 |
| 0.053 | 21.36 | 21600 | 0.4198 | 0.2338 |
| 0.0525 | 21.76 | 22000 | 0.4511 | 0.2427 |
| 0.0499 | 22.16 | 22400 | 0.4482 | 0.2353 |
| 0.0475 | 22.55 | 22800 | 0.4449 | 0.2329 |
| 0.0465 | 22.95 | 23200 | 0.4364 | 0.2320 |
| 0.0443 | 23.34 | 23600 | 0.4481 | 0.2304 |
| 0.0458 | 23.74 | 24000 | 0.4442 | 0.2267 |
| 0.0453 | 24.13 | 24400 | 0.4402 | 0.2261 |
| 0.0426 | 24.53 | 24800 | 0.4262 | 0.2232 |
| 0.0431 | 24.93 | 25200 | 0.4251 | 0.2210 |
| 0.0389 | 25.32 | 25600 | 0.4455 | 0.2232 |
| 0.039 | 25.72 | 26000 | 0.4372 | 0.2236 |
| 0.0378 | 26.11 | 26400 | 0.4236 | 0.2212 |
| 0.0348 | 26.51 | 26800 | 0.4359 | 0.2204 |
| 0.0361 | 26.9 | 27200 | 0.4248 | 0.2192 |
| 0.0356 | 27.3 | 27600 | 0.4397 | 0.2184 |
| 0.0325 | 27.7 | 28000 | 0.4367 | 0.2181 |
| 0.0313 | 28.09 | 28400 | 0.4477 | 0.2136 |
| 0.0306 | 28.49 | 28800 | 0.4533 | 0.2135 |
| 0.0314 | 28.88 | 29200 | 0.4410 | 0.2136 |
| 0.0307 | 29.28 | 29600 | 0.4457 | 0.2113 |
| 0.0309 | 29.67 | 30000 | 0.4426 | 0.2117 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
tonyalves/wav2vec2-large-xls-r-300m-pt-colab | dbb08e33b5645eb71cab1bd111517c860f920fe8 | 2022-01-09T17:40:58.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | tonyalves | null | tonyalves/wav2vec2-large-xls-r-300m-pt-colab | 2 | null | transformers | 24,809 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-pt-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-pt-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3637
- Wer: 0.2982
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.591 | 1.15 | 400 | 0.9128 | 0.6517 |
| 0.5049 | 2.31 | 800 | 0.4596 | 0.4437 |
| 0.2871 | 3.46 | 1200 | 0.3964 | 0.3905 |
| 0.2077 | 4.61 | 1600 | 0.3958 | 0.3744 |
| 0.1695 | 5.76 | 2000 | 0.4040 | 0.3720 |
| 0.1478 | 6.92 | 2400 | 0.3866 | 0.3651 |
| 0.1282 | 8.07 | 2800 | 0.3987 | 0.3674 |
| 0.1134 | 9.22 | 3200 | 0.4128 | 0.3688 |
| 0.1048 | 10.37 | 3600 | 0.3928 | 0.3561 |
| 0.0938 | 11.53 | 4000 | 0.4048 | 0.3619 |
| 0.0848 | 12.68 | 4400 | 0.4229 | 0.3555 |
| 0.0798 | 13.83 | 4800 | 0.3974 | 0.3468 |
| 0.0688 | 14.98 | 5200 | 0.3870 | 0.3503 |
| 0.0658 | 16.14 | 5600 | 0.3875 | 0.3351 |
| 0.061 | 17.29 | 6000 | 0.4133 | 0.3417 |
| 0.0569 | 18.44 | 6400 | 0.3915 | 0.3414 |
| 0.0526 | 19.6 | 6800 | 0.3957 | 0.3231 |
| 0.0468 | 20.75 | 7200 | 0.4110 | 0.3301 |
| 0.0407 | 21.9 | 7600 | 0.3866 | 0.3186 |
| 0.0384 | 23.05 | 8000 | 0.3976 | 0.3193 |
| 0.0363 | 24.21 | 8400 | 0.3910 | 0.3177 |
| 0.0313 | 25.36 | 8800 | 0.3656 | 0.3109 |
| 0.0293 | 26.51 | 9200 | 0.3712 | 0.3092 |
| 0.0277 | 27.66 | 9600 | 0.3613 | 0.3054 |
| 0.0249 | 28.82 | 10000 | 0.3783 | 0.3015 |
| 0.0234 | 29.97 | 10400 | 0.3637 | 0.2982 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu102
- Datasets 1.13.3
- Tokenizers 0.10.3
|
torque29/DialoGPT-small-harrypotter | f7993560524bd5df11a160e8b06dba8536339d3e | 2021-10-27T10:18:57.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | torque29 | null | torque29/DialoGPT-small-harrypotter | 2 | null | transformers | 24,810 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
tosin/pcl_22 | bc3a45b6f61917346ae93d4d9d98c63b7fbb8b11 | 2022-02-18T12:33:52.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:PCL",
"transformers",
"text classification",
"license:cc-by-4.0",
"autotrain_compatible"
] | text2text-generation | false | tosin | null | tosin/pcl_22 | 2 | null | transformers | 24,811 | ---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
language:
- en
license: cc-by-4.0
tags:
- text classification
- transformers
datasets:
- PCL
metrics:
- F1
inference: false
---
## T5Base-PCL
This is a fine-tuned version of T5 (base) on the patronizing and condescending language (PCL) dataset by Pérez-Almendros et al. (2020), used for the Task 4 competition of SemEval-2022.
It is intended to be used as a classification model for identifying PCL (0 - neg; 1 - pos). The task prefix we used for the T5 model is 'classification: '.
The dataset it is trained on is limited in scope, as it covers only news texts from about 20 English-speaking countries.
The macro F1 score achieved on the test set, based on the official evaluation, is 0.5452.
More information about the original pre-trained model can be found [here](https://huggingface.co/t5-base)
* Classification examples:
|Prediction | Input |
|---------|------------|
|0 | selective kindness : in europe , some refugees are more equal than others |
|1 | he said their efforts should not stop only at creating many graduates but also extended to students from poor families so that they could break away from the cycle of poverty |
### How to use
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer
import torch
model = T5ForConditionalGeneration.from_pretrained("tosin/pcl_22")
tokenizer = T5Tokenizer.from_pretrained("t5-base") # use the source tokenizer because T5 finetuned tokenizer breaks
tokenizer.pad_token = tokenizer.eos_token
input_ids = tokenizer("he said their efforts should not stop only at creating many graduates but also extended to students from poor families so that they could break away from the cycle of poverty", padding=True, truncation=True, return_tensors='pt').input_ids
outputs = model.generate(input_ids)
pred = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(pred)
```
|
tpanza/dummy-model | 201b399727529916af8b04d315a1577f54a8eb90 | 2022-01-27T06:00:59.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | tpanza | null | tpanza/dummy-model | 2 | null | transformers | 24,812 | Entry not found |
trangdieu/roberta-base-retrained-6-epochs | 7be04de66d5a729fcca1b4967da714ad2b2756ae | 2021-06-02T17:56:07.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | trangdieu | null | trangdieu/roberta-base-retrained-6-epochs | 2 | null | transformers | 24,813 | Entry not found |
transformersbook/xlm-roberta-base-finetuned-panx-fr | b0af87014b62880f00a8c988fad201964bc08557 | 2022-02-05T17:07:57.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | transformersbook | null | transformersbook/xlm-roberta-base-finetuned-panx-fr | 2 | null | transformers | 24,814 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8454790823211876
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the PAN-X dataset. The model is trained in Chapter 4: Multilingual Named Entity Recognition in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/04_multilingual-ner.ipynb).
It achieves the following results on the evaluation set:
- Loss: 0.2772
- F1: 0.8455
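The card defers to the book and its notebooks for usage; as a hedged sketch that is not part of the original card, French NER can be run with the token-classification pipeline.
```python
from transformers import pipeline

# Illustrative sketch, not part of the original card.
ner = pipeline(
    "token-classification",
    model="transformersbook/xlm-roberta-base-finetuned-panx-fr",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean travaille chez Google en Californie."))
```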
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.562 | 1.0 | 191 | 0.3183 | 0.7828 |
| 0.2697 | 2.0 | 382 | 0.2706 | 0.8324 |
| 0.1735 | 3.0 | 573 | 0.2772 | 0.8455 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
transformersbook/xlm-roberta-base-finetuned-panx-it | 7826c072686ef209c62e967fcfb44d4f8fe4efbf | 2022-02-05T17:07:26.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | transformersbook | null | transformersbook/xlm-roberta-base-finetuned-panx-it | 2 | null | transformers | 24,815 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8215158924205379
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the PAN-X dataset. The model is trained in Chapter 4: Multilingual Named Entity Recognition in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/04_multilingual-ner.ipynb).
It achieves the following results on the evaluation set:
- Loss: 0.2445
- F1: 0.8215
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7594 | 1.0 | 70 | 0.3402 | 0.7467 |
| 0.2942 | 2.0 | 140 | 0.2555 | 0.7971 |
| 0.1814 | 3.0 | 210 | 0.2445 | 0.8215 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
trig/multiverse-second | 461dd70ad7ae66739dd0e891a96beb91aa3c0bb1 | 2021-08-30T20:15:56.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | trig | null | trig/multiverse-second | 2 | null | transformers | 24,816 | ---
tags:
- conversational
---
# multiverse but with swapped characters and more learning |
trig/tlok-test | 07ca9e1e3a20a95b5cab24809c7f2f931ead8122 | 2021-08-29T05:05:24.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | trig | null | trig/tlok-test | 2 | null | transformers | 24,817 | ---
tags:
- conversational
---
# some test idk |
tromedlov/t5-small-cnn | 17ae74d5fcb51ef5a811c1eec712f8edf42197ff | 2021-06-23T14:27:12.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tromedlov | null | tromedlov/t5-small-cnn | 2 | null | transformers | 24,818 | Entry not found |
troythewar/DialogGPT-small-harrypotter | a13cbd340dcb83c7ea22266807ff3798379e7a0c | 2021-09-03T05:23:39.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | troythewar | null | troythewar/DialogGPT-small-harrypotter | 2 | null | transformers | 24,819 | ---
tags:
- conversational
---
# Harry Potter DialogGPT |
turing1729/gpt-neo-1.3B-news | d1d4e1cade87f4c6ae030a102d18fc5f8d75ab79 | 2022-02-13T10:21:51.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers",
"license:apache-2.0"
] | text-generation | false | turing1729 | null | turing1729/gpt-neo-1.3B-news | 2 | null | transformers | 24,820 | ---
license: apache-2.0
---
GPT-Neo (1.3B parameters) fine-tuned on short news articles for summarization. |
tyoyo/t5-base-TEDxJP-1body-0context | 4d42c1bae1b89f8ecf9757a8b16602ce46071115 | 2021-12-03T02:16:23.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tyoyo | null | tyoyo/t5-base-TEDxJP-1body-0context | 2 | null | transformers | 24,821 | Entry not found |
tyoyo/t5-base-TEDxJP-1body-5context | 3e07c47c9bfadb218dd01371c523766c60e86683 | 2021-11-30T13:49:54.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tyoyo | null | tyoyo/t5-base-TEDxJP-1body-5context | 2 | null | transformers | 24,822 |
| Epoch | Training Loss | Validation Loss | Wer | Mer | Wil | Wip | Hits | Substitutions | Deletions | Insertions | Cer |
|:-----:|:-------------:|:---------------:|:--------:|:--------:|:--------:|:--------:|:-----:|:-------------:|:---------:|:----------:|:--------:|
| 1 | 0.572400 | 0.447836 | 0.262284 | 0.241764 | 0.333088 | 0.666912 | 54709 | 7126 | 4673 | 5645 | 0.242417 |
| 2 | 0.492700 | 0.400297 | 0.203600 | 0.196446 | 0.285798 | 0.714202 | 55389 | 6777 | 4342 | 2422 | 0.183740 |
| 3 | 0.429200 | 0.385705 | 0.201179 | 0.193641 | 0.282458 | 0.717542 | 55717 | 6745 | 4046 | 2589 | 0.179833 |
| 4 | 0.408700 | 0.383085 | 0.198277 | 0.190817 | 0.280919 | 0.719081 | 55921 | 6867 | 3720 | 2600 | 0.177468 |
| 5 | 0.386100 | 0.381157 | 0.192488 | 0.186279 | 0.274890 | 0.725110 | 55923 | 6709 | 3876 | 2217 | 0.171644 |
| 6 | 0.353400 | 0.380517 | 0.193315 | 0.186615 | 0.275510 | 0.724490 | 56039 | 6747 | 3722 | 2388 | 0.170799 |
| 7 | 0.346100 | 0.379445 | 0.194713 | 0.187616 | 0.276780 | 0.723220 | 56074 | 6780 | 3654 | 2516 | 0.171347 |
| 8 | 0.314700 | 0.383521 | 0.196022 | 0.188486 | 0.277974 | 0.722026 | 56130 | 6820 | 3558 | 2659 | 0.179184 |
|
uclanlp/plbart-en_XX-java | a3c889b251b9128a3fa4fbbd9afc9a319d54a1ef | 2021-11-09T17:08:15.000Z | [
"pytorch",
"plbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | uclanlp | null | uclanlp/plbart-en_XX-java | 2 | null | transformers | 24,823 | Entry not found |
uclanlp/plbart-single_task-dynamic-generation | 254517c7e3b833ab8864b02fa9639c5aa9896a7f | 2022-03-02T07:16:49.000Z | [
"pytorch",
"plbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | uclanlp | null | uclanlp/plbart-single_task-dynamic-generation | 2 | null | transformers | 24,824 | Entry not found |
uclanlp/plbart-single_task-en_php | 8779774c91f2509079b6e1697cec41af6c8c8562 | 2022-03-02T07:11:11.000Z | [
"pytorch",
"plbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | uclanlp | null | uclanlp/plbart-single_task-en_php | 2 | null | transformers | 24,825 | Entry not found |
uclanlp/plbart-single_task-go_en | 3c12631d9d7ffd9276701c82f61246aecfb740d1 | 2022-03-02T07:01:07.000Z | [
"pytorch",
"plbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | uclanlp | null | uclanlp/plbart-single_task-go_en | 2 | null | transformers | 24,826 | Entry not found |
uclanlp/plbart-single_task-php_en | 78fcfa9b8cefbeb8b49031c4aec2ad60e33c6368 | 2022-03-02T07:03:40.000Z | [
"pytorch",
"plbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | uclanlp | null | uclanlp/plbart-single_task-php_en | 2 | null | transformers | 24,827 | Entry not found |
uclanlp/plbart-single_task-ruby_en | e83cd19c1baa3fced34694e891a846999c82af0b | 2022-03-02T06:59:58.000Z | [
"pytorch",
"plbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | uclanlp | null | uclanlp/plbart-single_task-ruby_en | 2 | null | transformers | 24,828 | Entry not found |
uclanlp/plbart-single_task-static-summarization | 9e25b5ddcd536c4f828ec524905b09d4744752c8 | 2022-03-02T07:23:18.000Z | [
"pytorch",
"plbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | uclanlp | null | uclanlp/plbart-single_task-static-summarization | 2 | null | transformers | 24,829 | Entry not found |
uclanlp/plbart-single_task-weak-summarization | f03b149c6e00582a9d72c41d1b3eb02ad70ebb88 | 2022-03-02T07:26:48.000Z | [
"pytorch",
"plbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | uclanlp | null | uclanlp/plbart-single_task-weak-summarization | 2 | null | transformers | 24,830 | Entry not found |
ueb1/IceBERT-finetuned-grouped | 02e88a7ec32be3092912cd4d269aec9a0eb00dea | 2021-11-24T00:18:29.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:gpl-3.0",
"model-index"
] | text-classification | false | ueb1 | null | ueb1/IceBERT-finetuned-grouped | 2 | null | transformers | 24,831 | ---
license: gpl-3.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: IceBERT-finetuned-grouped
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IceBERT-finetuned-grouped
This model is a fine-tuned version of [vesteinn/IceBERT](https://huggingface.co/vesteinn/IceBERT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5660
- Accuracy: 0.2259
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 269 | 4.1727 | 0.1172 |
| 4.3535 | 2.0 | 538 | 3.8406 | 0.1632 |
| 4.3535 | 3.0 | 807 | 3.6718 | 0.2113 |
| 3.6711 | 4.0 | 1076 | 3.5660 | 0.2259 |
| 3.6711 | 5.0 | 1345 | 3.5332 | 0.2176 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
ueb1/IceBERT-finetuned | 5fbffbfe3e47907106c7ed4258732153904acc4b | 2021-11-23T01:05:03.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:gpl-3.0",
"model-index"
] | text-classification | false | ueb1 | null | ueb1/IceBERT-finetuned | 2 | null | transformers | 24,832 | ---
license: gpl-3.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: IceBERT-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IceBERT-finetuned
This model is a fine-tuned version of [vesteinn/IceBERT](https://huggingface.co/vesteinn/IceBERT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7361
- Accuracy: 0.352
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.5309 | 1.0 | 563 | 4.1093 | 0.329 |
| 3.9723 | 2.0 | 1126 | 3.8339 | 0.344 |
| 3.6949 | 3.0 | 1689 | 3.7490 | 0.346 |
| 3.5124 | 4.0 | 2252 | 3.7488 | 0.358 |
| 3.3763 | 5.0 | 2815 | 3.7361 | 0.352 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
ufal/byt5-small-multilexnorm2021-en | 59c345a61a185187f548c48d80862caf79aa62ad | 2021-10-20T12:17:33.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:mc4",
"dataset:wikipedia",
"dataset:multilexnorm",
"arxiv:2105.13626",
"arxiv:1907.06292",
"transformers",
"lexical normalization",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | ufal | null | ufal/byt5-small-multilexnorm2021-en | 2 | null | transformers | 24,833 | ---
language: en
datasets:
- mc4
- wikipedia
- multilexnorm
tags:
- lexical normalization
license: apache-2.0
---
# Fine-tuned ByT5-small for MultiLexNorm (English version)

This is the official release of the fine-tuned models for **the winning entry** to the [*W-NUT 2021: Multilingual Lexical Normalization (MultiLexNorm)* shared task](https://noisy-text.github.io/2021/multi-lexnorm.html), which evaluates lexical-normalization systems on 12 social media datasets in 11 languages.
Our system is based on [ByT5](https://arxiv.org/abs/2105.13626), which we first pre-train on synthetic data and then fine-tune on authentic normalization data. It achieves the best performance by a wide margin in intrinsic evaluation, and also the best performance in extrinsic evaluation through dependency parsing. In addition to these fine-tuned models, we also release the source files on [GitHub](https://github.com/ufal/multilexnorm2021) and an interactive demo on [Google Colab](https://colab.research.google.com/drive/1rxpI8IlKk-D2crFqi2hdzbTBIezqgsCg?usp=sharing).
## How to use
The model was *not* fine-tuned in a standard sentence-to-sentence setting – instead, it was tailored to the token-to-token definition of MultiLexNorm data. Please refer to [**the interactive demo on Colab notebook**](https://colab.research.google.com/drive/1rxpI8IlKk-D2crFqi2hdzbTBIezqgsCg?usp=sharing) to learn how to use these models.
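For completeness, the snippet below only shows how to load the checkpoint; it is not from the card, and the correct token-to-token invocation is documented solely in the linked Colab notebook.
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Loading sketch only: see the Colab notebook above for the token-to-token
# input format; naive sentence-to-sentence generation will not reproduce the
# MultiLexNorm results.
model_name = "ufal/byt5-small-multilexnorm2021-en"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
```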
## How to cite
```bibtex
@inproceedings{wnut-ufal,
title= "{ÚFAL} at {MultiLexNorm} 2021: Improving Multilingual Lexical Normalization by Fine-tuning {ByT5}",
author = "Samuel, David and Straka, Milan",
booktitle = "Proceedings of the 7th Workshop on Noisy User-generated Text (W-NUT 2021)",
year = "2021",
publisher = "Association for Computational Linguistics",
address = "Punta Cana, Dominican Republic"
}
```
## ByT5 - Small
ByT5 is a tokenizer-free version of [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and generally follows the architecture of [MT5](https://huggingface.co/google/mt5-small).
ByT5 was only pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) excluding any supervised training with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is useable on a downstream task.
ByT5 works especially well on noisy text data, *e.g.*, `google/byt5-small` significantly outperforms [mt5-small](https://huggingface.co/google/mt5-small) on [TweetQA](https://arxiv.org/abs/1907.06292).
Paper: [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626)
Authors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel*
|
ufal/byt5-small-multilexnorm2021-sr | a0f88a00863a51bdf5747b8eb3a37b52e16708b9 | 2021-10-20T12:52:41.000Z | [
"pytorch",
"t5",
"text2text-generation",
"sr",
"dataset:mc4",
"dataset:wikipedia",
"dataset:multilexnorm",
"arxiv:2105.13626",
"arxiv:1907.06292",
"transformers",
"lexical normalization",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | ufal | null | ufal/byt5-small-multilexnorm2021-sr | 2 | null | transformers | 24,834 | ---
language: sr
datasets:
- mc4
- wikipedia
- multilexnorm
tags:
- lexical normalization
license: apache-2.0
---
# Fine-tuned ByT5-small for MultiLexNorm (Serbian version)

This is the official release of the fine-tuned models for **the winning entry** to the [*W-NUT 2021: Multilingual Lexical Normalization (MultiLexNorm)* shared task](https://noisy-text.github.io/2021/multi-lexnorm.html), which evaluates lexical-normalization systems on 12 social media datasets in 11 languages.
Our system is based on [ByT5](https://arxiv.org/abs/2105.13626), which we first pre-train on synthetic data and then fine-tune on authentic normalization data. It achieves the best performance by a wide margin in intrinsic evaluation, and also the best performance in extrinsic evaluation through dependency parsing. In addition to these fine-tuned models, we also release the source files on [GitHub](https://github.com/ufal/multilexnorm2021) and an interactive demo on [Google Colab](https://colab.research.google.com/drive/1rxpI8IlKk-D2crFqi2hdzbTBIezqgsCg?usp=sharing).
## How to use
The model was *not* fine-tuned in a standard sentence-to-sentence setting – instead, it was tailored to the token-to-token definition of MultiLexNorm data. Please refer to [**the interactive demo on Colab notebook**](https://colab.research.google.com/drive/1rxpI8IlKk-D2crFqi2hdzbTBIezqgsCg?usp=sharing) to learn how to use these models.
## How to cite
```bibtex
@inproceedings{wnut-ufal,
title= "{ÚFAL} at {MultiLexNorm} 2021: Improving Multilingual Lexical Normalization by Fine-tuning {ByT5}",
author = "Samuel, David and Straka, Milan",
booktitle = "Proceedings of the 7th Workshop on Noisy User-generated Text (W-NUT 2021)",
year = "2021",
publisher = "Association for Computational Linguistics",
address = "Punta Cana, Dominican Republic"
}
```
## ByT5 - Small
ByT5 is a tokenizer-free version of [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and generally follows the architecture of [MT5](https://huggingface.co/google/mt5-small).
ByT5 was only pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) excluding any supervised training with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is useable on a downstream task.
ByT5 works especially well on noisy text data, *e.g.*, `google/byt5-small` significantly outperforms [mt5-small](https://huggingface.co/google/mt5-small) on [TweetQA](https://arxiv.org/abs/1907.06292).
Paper: [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626)
Authors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel*
|
ufal/byt5-small-multilexnorm2021-trde | a5add5dda3b2cc92bbc059a498e987ab7e36278c | 2021-10-20T13:02:55.000Z | [
"pytorch",
"t5",
"text2text-generation",
"tr",
"de",
"dataset:mc4",
"dataset:wikipedia",
"dataset:multilexnorm",
"arxiv:2105.13626",
"arxiv:1907.06292",
"transformers",
"lexical normalization",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | ufal | null | ufal/byt5-small-multilexnorm2021-trde | 2 | null | transformers | 24,835 | ---
language:
- tr
- de
datasets:
- mc4
- wikipedia
- multilexnorm
tags:
- lexical normalization
license: apache-2.0
---
# Fine-tuned ByT5-small for MultiLexNorm (Turkish-German version)

This is the official release of the fine-tuned models for **the winning entry** to the [*W-NUT 2021: Multilingual Lexical Normalization (MultiLexNorm)* shared task](https://noisy-text.github.io/2021/multi-lexnorm.html), which evaluates lexical-normalization systems on 12 social media datasets in 11 languages.
Our system is based on [ByT5](https://arxiv.org/abs/2105.13626), which we first pre-train on synthetic data and then fine-tune on authentic normalization data. It achieves the best performance by a wide margin in intrinsic evaluation, and also the best performance in extrinsic evaluation through dependency parsing. In addition to these fine-tuned models, we also release the source files on [GitHub](https://github.com/ufal/multilexnorm2021) and an interactive demo on [Google Colab](https://colab.research.google.com/drive/1rxpI8IlKk-D2crFqi2hdzbTBIezqgsCg?usp=sharing).
## How to use
The model was *not* fine-tuned in a standard sentence-to-sentence setting – instead, it was tailored to the token-to-token definition of MultiLexNorm data. Please refer to [**the interactive demo on Colab notebook**](https://colab.research.google.com/drive/1rxpI8IlKk-D2crFqi2hdzbTBIezqgsCg?usp=sharing) to learn how to use these models.
## How to cite
```bibtex
@inproceedings{wnut-ufal,
title= "{ÚFAL} at {MultiLexNorm} 2021: Improving Multilingual Lexical Normalization by Fine-tuning {ByT5}",
author = "Samuel, David and Straka, Milan",
booktitle = "Proceedings of the 7th Workshop on Noisy User-generated Text (W-NUT 2021)",
year = "2021",
publisher = "Association for Computational Linguistics",
address = "Punta Cana, Dominican Republic"
}
```
## ByT5 - Small
ByT5 is a tokenizer-free version of [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and generally follows the architecture of [MT5](https://huggingface.co/google/mt5-small).
ByT5 was only pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) excluding any supervised training with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is useable on a downstream task.
ByT5 works especially well on noisy text data,*e.g.*, `google/byt5-small` significantly outperforms [mt5-small](https://huggingface.co/google/mt5-small) on [TweetQA](https://arxiv.org/abs/1907.06292).
Paper: [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626)
Authors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel*
|
umit/distilbert-base-uncased-finetuned-emotion | cfa8be9daa28a793894b0df5a38d2f970b5b273e | 2022-02-22T16:35:05.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | umit | null | umit/distilbert-base-uncased-finetuned-emotion | 2 | null | transformers | 24,836 | Entry not found |
unicamp-dl/mt5-base-en-pt-msmarco-v1 | 700ab114bf9b582566342387bfafd3cfca95827f | 2022-01-05T21:30:38.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"pt",
"dataset:msmarco",
"arxiv:2108.13897",
"transformers",
"msmarco",
"t5",
"tensorflow",
"pt-br",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | unicamp-dl | null | unicamp-dl/mt5-base-en-pt-msmarco-v1 | 2 | null | transformers | 24,837 | ---
language: pt
license: mit
tags:
- msmarco
- t5
- pytorch
- tensorflow
- pt
- pt-br
datasets:
- msmarco
widget:
- text: "Texto de exemplo em português"
inference: false
---
# mt5-base Reranker finetuned on mMARCO
## Introduction
mt5-base-en-pt-msmarco-v1 is an mT5-based model fine-tuned on a bilingual version of the MS MARCO passage dataset. This bilingual version is formed by the original MS MARCO dataset (in English) and a Portuguese-translated version. In version v1, the Portuguese dataset was translated using the [Helsinki](https://huggingface.co/Helsinki-NLP) NMT model.
Further information about the dataset and the translation method can be found in our paper [**mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset**](https://arxiv.org/abs/2108.13897) and in the [mMARCO](https://github.com/unicamp-dl/mMARCO) repository.
## Usage
```python
from transformers import T5Tokenizer, MT5ForConditionalGeneration
model_name = 'unicamp-dl/mt5-base-en-pt-msmarco-v1'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
```
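The snippet above stops at loading the model. The sketch below is not part of the original card; it reuses `tokenizer` and `model` to score a query-passage pair. The `Query: ... Document: ... Relevant:` template and the `yes` target token follow the monoT5 convention and are assumptions here, so the exact format should be checked against the mMARCO repository.
```python
import torch

# monoT5-style relevance scoring sketch (not shown in the original card).
query = "qual a capital do Brasil"
passage = "Brasília é a capital federal do Brasil."
inputs = tokenizer(
    f"Query: {query} Document: {passage} Relevant:",
    return_tensors="pt", truncation=True, max_length=512,
)
with torch.no_grad():
    out = model.generate(
        **inputs, max_new_tokens=1,
        output_scores=True, return_dict_in_generate=True,
    )
# Probability mass on the "yes" token is used as the relevance score.
yes_id = tokenizer.encode("yes", add_special_tokens=False)[0]
score = torch.softmax(out.scores[0][0], dim=-1)[yes_id].item()
print(score)
```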
# Citation
If you use mt5-base-en-pt-msmarco-v1, please cite:
@misc{bonifacio2021mmarco,
title={mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset},
author={Luiz Henrique Bonifacio and Vitor Jeronymo and Hugo Queiroz Abonizio and Israel Campiotti and Marzieh Fadaee and Roberto Lotufo and Rodrigo Nogueira},
year={2021},
eprint={2108.13897},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
|
unicamp-dl/ptt5-base-en-pt-msmarco-10k-v1 | 611202955bffed7512efd161bf6711df5a79ab2d | 2022-01-05T21:31:05.000Z | [
"pytorch",
"t5",
"text2text-generation",
"pt",
"dataset:msmarco",
"arxiv:2108.13897",
"transformers",
"msmarco",
"tensorflow",
"pt-br",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | unicamp-dl | null | unicamp-dl/ptt5-base-en-pt-msmarco-10k-v1 | 2 | null | transformers | 24,838 | ---
language: pt
license: mit
tags:
- msmarco
- t5
- pytorch
- tensorflow
- pt
- pt-br
datasets:
- msmarco
widget:
- text: "Texto de exemplo em português"
inference: false
---
# PTT5-base Reranker finetuned on both English and Portuguese MS MARCO
## Introduction
ptt5-base-en-pt-msmarco-10k-v1 is a T5-based model pretrained on the BrWac corpus and fine-tuned on both the English and the Portuguese-translated versions of the MS MARCO passage dataset. In version v1, the Portuguese dataset was translated using the [Helsinki](https://huggingface.co/Helsinki-NLP) NMT model. This model was fine-tuned for 10k steps.
Further information about the dataset and the translation method can be found in our paper [**mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset**](https://arxiv.org/abs/2108.13897) and in the [mMARCO](https://github.com/unicamp-dl/mMARCO) repository.
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
model_name = 'unicamp-dl/ptt5-base-en-pt-msmarco-10k-v1'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
```
# Citation
If you use ptt5-base-en-pt-msmarco-10k-v1, please cite:
@misc{bonifacio2021mmarco,
title={mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset},
author={Luiz Henrique Bonifacio and Vitor Jeronymo and Hugo Queiroz Abonizio and Israel Campiotti and Marzieh Fadaee and Roberto Lotufo and Rodrigo Nogueira},
year={2021},
eprint={2108.13897},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
|
unicamp-dl/ptt5-base-pt-msmarco-100k-v1 | da65a473a8d91f3d83b01909e6ee630f80bf0aee | 2022-01-05T21:29:11.000Z | [
"pytorch",
"t5",
"text2text-generation",
"pt",
"dataset:msmarco",
"arxiv:2108.13897",
"transformers",
"msmarco",
"tensorflow",
"pt-br",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | unicamp-dl | null | unicamp-dl/ptt5-base-pt-msmarco-100k-v1 | 2 | null | transformers | 24,839 | ---
language: pt
license: mit
tags:
- msmarco
- t5
- pytorch
- tensorflow
- pt
- pt-br
datasets:
- msmarco
widget:
- text: "Texto de exemplo em português"
inference: false
---
# PTT5-base Reranker finetuned on Portuguese MS MARCO
## Introduction
ptt5-base-pt-msmarco-100k-v1 is a T5-based model pretrained on the BrWac corpus and fine-tuned on a Portuguese-translated version of the MS MARCO passage dataset. In version v1, the Portuguese dataset was translated using the [Helsinki](https://huggingface.co/Helsinki-NLP) NMT model. This model was fine-tuned for 100k steps.
Further information about the dataset and the translation method can be found in our paper [**mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset**](https://arxiv.org/abs/2108.13897) and in the [mMARCO](https://github.com/unicamp-dl/mMARCO) repository.
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
model_name = 'unicamp-dl/ptt5-base-pt-msmarco-100k-v1'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
```
# Citation
If you use ptt5-base-pt-msmarco-100k-v1, please cite:
@misc{bonifacio2021mmarco,
title={mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset},
author={Luiz Henrique Bonifacio and Vitor Jeronymo and Hugo Queiroz Abonizio and Israel Campiotti and Marzieh Fadaee and Roberto Lotufo and Rodrigo Nogueira},
year={2021},
eprint={2108.13897},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
|
unicamp-dl/ptt5-base-pt-msmarco-100k-v2 | 44dc5b0c6517ba9c9e7d65aac58b053ae925a1d0 | 2022-01-06T13:44:21.000Z | [
"pytorch",
"t5",
"text2text-generation",
"pt",
"dataset:msmarco",
"arxiv:2108.13897",
"transformers",
"msmarco",
"tensorflow",
"pt-br",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | unicamp-dl | null | unicamp-dl/ptt5-base-pt-msmarco-100k-v2 | 2 | null | transformers | 24,840 | ---
language: pt
license: mit
tags:
- msmarco
- t5
- pytorch
- tensorflow
- pt
- pt-br
datasets:
- msmarco
widget:
- text: "Texto de exemplo em português"
inference: false
---
# PTT5-base Reranker finetuned on Portuguese MS MARCO
## Introduction
ptt5-base-pt-msmarco-100k-v2 is a T5-based model pretrained on the BrWac corpus and fine-tuned on a Portuguese-translated version of the MS MARCO passage dataset. In version v2, the Portuguese dataset was translated using Google Translate. This model was fine-tuned for 100k steps.
Further information about the dataset and the translation method can be found in our paper [**mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset**](https://arxiv.org/abs/2108.13897) and in the [mMARCO](https://github.com/unicamp-dl/mMARCO) repository.
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
model_name = 'unicamp-dl/ptt5-base-pt-msmarco-100k-v2'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
```
# Citation
If you use ptt5-base-pt-msmarco-100k-v2, please cite:
@misc{bonifacio2021mmarco,
title={mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset},
author={Luiz Henrique Bonifacio and Vitor Jeronymo and Hugo Queiroz Abonizio and Israel Campiotti and Marzieh Fadaee and Roberto Lotufo and Rodrigo Nogueira},
year={2021},
eprint={2108.13897},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
|
unicamp-dl/ptt5-base-pt-msmarco-10k-v2 | f6e9757f00db313ac1412c6911dedc9144882776 | 2022-01-06T13:41:02.000Z | [
"pytorch",
"t5",
"text2text-generation",
"pt",
"dataset:msmarco",
"arxiv:2108.13897",
"transformers",
"msmarco",
"tensorflow",
"pt-br",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | unicamp-dl | null | unicamp-dl/ptt5-base-pt-msmarco-10k-v2 | 2 | null | transformers | 24,841 | ---
language: pt
license: mit
tags:
- msmarco
- t5
- pytorch
- tensorflow
- pt
- pt-br
datasets:
- msmarco
widget:
- text: "Texto de exemplo em português"
inference: false
---
# PTT5-base Reranker finetuned on Portuguese MS MARCO
## Introduction
ptt5-base-msmarco-pt-10k-v2 is a T5-based model pretrained on the BrWaC corpus and fine-tuned on a Portuguese translated version of the MS MARCO passage dataset. In the v2 version, the Portuguese dataset was translated using Google Translate. This model was fine-tuned for 10k steps.
Further information about the dataset and the translation method can be found in our paper [**mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset**](https://arxiv.org/abs/2108.13897) and in the [mMARCO](https://github.com/unicamp-dl/mMARCO) repository.
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
model_name = 'unicamp-dl/ptt5-base-msmarco-pt-10k-v2'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
```
# Citation
If you use ptt5-base-msmarco-pt-10k-v2, please cite:
@misc{bonifacio2021mmarco,
title={mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset},
author={Luiz Henrique Bonifacio and Vitor Jeronymo and Hugo Queiroz Abonizio and Israel Campiotti and Marzieh Fadaee and Roberto Lotufo and Rodrigo Nogueira},
year={2021},
eprint={2108.13897},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
|
upskyy/kobart-summarization | 6d8712cfac8b9271cc211e0ed90b846d774d6726 | 2021-10-03T05:20:36.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | upskyy | null | upskyy/kobart-summarization | 2 | 1 | transformers | 24,842 | Entry not found |
uutkras/Pandabot | 825f2bb7991facdd189d26b3ed16eac6ebc9b003 | 2021-08-27T07:32:28.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | uutkras | null | uutkras/Pandabot | 2 | 1 | transformers | 24,843 | ---
tags:
- conversational
---
# ut friend |
vahmohh/t5-qag-base | 9f1d49169c3ef5e409604474ba4c8cc14388a027 | 2021-06-23T14:36:28.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | vahmohh | null | vahmohh/t5-qag-base | 2 | null | transformers | 24,844 | [www.github.com/vahmohh/masters-thesis](https://www.github.com/vahmohh/masters-thesis)
The model has been built upon the pre-trained T5 model by fine-tuning it on the SQuAD dataset for the purpose of automatic question and answer generation.
The following format should be used for generating questions.
```sh
generate question: domain_specific_text </sep> answer_1 </sep> answer_2 </sep> ... </sep> answer_n </end>
```
Output:
```sh
question_1 </sep> question_2 </sep> ... </sep> question_n </end>
```
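For question generation, a minimal Python sketch following the format above (not part of the original repository; the passage, answers, and decoding settings are illustrative):
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("vahmohh/t5-qag-base")
model = T5ForConditionalGeneration.from_pretrained("vahmohh/t5-qag-base")

text = "The Amazon rainforest is a moist broadleaf forest that covers most of the Amazon basin in South America."
answers = ["Amazon rainforest", "South America"]
prompt = f"generate question: {text} </sep> {' </sep> '.join(answers)} </end>"

input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output = model.generate(input_ids, max_length=128, num_beams=4)

# The generated sequence contains one question per answer, separated by </sep>.
print(tokenizer.decode(output[0], skip_special_tokens=False))
```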
The following format should be used for generating answers.
```sh
generate answer: domain_specific_text </end>
```
Output:
```sh
answer_1 </sep> answer_2 </sep> ... </sep> answer_n </end>
``` |
valhalla/s2t_librispeech_large | 2cf9cebc02dc4dbb7092476183e614edeed1f9ee | 2021-02-26T14:25:12.000Z | [
"pytorch",
"speech_to_text_transformer",
"text2text-generation",
"en",
"dataset:librispeech_asr",
"transformers",
"audio",
"automatic-speech-recognition",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | valhalla | null | valhalla/s2t_librispeech_large | 2 | null | transformers | 24,845 | ---
language: en
datasets:
- librispeech_asr
tags:
- audio
- automatic-speech-recognition
license: apache-2.0
---
TODO: [To be filled]
## Evaluation on LibriSpeech Test
The following script shows how to evaluate this model on the [LibriSpeech](https://huggingface.co/datasets/librispeech_asr) *"clean"* and *"other"* test dataset.
```python
from datasets import load_dataset
from transformers import Speech2TextTransformerForConditionalGeneration, Speech2TextTransformerTokenizer
import soundfile as sf
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") # change to "other" for other test dataset
model = Speech2TextTransformerForConditionalGeneration.from_pretrained("valhalla/s2t_librispeech_large").to("cuda")
tokenizer = Speech2TextTransformerTokenizer.from_pretrained("valhalla/s2t_librispeech_large", do_upper_case=True)
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
librispeech_eval = librispeech_eval.map(map_to_array)
def map_to_pred(batch):
features = tokenizer(batch["speech"], sample_rate=16000, padding=True, return_tensors="pt")
input_features = features.input_features.to("cuda")
attention_mask = features.attention_mask.to("cuda")
gen_tokens = model.generate(input_ids=input_features, attention_mask=attention_mask)
batch["transcription"] = tokenizer.batch_decode(gen_tokens, skip_special_tokens=True)
return batch
result = librispeech_eval.map(map_to_pred, batched=True, batch_size=8, remove_columns=["speech"])
print("WER:", wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
|---|---|
| 3.3 | 7.5 | |
valhalla/s2t_mustc_en_fr_small | 12b820783f11000aa43cf3909bfd9c6e49def402 | 2021-02-26T14:34:11.000Z | [
"pytorch",
"speech_to_text_transformer",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | valhalla | null | valhalla/s2t_mustc_en_fr_small | 2 | null | transformers | 24,846 | Entry not found |
valurank/distilroberta-mbfc-bias-4class | 378ac409decb8b00d50c20150570a6b0df0aea7f | 2022-06-08T20:29:05.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:other",
"model-index"
] | text-classification | false | valurank | null | valurank/distilroberta-mbfc-bias-4class | 2 | null | transformers | 24,847 | ---
license: other
tags:
- generated_from_trainer
model-index:
- name: distilroberta-mbfc-bias-4class
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-mbfc-bias-4class
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5336
- Acc: 0.8503
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 12345
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 16
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.488 | 1.0 | 584 | 0.3702 | 0.8519 |
| 0.3544 | 2.0 | 1168 | 0.3531 | 0.8575 |
| 0.3602 | 3.0 | 1752 | 0.3068 | 0.8896 |
| 0.2555 | 4.0 | 2336 | 0.3560 | 0.8715 |
| 0.1695 | 5.0 | 2920 | 0.3896 | 0.8704 |
| 0.117 | 6.0 | 3504 | 0.5336 | 0.8503 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
|
vasudevgupta/dl-hack-gpt2-large | 98fd305889810be22a07a7c16a0521037bc22797 | 2021-05-23T13:34:31.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | vasudevgupta | null | vasudevgupta/dl-hack-gpt2-large | 2 | null | transformers | 24,848 | DL research papers **Title -> abstract**
**Using this model**
```python
from transformers import pipeline, GPT2LMHeadModel, GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("vasudevgupta/dl-hack-gpt2-large")
model = GPT2LMHeadModel.from_pretrained("vasudevgupta/dl-hack-gpt2-large")
agent = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(agent("An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale", max_length=200))
``` |
vasudevgupta/mbart-bhasha-guj-eng | 3edb80a4d7d27c3efa9c7c9032306b551a871679 | 2021-05-12T03:30:44.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"dataset:pib",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | vasudevgupta | null | vasudevgupta/mbart-bhasha-guj-eng | 2 | null | transformers | 24,849 | ---
datasets: pib
widget:
- text: "હેય! હું વાસુદેવ ગુપ્તા છું"
---
mBART (a pre-trained model by Facebook) is pre-trained to de-noise multiple languages simultaneously with BART objective.
The checkpoint available in this repository was obtained by fine-tuning `facebook/mbart-large-cc25` on all samples (~60K) of the Bhasha (pib_v1.3) Gujarati-English parallel corpus. It gives decent results for Gujarati-English translation. |
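A translation sketch (not part of the original card): it assumes the standard `mbart-large-cc25` language codes (`gu_IN` to `en_XX`) and that this repository ships the corresponding mBART tokenizer.
```python
# Hedged usage sketch; language codes and generation settings are assumptions.
from transformers import MBartForConditionalGeneration, MBartTokenizer

model_name = "vasudevgupta/mbart-bhasha-guj-eng"
tokenizer = MBartTokenizer.from_pretrained(model_name, src_lang="gu_IN", tgt_lang="en_XX")
model = MBartForConditionalGeneration.from_pretrained(model_name)

batch = tokenizer("હેય! હું વાસુદેવ ગુપ્તા છું", return_tensors="pt")
generated = model.generate(**batch, decoder_start_token_id=tokenizer.lang_code_to_id["en_XX"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```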
veronica320/ADEPT_roberta-l | 02701e410da39ad078d0300c2dd9768be9c9d074 | 2022-05-03T02:28:02.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | veronica320 | null | veronica320/ADEPT_roberta-l | 2 | null | transformers | 24,850 | Entry not found |
vesteinn/IceBERT-QA | 2e3bc0af6ac0d1afbfe5522fdffd9b61cfe9f0a1 | 2021-07-19T11:25:25.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | vesteinn | null | vesteinn/IceBERT-QA | 2 | null | transformers | 24,851 | ---
language:
- is
thumbnail:
tags:
- icelandic
- qa
license:
datasets:
- ic3
- igc
metrics:
- em
- f1
widget:
- text: "Hvenær var Halldór Laxness í menntaskóla ?"
context: "Halldór Laxness ( Halldór Kiljan ) fæddist í Reykjavík 23. apríl árið 1902 og átti í fyrstu heima við Laugaveg en árið 1905 settist fjölskyldan að í Laxnesi í Mosfellssveit . Þar ólst Halldór upp en sótti skóla í Reykjavík á unglingsárum . Ungur hélt hann síðan utan og var langdvölum erlendis um árabil – í ýmsum Evrópulöndum og síðar í Ameríku . Þegar hann var heima bjó hann í Reykjavík þar til hann og kona hans , Auður Sveinsdóttir , byggðu sér húsið Gljúfrastein í Mosfellssveit og fluttu þangað árið 1945 . Þar var heimili þeirra alla tíð síðan og þar er nú safn til minningar um þau . Halldór lést 8. febrúar 1998 . Skólaganga Halldórs varð ekki löng . Árið 1918 hóf hann nám við Menntaskólann í Reykjavík en hafði lítinn tíma til að læra , enda var hann að skrifa skáldsögu , Barn náttúrunnar , sem kom út haustið 1919 – þá þegar var höfundurinn ungi farinn af landi brott . Sagan vakti þó nokkra athygli og í Alþýðublaðinu sagði m.a. : „ Og hver veit nema að Halldór frá Laxnesi eigi eftir að verða óskabarn íslensku þjóðarinnar . “ Upp frá þessu sendi Halldór frá sér bók nánast á hverju ári , stundum fleiri en eina , í yfir sex áratugi . Afköst hans voru með eindæmum ; hann skrifaði fjölda skáldsagna , sumar í nokkrum hlutum , leikrit , kvæði , smásagnasöfn og endurminningabækur og gaf auk þess út mörg greinasöfn og ritgerðir . Bækurnar eru fjölbreyttar en eiga það sameiginlegt að vera skrifaðar af einstakri stílgáfu , djúpum mannskilningi og víðtækri þekkingu á sögu og samfélagi . Þar birtast oft afgerandi skoðanir á þjóðfélagsmálum og sögupersónur eru margar einkar eftirminnilegar ; tilsvör þeirra og lunderni hafa orðið samofin þjóðarsálinni . Þekktustu verk Halldórs eru eflaust skáldsögurnar stóru og rismiklu , s.s. Salka Valka , Sjálfstætt fólk , Heimsljós , Íslandsklukkan og Gerpla , og raunar mætti telja upp mun fleiri ; Kvæðabók hans er í uppáhaldi hjá mörgum sem og minningabækurnar sem hann skrifaði á efri árum um æskuár sín ; af þekktum greinasöfnum og ritgerðum má nefna Alþýðubókina og Skáldatíma . Mikið hefur verið skrifað um verk og ævi skáldsins , en hér skal aðeins bent á ítarlega frásögn og greiningu Halldórs Guðmundssonar í bókinni Halldór Laxness – ævisaga ."
---
# IceBERT-QA
## Model description
This is an Icelandic reading comprehension Q&A model.
## Intended uses & limitations
This model is part of my MSc thesis about Q&A for Icelandic.
#### How to use
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("vesteinn/IceBERT-QA")
model = AutoModelForQuestionAnswering.from_pretrained("vesteinn/IceBERT-QA")
```
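For inference, the model can also be wrapped in a standard question-answering pipeline. A minimal sketch (not part of the original card), using a shortened version of the widget example:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="vesteinn/IceBERT-QA", tokenizer="vesteinn/IceBERT-QA")
result = qa(
    question="Hvenær var Halldór Laxness í menntaskóla ?",
    context="Árið 1918 hóf Halldór Laxness nám við Menntaskólann í Reykjavík en hafði lítinn tíma til að læra.",
)
print(result["answer"], result["score"])
```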
#### Limitations and bias
## Training data
Translated English datasets were used along with the Natural Questions in Icelandic dataset.
## Training procedure
## Eval results
### BibTeX entry and citation info
```bibtex
```
|
vesteinn/IceBERT-finetuned-iec-sentence | 920f88e179313866eb77a065a1fd209af1b1dbce | 2021-11-05T18:27:01.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:gpl-3.0",
"model-index"
] | text-classification | false | vesteinn | null | vesteinn/IceBERT-finetuned-iec-sentence | 2 | null | transformers | 24,852 | ---
license: gpl-3.0
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: IceBERT-finetuned-iec-sentence
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IceBERT-finetuned-iec-sentence
This model is a fine-tuned version of [vesteinn/IceBERT](https://huggingface.co/vesteinn/IceBERT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4438
- Matthews Correlation: 0.6062
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 455 | 0.5283 | 0.4755 |
| 0.5696 | 2.0 | 910 | 0.4889 | 0.5272 |
| 0.4898 | 3.0 | 1365 | 0.4508 | 0.5793 |
| 0.4508 | 4.0 | 1820 | 0.4340 | 0.6042 |
| 0.4153 | 5.0 | 2275 | 0.4438 | 0.6062 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.8.0
- Datasets 1.15.1
- Tokenizers 0.10.3
|
vesteinn/XLMR-ENIS-finetuned-stsb | 0c39ba4063834130dd215162a40d04ef1c0590e1 | 2021-10-14T10:28:20.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:agpl-3.0",
"sentence-similarity",
"model-index"
] | sentence-similarity | false | vesteinn | null | vesteinn/XLMR-ENIS-finetuned-stsb | 2 | null | transformers | 24,853 | ---
license: agpl-3.0
pipeline_tag: sentence-similarity
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: XLMR-ENIS-finetuned-stsb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: 0.8887885342806044
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLMR-ENIS-finetuned-stsb
This model is a fine-tuned version of [vesteinn/XLMR-ENIS](https://huggingface.co/vesteinn/XLMR-ENIS) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5232
- Pearson: 0.8915
- Spearmanr: 0.8888
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|
| No log | 1.0 | 360 | 0.6330 | 0.8562 | 0.8570 |
| 1.2835 | 2.0 | 720 | 0.6368 | 0.8790 | 0.8781 |
| 0.4518 | 3.0 | 1080 | 0.5352 | 0.8883 | 0.8852 |
| 0.4518 | 4.0 | 1440 | 0.4881 | 0.8910 | 0.8885 |
| 0.288 | 5.0 | 1800 | 0.5232 | 0.8915 | 0.8888 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.0
- Tokenizers 0.10.3
|
vesteinn/XLMR-ENIS | 6eeab159d99df459361f3cf7921e8dfd502161cc | 2021-09-27T22:09:54.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | vesteinn | null | vesteinn/XLMR-ENIS | 2 | null | transformers | 24,854 | ---
language:
- is
- en
thumbnail:
tags:
- icelandic
- xlmr
license: agpl-3.0
datasets:
- ic3
- igc
- books3
pipeline: fill-mask
widget:
- text: "The capital of Iceland is<mask> ."
- text: "Höfuðborg Íslands er<mask> ."
---
# XLMR-ENIS
## Model description
This is an XLM-R model trained on Icelandic and English text.
## Intended uses & limitations
This model is part of my MSc thesis about Q&A for Icelandic.
#### How to use
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("vesteinn/XLMR-ENIS")
model = AutoModelForMaskedLM.from_pretrained("vesteinn/XLMR-ENIS")
```
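A minimal fill-mask check (not part of the original card), reusing one of the widget prompts:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="vesteinn/XLMR-ENIS", tokenizer="vesteinn/XLMR-ENIS")
print(fill_mask("Höfuðborg Íslands er<mask> ."))
```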
#### Limitations and bias
## Training data
## Training procedure
## Eval results
### BibTeX entry and citation info
```bibtex
```
|
vesteinn/open-qa-icelandic-english-densephrases | 34dd9a8d54612cc20f68872a019ee92c4ad90ff5 | 2021-09-30T10:40:18.000Z | [
"pytorch",
"xlm-roberta",
"transformers"
] | null | false | vesteinn | null | vesteinn/open-qa-icelandic-english-densephrases | 2 | null | transformers | 24,855 | Entry not found |
vibranium19/DialoGPT-medium-jake | e623b83922b40c70f62e2db21cc9cbefd06df459 | 2021-09-16T21:34:11.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | vibranium19 | null | vibranium19/DialoGPT-medium-jake | 2 | null | transformers | 24,856 | ---
tags:
- conversational
---
# Jake Peralta DialoGPT Model |
vidhur2k/mBERT-Arabic-Mono | 5e3ece52b85bf93358226078175a6a2f4a047a72 | 2021-12-03T06:01:59.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | vidhur2k | null | vidhur2k/mBERT-Arabic-Mono | 2 | null | transformers | 24,857 | Entry not found |
vidhur2k/mBERT-Danish-Mono | 6bd48d904e8d39af5d76d7ac12d7ec7a22e3044c | 2021-12-03T05:16:39.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | vidhur2k | null | vidhur2k/mBERT-Danish-Mono | 2 | null | transformers | 24,858 | Entry not found |
vidhur2k/mBERT-English-Mono | da2ad018405e488d32e35c65c40b2e562177a470 | 2021-12-03T11:33:02.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | vidhur2k | null | vidhur2k/mBERT-English-Mono | 2 | null | transformers | 24,859 | Entry not found |
vidhur2k/mBERT-Indonesian-Mono | b4c87c4ec52c863c2b264fee9a2316fa4c993cbc | 2021-12-03T20:20:09.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | vidhur2k | null | vidhur2k/mBERT-Indonesian-Mono | 2 | null | transformers | 24,860 | Entry not found |
vidhur2k/mBERT-RomanceLang | e2dc0e87d49e901f5c294d2cc2b3b814d8bd4622 | 2021-12-06T06:37:31.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | vidhur2k | null | vidhur2k/mBERT-RomanceLang | 2 | null | transformers | 24,861 | Entry not found |
vinaydngowda/xlnettest | b652464dacc1250426ce538e6ebd9b62d7dcedd0 | 2022-01-14T19:38:07.000Z | [
"pytorch",
"xlnet",
"text-classification",
"transformers"
] | text-classification | false | vinaydngowda | null | vinaydngowda/xlnettest | 2 | null | transformers | 24,862 | Entry not found |
vishnun/distilgpt2-finetuned-tamil-gpt | 1bed75d51c9d72094d37d48aec29238a7c370ea4 | 2021-08-16T14:25:43.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] | text-generation | false | vishnun | null | vishnun/distilgpt2-finetuned-tamil-gpt | 2 | null | transformers | 24,863 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
model_index:
- name: distilgpt2-finetuned-tamil-gpt
results:
- task:
name: Causal Language Modeling
type: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-tamil-gpt
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4097
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 228 | 4.4097 |
| No log | 2.0 | 456 | 4.4097 |
| 4.3169 | 3.0 | 684 | 4.4097 |
| 4.3169 | 4.0 | 912 | 4.4097 |
| 4.3116 | 5.0 | 1140 | 4.4097 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
visualjoyce/chengyubert_2stage_stage1_wwm_ext | 3525ae82a969973d70aed2e6ca19b91c91dbf596 | 2021-05-20T09:00:46.000Z | [
"pytorch",
"jax",
"bert",
"transformers"
] | null | false | visualjoyce | null | visualjoyce/chengyubert_2stage_stage1_wwm_ext | 2 | null | transformers | 24,864 | Entry not found |
visualjoyce/transformers4vl-uniter-base | 20369e0344e1ea6f16a97d091f0297ab91be8ebb | 2021-07-10T10:48:06.000Z | [
"pytorch",
"uniter",
"transformers"
] | null | false | visualjoyce | null | visualjoyce/transformers4vl-uniter-base | 2 | null | transformers | 24,865 | Entry not found |
vocab-transformers/dense_encoder-msmarco-distilbert-word2vec256k_emb_updated | 893a984658621e00b4ea6526e0b6c0a69de0f062 | 2022-02-21T20:13:11.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | vocab-transformers | null | vocab-transformers/dense_encoder-msmarco-distilbert-word2vec256k_emb_updated | 2 | null | sentence-transformers | 24,866 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# dense_encoder-msmarco-distilbert-word2vec256k
**Note: Token embeddings were updated!**
This model is based on [msmarco-word2vec256000-distilbert-base-uncased](https://huggingface.co/nicoladecao/msmarco-word2vec256000-distilbert-base-uncased) with a 256k sized vocabulary initialized with word2vec.
It has been trained on MS MARCO using [MarginMSELoss](https://github.com/UKPLab/sentence-transformers/blob/master/examples/training/ms_marco/train_bi-encoder_margin-mse.py). See the train_script.py in this repository.
Performance:
- MS MARCO dev: 34.51 (MRR@10)
- TREC-DL 2019: 66.12 (nDCG@10)
- TREC-DL 2020: 68.62 (nDCG@10)
## Usage (Sentence-Transformers)
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('vocab-transformers/dense_encoder-msmarco-distilbert-word2vec256k_emb_updated')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('vocab-transformers/dense_encoder-msmarco-distilbert-word2vec256k_emb_updated')
model = AutoModel.from_pretrained('vocab-transformers/dense_encoder-msmarco-distilbert-word2vec256k_emb_updated')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=vocab-transformers/dense_encoder-msmarco-distilbert-word2vec256k_emb_updated)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 7858 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MarginMSELoss.MarginMSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 30,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 250, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
vocab-transformers/msmarco-distilbert-word2vec256k-MLM_400k | 1ffb2cb7965e594838c770287708a7d87e78433a | 2022-02-22T17:03:11.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | vocab-transformers | null | vocab-transformers/msmarco-distilbert-word2vec256k-MLM_400k | 2 | null | transformers | 24,867 | # Model
This model is based on [nicoladecao/msmarco-word2vec256000-distilbert-base-uncased](https://huggingface.co/nicoladecao/msmarco-word2vec256000-distilbert-base-uncased) with a 256k sized vocabulary initialized with word2vec.
This model has been trained with MLM on the MS MARCO corpus collection for 400k steps. See train_mlm.py for the train script. It was run on 2x V100 GPUs. The word embedding matrix was frozen.
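A minimal masked-prediction check (not part of the original card); the mask token is read from the tokenizer rather than hard-coded:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="vocab-transformers/msmarco-distilbert-word2vec256k-MLM_400k")
prompt = f"The capital of France is {fill_mask.tokenizer.mask_token}."
print(fill_mask(prompt))
```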
|
vovaf709/bert_mlm_negative | 03a5bcd86c4d310abe87dbaf0d3eca5265335b46 | 2021-12-17T16:33:44.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | vovaf709 | null | vovaf709/bert_mlm_negative | 2 | null | transformers | 24,868 | Entry not found |
vovaf709/bert_mlm_positive | 9f8338564508651c4af998cef8c7e75404a83c45 | 2021-12-17T16:34:39.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | vovaf709 | null | vovaf709/bert_mlm_positive | 2 | null | transformers | 24,869 | Entry not found |
voxmenthe/distilbert-base-uncased-finetuned-emotion | 746ce9e0da6ad217bf93be03dce97a82afeed228 | 2022-02-14T02:13:02.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | voxmenthe | null | voxmenthe/distilbert-base-uncased-finetuned-emotion | 2 | null | transformers | 24,870 | Entry not found |
vr25/fin_BERT-v1 | cf7b9681617452d9afa4a2c131de0306e73edd2e | 2021-05-20T23:05:00.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | vr25 | null | vr25/fin_BERT-v1 | 2 | null | transformers | 24,871 | Entry not found |
vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt-nncf-57.92sparse-qat-lt | 0e842dc7c75139903d38f8a3e23b0d782c736e84 | 2022-02-08T22:58:08.000Z | [
"pytorch",
"onnx",
"bert",
"transformers"
] | null | false | vuiseng9 | null | vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt-nncf-57.92sparse-qat-lt | 2 | null | transformers | 24,872 | This model is a downstream optimization of [```vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt```](https://huggingface.co/vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt) using [OpenVINO/NNCF](https://github.com/openvinotoolkit/nncf). Applied optimization includes:
1. magnitude sparsification at 57.92% upon initialization so that sparsity over all linear layers of bert-base is at 90%. Parameters are ranked globally via their absolute norm. Only linear layers of self-attention and ffnn are targeted.
2. NNCF Quantize-Aware Training - Symmetric 8-bit for both weight and activation on all learnable layers.
3. Custom distillation with large model ```bert-large-uncased-whole-word-masking-finetuned-squad```
```
eval_exact_match = 80.4541
eval_f1 = 87.6832
eval_samples = 10784
```
# Setup
```bash
# OpenVINO/NNCF
git clone https://github.com/vuiseng9/nncf && cd nncf
git checkout tld-poc
git reset --hard 1dec7afe7a4b567c059fcf287ea2c234980fded2
python setup.py develop
pip install -r examples/torch/requirements.txt
# Huggingface nn_pruning
git clone https://github.com/vuiseng9/nn_pruning && cd nn_pruning
git checkout reproduce-evaluation
git reset --hard 2d4e196d694c465e43e5fbce6c3836d0a60e1446
pip install -e ".[dev]"
# Huggingface Transformers
git clone https://github.com/vuiseng9/transformers && cd transformers
git checkout tld-poc
git reset --hard 10a1e29d84484e48fd106f58957d9ffc89dc43c5
pip install -e .
head -n 1 examples/pytorch/question-answering/requirements.txt | xargs -i pip install {}
# Additional dependencies
pip install onnx
```
# Train
```bash
git clone https://huggingface.co/vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt
BASE_MODEL=/path/to/cloned_repo_above #to-revise
wget https://huggingface.co/vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt-nncf-57.92sparse-qat-lt/raw/main/nncf_bert_squad_sparsity.json
NNCF_CFG=/path/to/downloaded_nncf_cfg_above #to-revise
OUTROOT=/path/to/train_output_root #to-revise
WORKDIR=transformers/examples/pytorch/question-answering #to-revise
RUNID=bert-base-squadv1-block-pruning-hybrid-filled-lt-nncf-57.92sparse-qat-lt
cd $WORKDIR
OUTDIR=$OUTROOT/$RUNID
mkdir -p $OUTDIR
export CUDA_VISIBLE_DEVICES=0
NEPOCH=5
python run_qa.py \
--model_name_or_path vuiseng9/bert-base-squadv1-block-pruning-hybrid \
--optimize_model_before_eval \
--optimized_checkpoint $BASE_MODEL \
--dataset_name squad \
--do_eval \
--do_train \
--evaluation_strategy steps \
--eval_steps 250 \
--learning_rate 3e-5 \
--lr_scheduler_type cosine_with_restarts \
--warmup_ratio 0.25 \
--cosine_cycles 1 \
--teacher bert-large-uncased-whole-word-masking-finetuned-squad \
--teacher_ratio 0.9 \
--num_train_epochs $NEPOCH \
--per_device_eval_batch_size 128 \
--per_device_train_batch_size 16 \
--max_seq_length 384 \
--doc_stride 128 \
--save_steps 250 \
--nncf_config $NNCF_CFG \
--logging_steps 1 \
--overwrite_output_dir \
--run_name $RUNID \
--output_dir $OUTDIR
```
# Eval
This repo must be cloned locally.
```bash
git clone https://huggingface.co/vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt-nncf-57.92sparse-qat-lt
MODELROOT=/path/to/cloned_repo_above #to-revise
export CUDA_VISIBLE_DEVICES=0
OUTDIR=eval-bert-base-squadv1-block-pruning-hybrid-filled-lt-nncf-57.92sparse-qat-lt
WORKDIR=transformers/examples/pytorch/question-answering #to-revise
cd $WORKDIR
mkdir $OUTDIR
nohup python run_qa.py \
--model_name_or_path vuiseng9/bert-base-squadv1-block-pruning-hybrid \
--dataset_name squad \
--optimize_model_before_eval \
--qat_checkpoint $MODELROOT/checkpoint-21750 \
--nncf_config $MODELROOT/nncf_bert_squad_sparsity.json \
--to_onnx $OUTDIR/bert-base-squadv1-block-pruning-hybrid-filled-lt-nncf-57.92sparse-qat-lt.onnx \
--do_eval \
--per_device_eval_batch_size 128 \
--max_seq_length 384 \
--doc_stride 128 \
--overwrite_output_dir \
--output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log &
```
### tile-alignment
To evaluate the tile-alignment checkpoint, add ```--tile_alignment``` and point ```--qat_checkpoint``` to the checkpoint with the 'tilealigned' postfix. Use branch ```tld-poc``` with commit id ```c525c52cq``` |
vuiseng9/bert-base-squadv1-pruneofa-90pc-bt | b3285931cfd5020d62b6d77e451ec7ee60c95291 | 2022-01-18T19:13:21.000Z | [
"pytorch",
"onnx",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | vuiseng9 | null | vuiseng9/bert-base-squadv1-pruneofa-90pc-bt | 2 | null | transformers | 24,873 | This model is transfer-learning of [bert-base pruneofa 90% sparse](https://huggingface.co/Intel/bert-base-uncased-sparse-90-unstructured-pruneofa) on Squadv1 dataset.
```
eval_exact_match = 80.2933
eval_f1 = 87.6788
eval_samples = 10784
```
# Train
use https://github.com/IntelLabs/Model-Compression-Research-Package.git
see ```pruneofa-transfer-learning.sh```
# Eval
```bash
export CUDA_VISIBLE_DEVICES=0
OUTDIR=eval-bert-base-squadv1-pruneofa-90pc-bt
WORKDIR=transformers/examples/pytorch/question-answering
cd $WORKDIR
nohup python run_qa.py \
--model_name_or_path vuiseng9/bert-base-squadv1-pruneofa-90pc-bt \
--dataset_name squad \
--do_eval \
--per_device_eval_batch_size 128 \
--max_seq_length 384 \
--doc_stride 128 \
--overwrite_output_dir \
--output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log &
```
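For a quick interactive check without the full evaluation harness, a question-answering pipeline sketch (not part of the original card; the question and context are arbitrary examples):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="vuiseng9/bert-base-squadv1-pruneofa-90pc-bt")
print(qa(question="How sparse is the base model?",
         context="The base model is pruned to 90% unstructured sparsity before transfer learning on SQuAD."))
```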
|
vuiseng9/bert-base-uncased-squadv1-72.9-sparse | 8371e81c0b17580760fa6d52710cf3647003ae56 | 2021-11-11T18:13:18.000Z | [
"pytorch",
"tf",
"bert",
"transformers"
] | null | false | vuiseng9 | null | vuiseng9/bert-base-uncased-squadv1-72.9-sparse | 2 | null | transformers | 24,874 | * A set of unstructured sparse bert-base-uncased models fine-tuned for SQuADv1.
* Tensorflow models are created using ```TFAutoModelForQuestionAnswering.from_pretrained(..., from_pt=True)``` and ```model.save_pretrained(tf_pth)```.
* Observed issue: there is a loss in model conversion; a discrepancy is observed in evaluation between the PyTorch and TensorFlow models.
* The table below is evaluated with HF's transformers v4.9.2. Sparsity is normalized to the dense layers in attention heads and FFNN.
* Evaluation cli:
```bash
python run_qa.py \
--model_name_or_path <model identifier> \
--dataset_name squad \
--do_eval \
--per_device_eval_batch_size 384 \
--max_seq_length 68 \
--doc_stride 26 \
--output_dir /tmp/eval-squad
```
| | HF Model Hub Identifier | sparsity | em (pytorch) | em (tf) | f1 (pytorch) | f1 (tf) |
|---:|:------------------------------------------------------------------------------------------------------------------------|-----------:|---------------:|----------:|---------------:|----------:|
| 0 | [vuiseng9/bert-base-uncased-squadv1-85.4-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-85.4-sparse) | 85.4 | 69.9338 | 14.2573 | 77.6861 | 23.4917 |
| 1 | [vuiseng9/bert-base-uncased-squadv1-72.9-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-72.9-sparse) | 72.9 | 74.6358 | 31.0596 | 82.2555 | 39.8446 |
| 2 | [vuiseng9/bert-base-uncased-squadv1-65.1-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-65.1-sparse) | 65.1 | 76.1306 | 43.0274 | 83.4117 | 51.4300 |
| 3 | [vuiseng9/bert-base-uncased-squadv1-59.6-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-59.6-sparse) | 59.6 | 76.8590 | 50.4920 | 84.1267 | 59.0881 |
| 4 | [vuiseng9/bert-base-uncased-squadv1-52.0-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-52.0-sparse) | 52.0 | 78.0038 | 54.2857 | 85.2000 | 62.2914 | |
vuiseng9/pegasus-arxiv | 86ce966387a7da19c81bdb084d700b431360a6ed | 2021-12-21T02:23:21.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | vuiseng9 | null | vuiseng9/pegasus-arxiv | 2 | null | transformers | 24,875 | This model is developed with transformers v4.13 with minor patch in this [fork](https://github.com/vuiseng9/transformers/tree/pegasus-v4p13).
# Setup
```bash
git clone https://github.com/vuiseng9/transformers
cd transformers
git checkout pegasus-v4p13 && git reset --hard 41eeb07
# installation, set summarization dependency
# . . .
```
# Train
```bash
#!/usr/bin/env bash
export CUDA_VISIBLE_DEVICES=0,1,2,3
NEPOCH=10
RUNID=pegasus-arxiv-${NEPOCH}eph-run1
OUTDIR=/data1/vchua/pegasus-hf4p13/pegasus-ft/${RUNID}
mkdir -p $OUTDIR
python run_summarization.py \
--model_name_or_path google/pegasus-large \
--dataset_name ccdv/arxiv-summarization \
--do_train \
--adafactor \
--learning_rate 8e-4 \
--label_smoothing_factor 0.1 \
--num_train_epochs $NEPOCH \
--per_device_train_batch_size 2 \
--do_eval \
--per_device_eval_batch_size 2 \
--num_beams 8 \
--max_source_length 1024 \
--max_target_length 256 \
--evaluation_strategy steps \
--eval_steps 10000 \
--save_strategy steps \
--save_steps 5000 \
--logging_steps 1 \
--overwrite_output_dir \
--run_name $RUNID \
--output_dir $OUTDIR > $OUTDIR/run.log 2>&1 &
```
# Eval
```bash
#!/usr/bin/env bash
export CUDA_VISIBLE_DEVICES=3
DT=$(date +%F_%H-%M)
RUNID=pegasus-arxiv-${DT}
OUTDIR=/data1/vchua/pegasus-hf4p13/pegasus-eval/${RUNID}
mkdir -p $OUTDIR
python run_summarization.py \
--model_name_or_path vuiseng9/pegasus-arxiv \
--dataset_name ccdv/arxiv-summarization \
--max_source_length 1024 \
--max_target_length 256 \
--do_predict \
--per_device_eval_batch_size 8 \
--predict_with_generate \
--num_beams 8 \
--overwrite_output_dir \
--run_name $RUNID \
--output_dir $OUTDIR > $OUTDIR/run.log 2>&1 &
```
Although fine-tuning was carried out for 5 epochs, this model is the checkpoint (@150000 steps, 5.91 epochs, 34 hrs) with the lowest eval loss during training. Test/predict with this checkpoint should give the results below. Note that we observe the model at 80000 steps is close to the published result from HF.
```
***** predict metrics *****
predict_gen_len = 210.0925
predict_loss = 1.7192
predict_rouge1 = 46.1383
predict_rouge2 = 19.1393
predict_rougeL = 27.7573
predict_rougeLsum = 41.583
predict_runtime = 2:40:25.86
predict_samples = 6440
predict_samples_per_second = 0.669
predict_steps_per_second = 0.084
``` |
vuiseng9/pegasus-xsum | 66280b21a24f22c0b81b09387df05abb879f8689 | 2022-01-23T02:33:40.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | vuiseng9 | null | vuiseng9/pegasus-xsum | 2 | null | transformers | 24,876 | This model is developed with transformers v4.13 with minor patch in this [fork](https://github.com/vuiseng9/transformers/tree/pegasus-v4p13).
# Setup
```bash
git clone https://github.com/vuiseng9/transformers
cd transformers
git checkout pegasus-v4p13 && git reset --hard 3db4b452
# installation, set summarization dependency
# . . .
```
# Train
```bash
#!/usr/bin/env bash
export CUDA_VISIBLE_DEVICES=0,1 # 2 cards on xsum
NEPOCH=10
RUNID=pegasus-xsum-${NEPOCH}eph-run1
OUTDIR=/data1/vchua/pegasus-hf4p13/pegasus/${RUNID}
mkdir -p $OUTDIR
nohup python run_summarization.py \
--model_name_or_path google/pegasus-large \
--dataset_name xsum \
--do_train \
--adafactor \
--learning_rate 1e-4 \
--label_smoothing_factor 0.1 \
--num_train_epochs $NEPOCH \
--per_device_train_batch_size 8 \
--do_eval \
--per_device_eval_batch_size 8 \
--num_beams 8 \
--max_source_length 512 \
--max_target_length 64 \
--evaluation_strategy steps \
--eval_steps 1000 \
--save_strategy steps \
--save_steps 2000 \
--logging_steps 1 \
--overwrite_output_dir \
--run_name $RUNID \
--output_dir $OUTDIR > $OUTDIR/run.log 2>&1
```
# Eval
```bash
#!/usr/bin/env bash
export CUDA_VISIBLE_DEVICES=3
DT=$(date +%F_%H-%M)
RUNID=pegasus-xsum-${DT}
OUTDIR=/data1/vchua/pegasus-hf4p13/pegasus-test/${RUNID}
mkdir -p $OUTDIR
nohup python run_summarization.py \
--model_name_or_path vuiseng9/pegasus-xsum \
--dataset_name xsum \
--max_source_length 512 \
--max_target_length 64 \
--do_predict \
--per_device_eval_batch_size 16 \
--predict_with_generate \
--num_beams 8 \
--overwrite_output_dir \
--run_name $RUNID \
--output_dir $OUTDIR > $OUTDIR/run.log 2>&1 &
```
Although fine-tuning was carried out for 10 epochs, this model is the checkpoint (@62000 steps, 4.9 epochs, 20 hrs) with the lowest loss during training. Test/predict with this checkpoint should give the results below.
```
***** predict metrics *****
predict_gen_len = 24.0499
predict_loss = 1.5801
predict_rouge1 = 47.2124
predict_rouge2 = 24.3673
predict_rougeL = 39.0055
predict_rougeLsum = 39.0007
predict_runtime = 0:34:23.32
predict_samples = 11334
predict_samples_per_second = 5.493
predict_steps_per_second = 0.344
``` |
vuiseng9/wav2vec2-base-100h | 70009d9636773b38763a0ec67d18b5d0be5a134e | 2022-01-27T20:03:25.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:librispeech_asr",
"transformers",
"audio",
"license:apache-2.0"
] | automatic-speech-recognition | false | vuiseng9 | null | vuiseng9/wav2vec2-base-100h | 2 | null | transformers | 24,877 | ---
language: en
datasets:
- librispeech_asr
tags:
- audio
- automatic-speech-recognition
license: apache-2.0
---
# Wav2Vec2-Base-100h
This is a fork of [```facebook/wav2vec2-base-100h```](https://huggingface.co/facebook/wav2vec2-base-100h)
### Changes & Notes
1. Documents the reproducible evaluation (below) for newer transformers and datasets versions.
2. Use a batch size of 1 to reproduce the results.
3. Validated with ```transformers v4.15.0```, ```datasets 1.18.0```
4. You may need to manually install the Python packages ```librosa``` and ```jiwer```
## Evaluation
This code snippet shows how to evaluate **facebook/wav2vec2-base-100h** on LibriSpeech's "clean" and "other" test data.
```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import soundfile as sf
import torch
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")
# librispeech_eval = load_dataset("librispeech_asr", "other", split="test")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-100h").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-100h")
def map_to_array(batch):
# speech, _ = sf.read(batch["file"])
# batch["speech"] = speech
batch["speech"] = batch['audio']['array']
return batch
librispeech_eval = librispeech_eval.map(map_to_array)
def map_to_pred(batch):
input_values = processor(batch["speech"], return_tensors="pt", padding="longest").input_values
with torch.no_grad():
logits = model(input_values.to("cuda")).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
batch["transcription"] = transcription
return batch
result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["speech"])
print("WER:", wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean/test" | "other/test" |
|--------------| ------------|
| 6.1 | 13.5 |
|
vutankiet2901/wav2vec2-large-xlsr-53-ja | 080e96b48dd0e7f4b9adbca46a2bf79af0ad823f | 2022-03-23T18:28:48.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"ja",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"common-voice",
"hf-asr-leaderboard",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | vutankiet2901 | null | vutankiet2901/wav2vec2-large-xlsr-53-ja | 2 | 1 | transformers | 24,878 | ---
license: apache-2.0
language:
- ja
tags:
- automatic-speech-recognition
- common-voice
- hf-asr-leaderboard
- ja
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xlsr-53-ja
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7.0
type: mozilla-foundation/common_voice_8_0
args: ja
metrics:
- name: Test WER (with LM)
type: wer
value: 15.37
- name: Test CER (with LM)
type: cer
value: 6.91
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8.0
type: mozilla-foundation/common_voice_8_0
args: ja
metrics:
- name: Test WER (with LM)
type: wer
value: 16.09
- name: Test CER (with LM)
type: cer
value: 7.15
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: ja
metrics:
- name: Test WER (with LM)
type: wer
value: 37.96
- name: Test CER (with LM)
type: cer
value: 21.11
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: ja
metrics:
- name: Test CER
type: cer
value: 26.02
---
## Model description
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - JA dataset.
### Benchmark WER result:
| | [COMMON VOICE 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | [COMMON VOICE 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0)
|---|---|---|
|without LM| 15.74 | 25.10 |
|with 4-grams LM| 15.37 | 16.09 |
### Benchmark CER result:
| | [COMMON VOICE 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | [COMMON VOICE 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0)
|---|---|---|
|without LM| 9.51 | 9.95 |
|with 4-grams LM| 6.91 | 7.15 |
## Evaluation
Please use the eval.py file to run the evaluation:
```bash
python eval.py --model_id vutankiet2901/wav2vec2-large-xlsr-53-ja --dataset mozilla-foundation/common_voice_7_0 --config ja --split test --log_outputs
```
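For single-file transcription without the language model, a greedy-decoding sketch (not part of the original card; it assumes the repository ships a `Wav2Vec2Processor` and that audio is resampled to 16 kHz):
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "vutankiet2901/wav2vec2-large-xlsr-53-ja"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, _ = librosa.load("sample.wav", sr=16_000)  # path is a placeholder
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```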
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 4.7776 | 4.73 | 1500 | 2.9540 | 0.9772 | 0.8489 |
| 1.9076 | 9.46 | 3000 | 0.7146 | 0.5371 | 0.2484 |
| 1.507 | 14.2 | 4500 | 0.5843 | 0.4689 | 0.2196 |
| 1.3742 | 18.93 | 6000 | 0.5286 | 0.4321 | 0.1988 |
| 1.2776 | 23.66 | 7500 | 0.5007 | 0.4056 | 0.1870 |
| 1.2003 | 28.39 | 9000 | 0.4676 | 0.3848 | 0.1802 |
| 1.1281 | 33.12 | 10500 | 0.4524 | 0.3694 | 0.1720 |
| 1.0657 | 37.85 | 12000 | 0.4449 | 0.3590 | 0.1681 |
| 1.0129 | 42.59 | 13500 | 0.4266 | 0.3423 | 0.1617 |
| 0.9691 | 47.32 | 15000 | 0.4214 | 0.3375 | 0.1587 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
vxvxx/t5-small-finetuned-no_paragraph-to-yes_paragraph-2 | 229891aeffba01febbc56d42e32ea3bb59770a9e | 2022-02-16T07:13:28.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | vxvxx | null | vxvxx/t5-small-finetuned-no_paragraph-to-yes_paragraph-2 | 2 | null | transformers | 24,879 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: t5-small-finetuned-no_paragraph-to-yes_paragraph-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-no_paragraph-to-yes_paragraph-2
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
- Bleu: 0.0
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:----:|:-------:|
| 0.006 | 1.0 | 8081 | 0.0002 | 0.0 | 19.0 |
| 0.0032 | 2.0 | 16162 | 0.0001 | 0.0 | 19.0 |
| 0.0026 | 3.0 | 24243 | 0.0001 | 0.0 | 19.0 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
w11wo/indo-gpt2-small | d5cca3adcf47fcadfb6b6f08f8bb5ad44303aed8 | 2021-05-23T13:41:42.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"id",
"dataset:wikipedia",
"transformers",
"indo-gpt2-small",
"license:mit"
] | text-generation | false | w11wo | null | w11wo/indo-gpt2-small | 2 | null | transformers | 24,880 | ---
language: id
tags:
- indo-gpt2-small
license: mit
datasets:
- wikipedia
widget:
- text: "Nama saya Budi, dari Indonesia"
---
## Indo GPT-2 Small
Indo GPT-2 Small is a language model based on the [GPT-2 model](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf). It was trained on the latest (late December 2020) Indonesian Wikipedia articles.
The model was originally HuggingFace's pretrained [English GPT-2 model](https://huggingface.co/transformers/model_doc/gpt2.html) and is later fine-tuned on the Indonesian dataset. Many of the techniques used
are based on a [notebook](https://github.com/piegu/fastai-projects/blob/master/finetuning-English-GPT2-any-language-Portuguese-HuggingFace-fastaiv2.ipynb)/[blog](https://medium.com/@pierre_guillou/faster-than-training-from-scratch-fine-tuning-the-english-gpt-2-in-any-language-with-hugging-f2ec05c98787) shared by [Pierre Guillou](https://medium.com/@pierre_guillou), where Pierre Guillou fine-tuned the English GPT-2 model on a Portuguese dataset.
Frameworks used include HuggingFace's [Transformers](https://huggingface.co/transformers) and fast.ai's [Deep Learning library](https://docs.fast.ai/). PyTorch was used as the backend framework during training, but the model remains compatible with TensorFlow nonetheless.
## Model
| Model | #params | Arch. | Training /Validation data (text) |
|-------------------|---------|-------------|---------------------------------------|
| `indo-gpt2-small` | 124M | GPT-2 Small | Indonesian Wikipedia (3.1 GB of text) |
## Evaluation Results
The model was trained for only 1 epoch and the following is the final result once the training ended.
| epoch | train loss | valid loss | perplexity | total time |
|-------|------------|------------|------------|------------|
| 0 | 2.981 | 2.936 | 18.85 | 2:45:25 |
## How to Use (PyTorch)
### Load Model and Byte-level Tokenizer
```python
from transformers import GPT2TokenizerFast, GPT2LMHeadModel
pretrained_name = "w11wo/indo-gpt2-small"
tokenizer = GPT2TokenizerFast.from_pretrained(pretrained_name)
tokenizer.model_max_length = 1024
model = GPT2LMHeadModel.from_pretrained(pretrained_name)
```
### Generate a Sequence
```python
# sample prompt
prompt = "Nama saya Budi, dari Indonesia"
input_ids = tokenizer.encode(prompt, return_tensors='pt')
model.eval()
# generate output using top-k sampling
sample_outputs = model.generate(input_ids,
pad_token_id=50256,
do_sample=True,
max_length=40,
min_length=40,
top_k=40,
num_return_sequences=1)
for i, sample_output in enumerate(sample_outputs):
print(tokenizer.decode(sample_output.tolist()))
```
## Disclaimer
Do remember that although the dataset originated from Wikipedia, the model may not always generate factual texts. Additionally, the biases which came from the Wikipedia articles may be carried over into the results of this model.
## Credits
Major thanks to Pierre Guillou for sharing his work, which did not only enable me to realize this project but also taught me tons of new, exciting stuff.
## Author
Indo GPT-2 Small was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access.
|
w11wo/javanese-bert-small-imdb | 454de0c243ef2df4f7c276a9ae5c771c2ea08ed5 | 2022-02-14T16:19:18.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"jv",
"dataset:w11wo/imdb-javanese",
"arxiv:1810.04805",
"transformers",
"javanese-bert-small-imdb",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | w11wo | null | w11wo/javanese-bert-small-imdb | 2 | null | transformers | 24,881 | ---
language: jv
tags:
- javanese-bert-small-imdb
license: mit
datasets:
- w11wo/imdb-javanese
widget:
- text: "Fast and Furious iku film sing [MASK]."
---
## Javanese BERT Small IMDB
Javanese BERT Small IMDB is a masked language model based on the [BERT model](https://arxiv.org/abs/1810.04805). It was trained on Javanese IMDB movie reviews.
The model was originally the pretrained [Javanese BERT Small model](https://huggingface.co/w11wo/javanese-bert-small) and is later fine-tuned on the Javanese IMDB movie review dataset. It achieved a perplexity of 19.87 on the validation dataset. Many of the techniques used are based on a Hugging Face tutorial [notebook](https://github.com/huggingface/notebooks/blob/master/examples/language_modeling.ipynb) written by [Sylvain Gugger](https://github.com/sgugger).
Hugging Face's `Trainer` class from the [Transformers](https://huggingface.co/transformers) library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with TensorFlow nonetheless.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
|----------------------------|----------|----------------|---------------------------------|
| `javanese-bert-small-imdb` | 110M | BERT Small | Javanese IMDB (47.5 MB of text) |
## Evaluation Results
The model was trained for 5 epochs and the following is the final result once the training ended.
| train loss | valid loss | perplexity | total time |
|------------|------------|------------|-------------|
| 3.070 | 2.989 | 19.87 | 3:12:33 |
## How to Use
### As Masked Language Model
```python
from transformers import pipeline
pretrained_name = "w11wo/javanese-bert-small-imdb"
fill_mask = pipeline(
"fill-mask",
model=pretrained_name,
tokenizer=pretrained_name
)
fill_mask("Aku mangan sate ing [MASK] bareng konco-konco")
```
### Feature Extraction in PyTorch
```python
from transformers import BertModel, BertTokenizerFast
pretrained_name = "w11wo/javanese-bert-small-imdb"
model = BertModel.from_pretrained(pretrained_name)
tokenizer = BertTokenizerFast.from_pretrained(pretrained_name)
prompt = "Indonesia minangka negara gedhe."
encoded_input = tokenizer(prompt, return_tensors='pt')
output = model(**encoded_input)
```
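`output.last_hidden_state` holds one vector per token with shape `(batch_size, sequence_length, hidden_size)`. If a single sentence vector is needed, one common choice — not prescribed by this card — is to mean-pool over the non-padding tokens, continuing from the snippet above:
```python
# mean-pool the token embeddings, ignoring padding positions
mask = encoded_input["attention_mask"].unsqueeze(-1).float()   # (1, seq_len, 1)
summed = (output.last_hidden_state * mask).sum(dim=1)          # (1, hidden_size)
sentence_embedding = summed / mask.sum(dim=1)                  # (1, hidden_size)
```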
## Disclaimer
Do consider the biases that come from the IMDB reviews, which may be carried over into the results of this model.
## Author
Javanese BERT Small was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access.
## Citation
If you use any of our models in your research, please cite:
```bib
@inproceedings{wongso2021causal,
title={Causal and Masked Language Modeling of Javanese Language using Transformer-based Architectures},
author={Wongso, Wilson and Setiawan, David Samuel and Suhartono, Derwin},
booktitle={2021 International Conference on Advanced Computer Science and Information Systems (ICACSIS)},
pages={1--7},
year={2021},
organization={IEEE}
}
```
|
w11wo/sundanese-roberta-base-emotion-classifier | 4cc2e9561a5324f5a670b4a04921504e9ec06220 | 2022-02-26T13:15:29.000Z | [
"pytorch",
"tf",
"roberta",
"text-classification",
"su",
"arxiv:1907.11692",
"transformers",
"sundanese-roberta-base-emotion-classifier",
"license:mit"
] | text-classification | false | w11wo | null | w11wo/sundanese-roberta-base-emotion-classifier | 2 | null | transformers | 24,882 | ---
language: su
tags:
- sundanese-roberta-base-emotion-classifier
license: mit
widget:
- text: "Wah, éta gélo, keren pisan!"
---
## Sundanese RoBERTa Base Emotion Classifier
Sundanese RoBERTa Base Emotion Classifier is an emotion-text-classification model based on the [RoBERTa](https://arxiv.org/abs/1907.11692) model. The model was originally the pre-trained [Sundanese RoBERTa Base](https://hf.co/w11wo/sundanese-roberta-base) model, which is then fine-tuned on the [Sundanese Twitter dataset](https://github.com/virgantara/sundanese-twitter-dataset), consisting of Sundanese tweets.
10% of the dataset is kept for evaluation purposes. After training, the model achieved an evaluation accuracy of 98.41% and F1-macro of 98.43%.
Hugging Face's `Trainer` class from the [Transformers](https://huggingface.co/transformers) library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with other frameworks nonetheless.
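As a rough illustration of how the accuracy, F1, precision, and recall figures below are typically produced with `Trainer` — a sketch only; this card does not specify the metric implementation, and the use of `scikit-learn` here is an assumption — a `compute_metrics` hook might look like this:
```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

def compute_metrics(eval_pred):
    """Turn Trainer's (logits, labels) pair into macro-averaged classification metrics."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds, average="macro"),
        "precision": precision_score(labels, preds, average="macro"),
        "recall": recall_score(labels, preds, average="macro"),
    }
```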
## Model
| Model | #params | Arch. | Training/Validation data (text) |
| ------------------------------------------- | ------- | ------------ | ------------------------------- |
| `sundanese-roberta-base-emotion-classifier` | 124M | RoBERTa Base | Sundanese Twitter dataset |
## Evaluation Results
The model was trained for 10 epochs and the best model was loaded at the end.
| Epoch | Training Loss | Validation Loss | Accuracy | F1 | Precision | Recall |
| ----- | ------------- | --------------- | -------- | -------- | --------- | -------- |
| 1 | 0.801800 | 0.293695 | 0.900794 | 0.899048 | 0.903466 | 0.900406 |
| 2 | 0.208700 | 0.185291 | 0.936508 | 0.935520 | 0.939460 | 0.935540 |
| 3 | 0.089700 | 0.150287 | 0.956349 | 0.956569 | 0.956500 | 0.958612 |
| 4 | 0.025600 | 0.130889 | 0.972222 | 0.972865 | 0.973029 | 0.973184 |
| 5 | 0.002200 | 0.100031 | 0.980159 | 0.980430 | 0.980430 | 0.980430 |
| 6 | 0.001300 | 0.104971 | 0.980159 | 0.980430 | 0.980430 | 0.980430 |
| 7 | 0.000600 | 0.107744 | 0.980159 | 0.980174 | 0.980814 | 0.979743 |
| 8 | 0.000500 | 0.102327 | 0.980159 | 0.980171 | 0.979970 | 0.980430 |
| 9 | 0.000500 | 0.101935 | 0.984127 | 0.984376 | 0.984073 | 0.984741 |
| 10 | 0.000400 | 0.105965 | 0.984127 | 0.984142 | 0.983720 | 0.984741 |
## How to Use
### As Text Classifier
```python
from transformers import pipeline
pretrained_name = "sundanese-roberta-base-emotion-classifier"
nlp = pipeline(
"sentiment-analysis",
model=pretrained_name,
tokenizer=pretrained_name
)
nlp("Wah, éta gélo, keren pisan!")
```
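The pipeline returns a list of `{'label': ..., 'score': ...}` dictionaries with the highest-scoring emotion for each input; the exact label names come from the model's configuration.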
## Disclaimer
Do consider the biases that come from both the pre-trained RoBERTa model and the Sundanese Twitter dataset, as they may be carried over into the results of this model.
## Author
Sundanese RoBERTa Base Emotion Classifier was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access.
## Citation Information
```bib
@article{rs-907893,
author = {Wongso, Wilson
and Lucky, Henry
and Suhartono, Derwin},
journal = {Journal of Big Data},
year = {2022},
month = {Feb},
day = {26},
abstract = {The Sundanese language has over 32 million speakers worldwide, but the language has reaped little to no benefits from the recent advances in natural language understanding. Like other low-resource languages, the only alternative is to fine-tune existing multilingual models. In this paper, we pre-trained three monolingual Transformer-based language models on Sundanese data. When evaluated on a downstream text classification task, we found that most of our monolingual models outperformed larger multilingual models despite the smaller overall pre-training data. In the subsequent analyses, our models benefited strongly from the Sundanese pre-training corpus size and do not exhibit socially biased behavior. We released our models for other researchers and practitioners to use.},
issn = {2693-5015},
doi = {10.21203/rs.3.rs-907893/v1},
url = {https://doi.org/10.21203/rs.3.rs-907893/v1}
}
``` |
walkacross/my-awesome-model | 0b860f6829f49f7175e9d2cceae834b43101551c | 2021-08-13T04:25:32.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | walkacross | null | walkacross/my-awesome-model | 2 | null | transformers | 24,883 | Entry not found |
wbmitcast/bert_finetuning_test_0925 | 2f31c81c33941f4e40c403be9cff53168e1fcfa8 | 2021-09-25T01:43:13.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | wbmitcast | null | wbmitcast/bert_finetuning_test_0925 | 2 | null | transformers | 24,884 | Entry not found |
wesam266/wav2vec2-xls-r-300m_english | c76993126157ad1a55c1deabc0c1d5b0fc255c34 | 2022-01-22T20:33:24.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | wesam266 | null | wesam266/wav2vec2-xls-r-300m_english | 2 | null | transformers | 24,885 | Entry not found |
willemjan/gado_gado | ca78f343c0c21fc50c2e4b815fffb22c241fd718 | 2021-05-26T11:03:16.000Z | [
"pytorch"
] | null | false | willemjan | null | willemjan/gado_gado | 2 | null | null | 24,886 | Entry not found |
wilsoncwc/dontpatronizeme | d934a7e21c13df88c7ee79c40d49892c802db243 | 2022-02-09T14:58:45.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | wilsoncwc | null | wilsoncwc/dontpatronizeme | 2 | null | transformers | 24,887 | Entry not found |
wisdomify/wisdomify | d8fabaf0dff45e4a7a54042e010caffeffdb732d | 2021-09-22T10:34:48.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | wisdomify | null | wisdomify/wisdomify | 2 | null | transformers | 24,888 | test |
wudi7758521521/model_ankai | 44acc6376c95c50d2592dab9680a86a9e3544635 | 2021-07-30T04:25:28.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | wudi7758521521 | null | wudi7758521521/model_ankai | 2 | null | transformers | 24,889 | Entry not found |
xhluca/tapas-nq-hn-retriever-medium-1 | a1b5a084f3c760b1a97063704b2ce78a401a842f | 2022-02-10T02:45:54.000Z | [
"pytorch",
"tapas",
"feature-extraction",
"transformers"
] | feature-extraction | false | xhluca | null | xhluca/tapas-nq-hn-retriever-medium-1 | 2 | null | transformers | 24,890 | Entry not found |
xhyi/distilLED1_08_31_2021_v3 | bd95910b3071b6f1ac773bbe7eed6efa170962d8 | 2021-09-02T01:41:23.000Z | [
"pytorch",
"led",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | xhyi | null | xhyi/distilLED1_08_31_2021_v3 | 2 | null | transformers | 24,891 | | Step | Training Loss | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|------|---------------|-----------------|------------------|---------------|-----------------|
| 240  | 2.513600      | 3.049892        | 0.082800         | 0.102600      | 0.085700        |

240 steps |
yazdipour/sparql-qald9-t5-base-2021-10-19_00-15 | cdf9eccce22e52f8d883c3a1264abd8208357e90 | 2021-10-19T00:37:58.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | yazdipour | null | yazdipour/sparql-qald9-t5-base-2021-10-19_00-15 | 2 | null | transformers | 24,892 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: sparql-qald9-t5-base-2021-10-19_00-15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sparql-qald9-t5-base-2021-10-19_00-15
This model is a fine-tuned version of [yazdipour/text-to-sparql-t5-base-2021-10-18_16-15](https://huggingface.co/yazdipour/text-to-sparql-t5-base-2021-10-18_16-15) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | P | R | F1 | Bleu-score | Bleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:------:|:----------:|:-----------------------------------------------------------------------------:|:-------:|
| No log | 1.0 | 51 | 1.8998 | 19.0 | 0.3634 | 0.0387 | 0.1963 | 9.9428 | [71.94645844952593, 49.30006086427267, 35.36503683858004, 28.145941921072225] | 0.2294 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
yazdipour/text-to-sparql-t5-base-qald9 | a7e342f37e85c9791a29c70445d069002a0fccbc | 2021-10-19T23:25:20.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | yazdipour | null | yazdipour/text-to-sparql-t5-base-qald9 | 2 | null | transformers | 24,893 | ---
tags:
- generated_from_trainer
model-index:
- name: sparql-qald9-t5-base-2021-10-19_23-02
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sparql-qald9-t5-base-2021-10-19_23-02
This model is a fine-tuned version of [yazdipour/text-to-sparql-t5-base-2021-10-19_15-35_lastDS](https://huggingface.co/yazdipour/text-to-sparql-t5-base-2021-10-19_15-35_lastDS) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | P | R | F1 | Bleu-score | Bleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:------:|:----------:|:-----------------------------------------------------------------------------:|:-------:|
| No log | 1.0 | 51 | 1.8300 | 19.0 | 0.3640 | 0.0346 | 0.1943 | 10.0358 | [72.88988261598658, 50.27455765710799, 35.93015446608462, 28.454070201643017] | 0.2281 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
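Although the card itself does not include a usage example, the checkpoint can presumably be queried as a standard T5 sequence-to-sequence model; a hedged sketch (the sample question and the prompt format the model expects are assumptions, not taken from this card):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "yazdipour/text-to-sparql-t5-base-qald9"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

question = "Who is the mayor of Berlin?"  # made-up example question
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```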
|
yerevann/x-r-hy | dcf85100abbeb865eec31d06bd70ed3d39f8285d | 2021-12-19T03:19:04.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | yerevann | null | yerevann/x-r-hy | 2 | null | transformers | 24,894 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-2b-armenian-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-2b-armenian-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-2b](https://huggingface.co/facebook/wav2vec2-xls-r-2b) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5166
- Wer: 0.7397
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 120
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 3.7057 | 2.38 | 200 | 0.7731 | 0.8091 |
| 0.5797 | 4.76 | 400 | 0.8279 | 0.7804 |
| 0.4341 | 7.14 | 600 | 1.0343 | 0.8285 |
| 0.3135 | 9.52 | 800 | 1.0551 | 0.8066 |
| 0.2409 | 11.9 | 1000 | 1.0686 | 0.7897 |
| 0.1998 | 14.29 | 1200 | 1.1329 | 0.7766 |
| 0.1729 | 16.67 | 1400 | 1.3234 | 0.8567 |
| 0.1533 | 19.05 | 1600 | 1.2432 | 0.8160 |
| 0.1354 | 21.43 | 1800 | 1.2780 | 0.7954 |
| 0.12 | 23.81 | 2000 | 1.2228 | 0.8054 |
| 0.1175 | 26.19 | 2200 | 1.3484 | 0.8129 |
| 0.1141 | 28.57 | 2400 | 1.2881 | 0.9130 |
| 0.1053 | 30.95 | 2600 | 1.1972 | 0.7910 |
| 0.0954 | 33.33 | 2800 | 1.3702 | 0.8048 |
| 0.0842 | 35.71 | 3000 | 1.3963 | 0.7960 |
| 0.0793 | 38.1 | 3200 | 1.4690 | 0.7991 |
| 0.0707 | 40.48 | 3400 | 1.5045 | 0.8085 |
| 0.0745 | 42.86 | 3600 | 1.4749 | 0.8004 |
| 0.0693 | 45.24 | 3800 | 1.5047 | 0.7960 |
| 0.0646 | 47.62 | 4000 | 1.4216 | 0.7997 |
| 0.0555 | 50.0 | 4200 | 1.4676 | 0.8029 |
| 0.056 | 52.38 | 4400 | 1.4273 | 0.8104 |
| 0.0465 | 54.76 | 4600 | 1.3999 | 0.7841 |
| 0.046 | 57.14 | 4800 | 1.6130 | 0.8473 |
| 0.0404 | 59.52 | 5000 | 1.5586 | 0.7841 |
| 0.0403 | 61.9 | 5200 | 1.3959 | 0.7653 |
| 0.0404 | 64.29 | 5400 | 1.5318 | 0.8041 |
| 0.0365 | 66.67 | 5600 | 1.5300 | 0.7854 |
| 0.0338 | 69.05 | 5800 | 1.5051 | 0.7885 |
| 0.0307 | 71.43 | 6000 | 1.5647 | 0.7935 |
| 0.0235 | 73.81 | 6200 | 1.4919 | 0.8154 |
| 0.0268 | 76.19 | 6400 | 1.5259 | 0.8060 |
| 0.0275 | 78.57 | 6600 | 1.3985 | 0.7897 |
| 0.022 | 80.95 | 6800 | 1.5515 | 0.8154 |
| 0.017 | 83.33 | 7000 | 1.5737 | 0.7647 |
| 0.0205 | 85.71 | 7200 | 1.4876 | 0.7572 |
| 0.0174 | 88.1 | 7400 | 1.6331 | 0.7829 |
| 0.0188 | 90.48 | 7600 | 1.5108 | 0.7685 |
| 0.0134 | 92.86 | 7800 | 1.7125 | 0.7866 |
| 0.0125 | 95.24 | 8000 | 1.6042 | 0.7635 |
| 0.0133 | 97.62 | 8200 | 1.4608 | 0.7478 |
| 0.0272 | 100.0 | 8400 | 1.4784 | 0.7309 |
| 0.0133 | 102.38 | 8600 | 1.4471 | 0.7459 |
| 0.0094 | 104.76 | 8800 | 1.4852 | 0.7272 |
| 0.0103 | 107.14 | 9000 | 1.5679 | 0.7409 |
| 0.0088 | 109.52 | 9200 | 1.5090 | 0.7309 |
| 0.0077 | 111.9 | 9400 | 1.4994 | 0.7290 |
| 0.0068 | 114.29 | 9600 | 1.5008 | 0.7340 |
| 0.0054 | 116.67 | 9800 | 1.5166 | 0.7390 |
| 0.0052 | 119.05 | 10000 | 1.5166 | 0.7397 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
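The card does not show inference code; a rough sketch of transcribing an audio clip with this checkpoint — the file path, the resampling step, and the availability of a bundled `Wav2Vec2Processor` are assumptions — might look like:
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_name = "yerevann/x-r-hy"
processor = Wav2Vec2Processor.from_pretrained(model_name)
model = Wav2Vec2ForCTC.from_pretrained(model_name)

# "sample.wav" is a placeholder path; wav2vec2 expects 16 kHz mono audio
waveform, sample_rate = torchaudio.load("sample.wav")
speech = torchaudio.functional.resample(waveform.mean(dim=0), sample_rate, 16_000)

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```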
|
yliu337/sliding_window_token_both_ctx | 5b3313761ff6d78739da5a71ac93f86c72f9b1f1 | 2021-09-26T02:53:54.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | yliu337 | null | yliu337/sliding_window_token_both_ctx | 2 | null | transformers | 24,895 | Note: no filter |
yoshitomo-matsubara/bert-base-uncased-cola_from_bert-large-uncased-cola | daa9fda4e73eef54bb9b21fa630a7cdc844c382b | 2021-06-03T05:00:03.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:cola",
"transformers",
"cola",
"glue",
"kd",
"torchdistill",
"license:apache-2.0"
] | text-classification | false | yoshitomo-matsubara | null | yoshitomo-matsubara/bert-base-uncased-cola_from_bert-large-uncased-cola | 2 | null | transformers | 24,896 | ---
language: en
tags:
- bert
- cola
- glue
- kd
- torchdistill
license: apache-2.0
datasets:
- cola
metrics:
- matthews correlation
---
`bert-base-uncased` fine-tuned on CoLA dataset, using fine-tuned `bert-large-uncased` as a teacher model, [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_kd_and_submission.ipynb) for knowledge distillation.
The training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/cola/kd/bert_base_uncased_from_bert_large_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **78.9**.
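For readers unfamiliar with the setup, knowledge distillation here means training the smaller student against the larger fine-tuned teacher. The snippet below is a generic illustration of a distillation loss in plain PyTorch — not torchdistill's actual configuration or API, and the temperature and weighting are assumptions:
```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=4.0, alpha=0.5):
    """Blend the usual cross-entropy with a soft-target KL term from the teacher."""
    soft_targets = F.log_softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd = F.kl_div(soft_student, soft_targets, reduction="batchmean", log_target=True)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * ce + (1.0 - alpha) * kd * (temperature ** 2)
```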
|
yoshitomo-matsubara/bert-large-uncased-cola | 9fd912f70d5b0dfc74cb3c7833dd46e868bb3d16 | 2021-05-29T21:32:06.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:cola",
"transformers",
"cola",
"glue",
"torchdistill",
"license:apache-2.0"
] | text-classification | false | yoshitomo-matsubara | null | yoshitomo-matsubara/bert-large-uncased-cola | 2 | null | transformers | 24,897 | ---
language: en
tags:
- bert
- cola
- glue
- torchdistill
license: apache-2.0
datasets:
- cola
metrics:
- matthews correlation
---
`bert-large-uncased` fine-tuned on CoLA dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb).
The hyperparameters are the same as those in Hugging Face's example and/or the BERT paper, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/cola/ce/bert_large_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **80.2**.
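CoLA is scored with Matthews correlation; as a quick, self-contained illustration (using `scikit-learn`, which this card does not mention, and toy data rather than real predictions), the metric can be computed from predictions and gold labels like this:
```python
from sklearn.metrics import matthews_corrcoef

# toy binary acceptability judgments: 1 = acceptable, 0 = unacceptable
gold = [1, 1, 0, 1, 0, 0, 1, 0]
pred = [1, 0, 0, 1, 0, 1, 1, 0]
print(matthews_corrcoef(gold, pred))  # 0.5 for this toy example
```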
|
youzanai/bert-product-comment-chinese | 352a6c7f9e04168ae7029d4d156c92706da85bf6 | 2022-03-21T02:42:40.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | youzanai | null | youzanai/bert-product-comment-chinese | 2 | 2 | transformers | 24,898 | A BERT model trained on Youzan's product review corpus.
For example code using the model, see https://github.com/youzanai/trexpark |
ytlin/1klqb7u9_35 | a4bfc53fd00ead57eef8e49352258690e876ff33 | 2021-05-23T13:48:32.000Z | [
"pytorch",
"gpt2",
"transformers"
] | null | false | ytlin | null | ytlin/1klqb7u9_35 | 2 | null | transformers | 24,899 | Entry not found |