modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ji-xin/roberta_base-QQP-two_stage | 1f48ad1677737c7119affc8b0fc358e15d52fa08 | 2020-07-08T15:07:16.000Z | [
"pytorch",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | ji-xin | null | ji-xin/roberta_base-QQP-two_stage | 2 | null | transformers | 24,300 | Entry not found |
ji-xin/roberta_base-RTE-two_stage | d4b01b4bb75fe84ca175e2ad35090791ce076022 | 2020-07-08T15:08:42.000Z | [
"pytorch",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | ji-xin | null | ji-xin/roberta_base-RTE-two_stage | 2 | null | transformers | 24,301 | Entry not found |
ji-xin/roberta_large-SST2-two_stage | 9a67b2c966ed18e72acda5ee4c835893832d42a3 | 2020-07-07T20:25:04.000Z | [
"pytorch",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | ji-xin | null | ji-xin/roberta_large-SST2-two_stage | 2 | null | transformers | 24,302 | Entry not found |
jiho0304/curseELECTRA | ee72aa7df1f77b72626d63d5c7f8c8db7c8d2490 | 2021-12-21T08:51:53.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
] | text-classification | false | jiho0304 | null | jiho0304/curseELECTRA | 2 | null | transformers | 24,303 | An ELECTRA model fine-tuned on a Korean bad-speech (curse) dataset |
jihopark/colloquialV2 | 7a3b0e98c67e7360e813fcf114ed9f7e30643473 | 2021-05-23T05:55:26.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | jihopark | null | jihopark/colloquialV2 | 2 | null | transformers | 24,304 | Entry not found |
jimmyliao/distilbert-base-uncased-finetuned-cola | 63074248caaecfdc20bd2e3adc2a93d68c0f5291 | 2021-12-11T01:27:10.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | jimmyliao | null | jimmyliao/distilbert-base-uncased-finetuned-cola | 2 | null | transformers | 24,305 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.541356878970505
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8394
- Matthews Correlation: 0.5414
## Model description
More information needed
## Intended uses & limitations
More information needed
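A minimal usage sketch (standard `transformers` pipeline usage; the example sentence is made up):

```python
from transformers import pipeline

# Load the fine-tuned CoLA checkpoint from the Hub
classifier = pipeline(
    "text-classification",
    model="jimmyliao/distilbert-base-uncased-finetuned-cola",
)

# CoLA is a binary linguistic-acceptability task
print(classifier("The book was written by John."))
```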
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5259 | 1.0 | 535 | 0.5429 | 0.4064 |
| 0.342 | 2.0 | 1070 | 0.5270 | 0.5081 |
| 0.234 | 3.0 | 1605 | 0.6115 | 0.5268 |
| 0.1703 | 4.0 | 2140 | 0.7344 | 0.5387 |
| 0.1283 | 5.0 | 2675 | 0.8394 | 0.5414 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.8.0+cpu
- Datasets 1.16.1
- Tokenizers 0.10.3
|
jimregan/electra-base-irish-cased-discriminator-v1-finetuned-ner | 95c2b8636fa429e40ef79aeb203dda03a4231aa6 | 2021-12-01T20:37:45.000Z | [
"pytorch",
"tensorboard",
"electra",
"token-classification",
"ga",
"dataset:wikiann",
"transformers",
"generated_from_trainer",
"irish",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | jimregan | null | jimregan/electra-base-irish-cased-discriminator-v1-finetuned-ner | 2 | null | transformers | 24,306 | ---
license: apache-2.0
language: ga
tags:
- generated_from_trainer
- irish
datasets:
- wikiann
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: electra-base-irish-cased-discriminator-v1-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wikiann
type: wikiann
args: ga
metrics:
- name: Precision
type: precision
value: 0.5413922859830668
- name: Recall
type: recall
value: 0.5161434977578475
- name: F1
type: f1
value: 0.5284664830119375
- name: Accuracy
type: accuracy
value: 0.8419817960026273
widget:
- text: "Saolaíodh Pádraic Ó Conaire i nGaillimh sa bhliain 1882."
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-base-irish-cased-discriminator-v1-finetuned-ner
This model is a fine-tuned version of [DCU-NLP/electra-base-irish-cased-generator-v1](https://huggingface.co/DCU-NLP/electra-base-irish-cased-generator-v1) on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6654
- Precision: 0.5414
- Recall: 0.5161
- F1: 0.5285
- Accuracy: 0.8420
## Model description
More information needed
## Intended uses & limitations
More information needed
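A minimal usage sketch, reusing the widget sentence from this card (the aggregation strategy is an assumption):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="jimregan/electra-base-irish-cased-discriminator-v1-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)

print(ner("Saolaíodh Pádraic Ó Conaire i nGaillimh sa bhliain 1882."))
```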
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 63 | 1.3231 | 0.1046 | 0.0417 | 0.0596 | 0.5449 |
| No log | 2.0 | 126 | 0.9710 | 0.3879 | 0.3359 | 0.3600 | 0.7486 |
| No log | 3.0 | 189 | 0.7723 | 0.4713 | 0.4457 | 0.4582 | 0.8152 |
| No log | 4.0 | 252 | 0.6892 | 0.5257 | 0.4910 | 0.5078 | 0.8347 |
| No log | 5.0 | 315 | 0.6654 | 0.5414 | 0.5161 | 0.5285 | 0.8420 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
jinbbong/esg-electra-kor-v2 | e9de9ca5336fdaeb306dcfcd6f4fcb249fd976d4 | 2021-08-15T08:57:31.000Z | [
"pytorch",
"electra",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | jinbbong | null | jinbbong/esg-electra-kor-v2 | 2 | null | transformers | 24,307 | Entry not found |
jinbbong/kobart-esg-e5-b32-v2 | 5f75adad5bb5927c30edcc731dd1e3676d0a7601 | 2021-11-02T05:03:06.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | jinbbong | null | jinbbong/kobart-esg-e5-b32-v2 | 2 | null | transformers | 24,308 | Entry not found |
jinbbong/kobert-esg-e5-b32-v2 | 5101027ebd11da594f969a9a24e3e7f7dbf66ecb | 2021-09-27T03:27:08.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | jinbbong | null | jinbbong/kobert-esg-e5-b32-v2 | 2 | null | transformers | 24,309 | Entry not found |
jinmang2/klue-roberta-large-bt-tapt | 47c0e721d8069ae6b7011c2e2b7944af21d69b71 | 2021-07-20T07:38:45.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | jinmang2 | null | jinmang2/klue-roberta-large-bt-tapt | 2 | null | transformers | 24,310 | Entry not found |
jinmang2/pororo-roberta-base-mrc | 1f878a8bafaf3e147005b4747ef3335aeb54db91 | 2021-10-31T15:47:32.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | jinmang2 | null | jinmang2/pororo-roberta-base-mrc | 2 | null | transformers | 24,311 | Entry not found |
jiobiala24/wav2vec2-base-checkpoint-5 | 7802124dbda8224f09671367d62cbb8a2d622128 | 2022-01-16T10:56:18.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jiobiala24 | null | jiobiala24/wav2vec2-base-checkpoint-5 | 2 | null | transformers | 24,312 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-base-checkpoint-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-checkpoint-5
This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-4](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-4) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9849
- Wer: 0.3354
## Model description
More information needed
## Intended uses & limitations
More information needed
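A minimal transcription sketch (standard `transformers` pipeline usage; `sample.wav` is a placeholder for a 16 kHz mono recording):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jiobiala24/wav2vec2-base-checkpoint-5",
)

# The pipeline handles feature extraction and CTC decoding internally
print(asr("sample.wav")["text"])
```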
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.3947 | 1.96 | 1000 | 0.5749 | 0.3597 |
| 0.2856 | 3.93 | 2000 | 0.6212 | 0.3479 |
| 0.221 | 5.89 | 3000 | 0.6280 | 0.3502 |
| 0.1755 | 7.86 | 4000 | 0.6517 | 0.3526 |
| 0.1452 | 9.82 | 5000 | 0.7115 | 0.3481 |
| 0.1256 | 11.79 | 6000 | 0.7687 | 0.3509 |
| 0.1117 | 13.75 | 7000 | 0.7785 | 0.3490 |
| 0.0983 | 15.72 | 8000 | 0.8115 | 0.3442 |
| 0.0877 | 17.68 | 9000 | 0.8290 | 0.3429 |
| 0.0799 | 19.65 | 10000 | 0.8517 | 0.3412 |
| 0.0733 | 21.61 | 11000 | 0.9370 | 0.3448 |
| 0.066 | 23.58 | 12000 | 0.9157 | 0.3410 |
| 0.0623 | 25.54 | 13000 | 0.9673 | 0.3377 |
| 0.0583 | 27.5 | 14000 | 0.9804 | 0.3348 |
| 0.0544 | 29.47 | 15000 | 0.9849 | 0.3354 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
jiobiala24/wav2vec2-base-checkpoint-6 | 55d3858ec4c3c0bd227715180eafab775eb47b31 | 2022-01-17T14:22:20.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jiobiala24 | null | jiobiala24/wav2vec2-base-checkpoint-6 | 2 | null | transformers | 24,313 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-base-checkpoint-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-checkpoint-6
This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-5](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-5) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9738
- Wer: 0.3323
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.3435 | 1.82 | 1000 | 0.5637 | 0.3419 |
| 0.2599 | 3.65 | 2000 | 0.5804 | 0.3473 |
| 0.2043 | 5.47 | 3000 | 0.6481 | 0.3474 |
| 0.1651 | 7.3 | 4000 | 0.6937 | 0.3452 |
| 0.1376 | 9.12 | 5000 | 0.7221 | 0.3429 |
| 0.118 | 10.95 | 6000 | 0.7634 | 0.3441 |
| 0.105 | 12.77 | 7000 | 0.7789 | 0.3444 |
| 0.0925 | 14.6 | 8000 | 0.8209 | 0.3444 |
| 0.0863 | 16.42 | 9000 | 0.8293 | 0.3440 |
| 0.0756 | 18.25 | 10000 | 0.8553 | 0.3412 |
| 0.0718 | 20.07 | 11000 | 0.9006 | 0.3430 |
| 0.0654 | 21.9 | 12000 | 0.9541 | 0.3458 |
| 0.0605 | 23.72 | 13000 | 0.9400 | 0.3350 |
| 0.0552 | 25.55 | 14000 | 0.9547 | 0.3363 |
| 0.0543 | 27.37 | 15000 | 0.9715 | 0.3348 |
| 0.0493 | 29.2 | 16000 | 0.9738 | 0.3323 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
jiobiala24/wav2vec2-base-checkpoint-7.1 | 2237dbe1e9a821284d5dfb8342c1823b41322b73 | 2022-01-21T15:50:15.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jiobiala24 | null | jiobiala24/wav2vec2-base-checkpoint-7.1 | 2 | null | transformers | 24,314 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-base-checkpoint-7.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-checkpoint-7.1
This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-6](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-6) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9369
- Wer: 0.3243
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.3124 | 1.75 | 1000 | 0.5602 | 0.3403 |
| 0.2428 | 3.5 | 2000 | 0.5924 | 0.3431 |
| 0.1884 | 5.24 | 3000 | 0.6161 | 0.3423 |
| 0.1557 | 6.99 | 4000 | 0.6570 | 0.3415 |
| 0.1298 | 8.74 | 5000 | 0.6837 | 0.3446 |
| 0.1141 | 10.49 | 6000 | 0.7304 | 0.3396 |
| 0.1031 | 12.24 | 7000 | 0.7264 | 0.3410 |
| 0.0916 | 13.99 | 8000 | 0.7229 | 0.3387 |
| 0.0835 | 15.73 | 9000 | 0.8078 | 0.3458 |
| 0.0761 | 17.48 | 10000 | 0.8304 | 0.3408 |
| 0.0693 | 19.23 | 11000 | 0.8290 | 0.3387 |
| 0.0646 | 20.98 | 12000 | 0.8593 | 0.3372 |
| 0.0605 | 22.73 | 13000 | 0.8728 | 0.3345 |
| 0.0576 | 24.48 | 14000 | 0.9111 | 0.3297 |
| 0.0529 | 26.22 | 15000 | 0.9247 | 0.3273 |
| 0.0492 | 27.97 | 16000 | 0.9248 | 0.3250 |
| 0.0472 | 29.72 | 17000 | 0.9369 | 0.3243 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
jnz/electra-ka-anti-opo | 17979d53c450f29573c5df000919e39e5b31fdd8 | 2021-03-30T14:04:36.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
] | text-classification | false | jnz | null | jnz/electra-ka-anti-opo | 2 | null | transformers | 24,315 | Entry not found |
joaomiguel26/xlm-roberta-6-final | 0be2f4d29b869f61f05387241c076fa090497718 | 2021-12-06T16:19:53.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | joaomiguel26 | null | joaomiguel26/xlm-roberta-6-final | 2 | null | transformers | 24,316 | Entry not found |
joaomiguel26/xlm-roberta-7-final | 2b19d3128d86588484537db2d379dca82704a73b | 2021-12-06T16:09:34.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | joaomiguel26 | null | joaomiguel26/xlm-roberta-7-final | 2 | null | transformers | 24,317 | Entry not found |
joaomiguel26/xlm-roberta-8-final | c5efd133317cada65ee68780ca6839e5cbc9c6af | 2021-12-06T16:22:42.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | joaomiguel26 | null | joaomiguel26/xlm-roberta-8-final | 2 | null | transformers | 24,318 | Entry not found |
joe8zhang/dummy-model3 | fc05349ab8e27caaaac71b66f05a6a1fa329bef6 | 2021-06-24T01:08:51.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | joe8zhang | null | joe8zhang/dummy-model3 | 2 | null | transformers | 24,319 | Entry not found |
jogonba2/bart-JES-cnn_dailymail | 0a5f13af42e3e74143891db3925d44c5fd08d485 | 2021-10-14T02:00:37.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | jogonba2 | null | jogonba2/bart-JES-cnn_dailymail | 2 | null | transformers | 24,320 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-JES-cnn_dailymail
results:
- task:
name: Summarization
type: summarization
metrics:
- name: Rouge1
type: rouge
value: 43.9753
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-JES-cnn_dailymail
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1452
- Rouge1: 43.9753
- Rouge2: 19.7191
- Rougel: 33.6236
- Rougelsum: 41.1683
- Gen Len: 80.1767
## Model description
More information needed
## Intended uses & limitations
More information needed
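A minimal summarization sketch (standard `transformers` pipeline usage; the article text is a placeholder):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="jogonba2/bart-JES-cnn_dailymail")

article = "..."  # placeholder: any English news article
print(summarizer(article, max_length=80, min_length=20)[0]["summary_text"])
```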
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 1.2949 | 1.0 | 71779 | 1.2080 | 11.7171 | 3.3284 | 11.3209 | 11.4022 | 20.0 |
| 1.191 | 2.0 | 143558 | 1.1615 | 11.8484 | 3.363 | 11.4175 | 11.5037 | 20.0 |
| 1.0907 | 3.0 | 215337 | 1.1452 | 12.6221 | 3.773 | 12.1226 | 12.2359 | 20.0 |
| 0.9798 | 4.0 | 287116 | 1.1670 | 12.4306 | 3.7329 | 11.9497 | 12.0617 | 20.0 |
| 0.9112 | 5.0 | 358895 | 1.1667 | 12.5404 | 3.7842 | 12.0541 | 12.1643 | 20.0 |
| 0.8358 | 6.0 | 430674 | 1.1997 | 12.5153 | 3.778 | 12.0382 | 12.1332 | 20.0 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.7.1+cu110
- Datasets 1.11.0
- Tokenizers 0.10.3
|
joheras/Mapi | 9010147f20895ebe1da4b834309bd6e8468f5a51 | 2021-07-01T06:09:46.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | joheras | null | joheras/Mapi | 2 | null | transformers | 24,321 | Entry not found |
johnpaulbin/gpt2-skript-80 | 623dad9695dd12173316b2b6dc9873af9fac13ee | 2021-07-16T05:43:37.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | johnpaulbin | null | johnpaulbin/gpt2-skript-80 | 2 | null | transformers | 24,322 | GPT-2 for the Minecraft plugin Skript (80,000 lines, <3 GB; a GPT-2 Large fine-tune)
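A minimal generation sketch (standard `transformers` pipeline usage; the prompt is a hypothetical Skript-style snippet, not taken from the training data):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="johnpaulbin/gpt2-skript-80")

# Hypothetical Skript-style prompt; adjust to your plugin's syntax
print(generator("command /heal:", max_length=60)[0]["generated_text"])
```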
Inferencing Colab: https://colab.research.google.com/drive/1uTAPLa1tuNXFpG0qVLSseMro6iU9-xNc |
jonatasgrosman/bartuque-bart-base-pretrained-mm-2 | 2597247e47519901d59f9b6f9c6899e635775113 | 2021-02-25T23:03:55.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | jonatasgrosman | null | jonatasgrosman/bartuque-bart-base-pretrained-mm-2 | 2 | null | transformers | 24,323 | Just a test
|
jonatasgrosman/bartuque-bart-base-pretrained-r-2 | 6672170c95ca3066c0534792e1aeb4af790c086e | 2021-02-04T00:25:56.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | jonatasgrosman | null | jonatasgrosman/bartuque-bart-base-pretrained-r-2 | 2 | null | transformers | 24,324 | Just a test
|
jonatasgrosman/bartuque-bart-base-random-r-2 | 9a3343ebf4ec5cc9bfbefbd5e9ab797fafc26ecf | 2021-02-04T00:27:11.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | jonatasgrosman | null | jonatasgrosman/bartuque-bart-base-random-r-2 | 2 | null | transformers | 24,325 | Just a test
|
jonfd/electra-small-is-no | 191173e9f29ff06dcea78405b884d171faa2b3fd | 2022-01-31T23:41:45.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"is",
"no",
"dataset:igc",
"dataset:ic3",
"dataset:jonfd/ICC",
"dataset:mc4",
"transformers",
"license:cc-by-4.0"
] | null | false | jonfd | null | jonfd/electra-small-is-no | 2 | null | transformers | 24,326 | ---
language:
- is
- no
license: cc-by-4.0
datasets:
- igc
- ic3
- jonfd/ICC
- mc4
---
# Icelandic-Norwegian ELECTRA-Small
This model was pretrained on the following corpora:
* The [Icelandic Gigaword Corpus](http://igc.arnastofnun.is/) (IGC)
* The Icelandic Common Crawl Corpus (IC3)
* The [Icelandic Crawled Corpus](https://huggingface.co/datasets/jonfd/ICC) (ICC)
* The [Multilingual Colossal Clean Crawled Corpus](https://huggingface.co/datasets/mc4) (mC4) - Icelandic and Norwegian text obtained from .is and .no domains, respectively
The total size of the corpus after document-level deduplication and filtering was 7.41B tokens, split equally between the two languages. The model was trained using a WordPiece tokenizer with a vocabulary size of 64,105 for 1.1 million steps, and otherwise with default settings.
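A minimal sketch of loading the pretrained encoder for downstream use (standard `transformers` usage; the Icelandic example sentence is made up):

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("jonfd/electra-small-is-no")
model = AutoModel.from_pretrained("jonfd/electra-small-is-no")

inputs = tokenizer("Reykjavík er höfuðborg Íslands.", return_tensors="pt")
outputs = model(**inputs)  # last_hidden_state can feed a task-specific head
```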
# Acknowledgments
This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC).
This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by [Almannarómur](https://almannaromur.is/), is funded by the Icelandic Ministry of Education, Science and Culture. |
jonfd/electra-small-nordic | fb032b455f0e64897fbe56d1933afe4a5900dc9c | 2022-01-31T23:41:26.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"is",
"no",
"sv",
"da",
"dataset:igc",
"dataset:ic3",
"dataset:jonfd/ICC",
"dataset:mc4",
"transformers",
"license:cc-by-4.0"
] | null | false | jonfd | null | jonfd/electra-small-nordic | 2 | null | transformers | 24,327 | ---
language:
- is
- no
- sv
- da
license: cc-by-4.0
datasets:
- igc
- ic3
- jonfd/ICC
- mc4
---
# Nordic ELECTRA-Small
This model was pretrained on the following corpora:
* The [Icelandic Gigaword Corpus](http://igc.arnastofnun.is/) (IGC)
* The Icelandic Common Crawl Corpus (IC3)
* The [Icelandic Crawled Corpus](https://huggingface.co/datasets/jonfd/ICC) (ICC)
* The [Multilingual Colossal Clean Crawled Corpus](https://huggingface.co/datasets/mc4) (mC4) - Icelandic, Norwegian, Swedish and Danish text obtained from .is, .no, .se and .dk domains, respectively
The total size of the corpus after document-level deduplication and filtering was 14.82B tokens, split equally between the four languages. The model was trained using a WordPiece tokenizer with a vocabulary size of 96,105 for one million steps with a batch size of 256, and otherwise with default settings.
# Acknowledgments
This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC).
This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by [Almannarómur](https://almannaromur.is/), is funded by the Icelandic Ministry of Education, Science and Culture. |
jonx18/DialoGPT-small-Creed-Odyssey | 9d1a4f1d1a0159c0fc10a717b91f20947a09a964 | 2021-05-23T06:02:34.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | jonx18 | null | jonx18/DialoGPT-small-Creed-Odyssey | 2 | null | transformers | 24,328 | # Summary
The app was conceived with the idea of recreating and generating new dialogs for existing games.
To generate a training dataset, the following steps were followed:
1. Download pages from the [Assassins Creed Fandom Wiki](https://assassinscreed.fandom.com/wiki/Special:Export) in the category "Memories relived using the Animus HR-8.5".
2. Keep only text elements from XML.
3. Keep only the dialog section.
4. Parse the wiki markup with [wikitextparser](https://pypi.org/project/wikitextparser/) (a sketch follows this list).
5. Clean the description of each dialog's context.
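A sketch of step 4, assuming `wikitextparser`'s `parse`/`plain_text` API (the raw markup string is a placeholder):

```python
import wikitextparser as wtp

raw_wikitext = "..."  # placeholder: one dialog section from the XML export

parsed = wtp.parse(raw_wikitext)
print(parsed.plain_text())  # wiki markup stripped, readable dialog text kept
```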
Due to the small size of the resulting dataset, a transfer-learning approach was taken, based on a pretrained ["DialoGPT" model](https://huggingface.co/microsoft/DialoGPT-small). |
joaoalvarenga/model-sid-voxforge-cetuc-0 | ea68a8848b59b93805d42ec171fe05ac632c19d2 | 2021-07-06T08:34:35.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | joaoalvarenga | null | joaoalvarenga/model-sid-voxforge-cetuc-0 | 2 | null | transformers | 24,329 | Entry not found |
joaoalvarenga/model-sid-voxforge-cetuc-1 | ed9efe0d5ae67f14976ef10aa2e4ffb4e044e91c | 2021-07-06T08:41:07.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | joaoalvarenga | null | joaoalvarenga/model-sid-voxforge-cetuc-1 | 2 | null | transformers | 24,330 | Entry not found |
joaoalvarenga/model-sid-voxforge-cv-cetuc-0 | c69ca002c640ca087814c51aaa1c06a3ce30609a | 2021-07-06T08:50:10.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"apache-2.0",
"portuguese-speech-corpus",
"xlsr-fine-tuning-week",
"PyTorch",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | joaoalvarenga | null | joaoalvarenga/model-sid-voxforge-cv-cetuc-0 | 2 | null | transformers | 24,331 | ---
language: pt
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- apache-2.0
- portuguese-speech-corpus
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
- PyTorch
license: apache-2.0
model-index:
- name: JoaoAlvarenga XLSR Wav2Vec2 Large 53 Portuguese A
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice pt
type: common_voice
args: pt
metrics:
- name: Test WER
type: wer
value: 15.037146%
---
# Wav2Vec2-Large-XLSR-53-Portuguese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Portuguese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "pt", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Portuguese test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "pt", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\'\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result (wer)**: 15.037146%
## Training
The Common Voice `train`, `validation` datasets were used for training.
The script used for training can be found at: https://github.com/joaoalvarenga/wav2vec2-large-xlsr-53-portuguese/blob/main/fine-tuning.py |
joaoalvarenga/model-sid-voxforge-cv-cetuc-1 | 5044b0cffcc449f46dfda0cf8c960a6c6971c3d7 | 2021-07-06T08:54:15.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | joaoalvarenga | null | joaoalvarenga/model-sid-voxforge-cv-cetuc-1 | 2 | null | transformers | 24,332 | Entry not found |
joaoalvarenga/model-sid-voxforge-cv-cetuc-2 | 1d07a3a76bed238ea02674f814222c7c50fcb2e4 | 2021-07-06T09:00:49.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | joaoalvarenga | null | joaoalvarenga/model-sid-voxforge-cv-cetuc-2 | 2 | null | transformers | 24,333 | Entry not found |
joaoalvarenga/wav2vec2-cetuc-sid-voxforge-mls-0 | 47bd59974ed20e83deec2266a53a308d1c56bf87 | 2021-07-06T09:04:12.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | joaoalvarenga | null | joaoalvarenga/wav2vec2-cetuc-sid-voxforge-mls-0 | 2 | null | transformers | 24,334 | Entry not found |
joaoalvarenga/wav2vec2-cetuc-sid-voxforge-mls-1 | e51d5113fc19eae25e86f06dc5f5d121c8c93944 | 2021-07-05T13:42:18.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | joaoalvarenga | null | joaoalvarenga/wav2vec2-cetuc-sid-voxforge-mls-1 | 2 | null | transformers | 24,335 | Entry not found |
joaoalvarenga/wav2vec2-cetuc-sid-voxforge-mls-2 | 6953bc0d961afcce9575d06e88eac04fcda69406 | 2021-07-05T13:27:23.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | joaoalvarenga | null | joaoalvarenga/wav2vec2-cetuc-sid-voxforge-mls-2 | 2 | null | transformers | 24,336 | Entry not found |
joaoalvarenga/wav2vec2-cv-coral-300ep | b95dd48c713752fe4a112e75cf5304105f3550e1 | 2021-07-12T12:35:13.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | joaoalvarenga | null | joaoalvarenga/wav2vec2-cv-coral-300ep | 2 | null | transformers | 24,337 | Entry not found |
joaoalvarenga/wav2vec2-cv-coral-30ep | de72d61a3cb4d8aa91d7de4c1326e39be939733c | 2021-07-06T09:07:11.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"apache-2.0",
"portuguese-speech-corpus",
"xlsr-fine-tuning-week",
"PyTorch",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | joaoalvarenga | null | joaoalvarenga/wav2vec2-cv-coral-30ep | 2 | 1 | transformers | 24,338 | ---
language: pt
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- apache-2.0
- portuguese-speech-corpus
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
- PyTorch
license: apache-2.0
model-index:
- name: JoaoAlvarenga XLSR Wav2Vec2 Large 53 Portuguese A
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice pt
type: common_voice
args: pt
metrics:
- name: Test WER
type: wer
value: 15.037146%
---
# Wav2Vec2-Large-XLSR-53-Portuguese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Portuguese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "pt", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Portuguese test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "pt", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\'\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result (wer)**: 15.037146%
## Training
The Common Voice `train`, `validation` datasets were used for training.
The script used for training can be found at: https://github.com/joaoalvarenga/wav2vec2-large-xlsr-53-portuguese/blob/main/fine-tuning.py |
joaoalvarenga/wav2vec2-large-xlsr-portuguese | b91070a8503b0f6327382210475d6cc214a6e23f | 2021-07-06T09:30:27.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"apache-2.0",
"portuguese-speech-corpus",
"xlsr-fine-tuning-week",
"PyTorch",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | joaoalvarenga | null | joaoalvarenga/wav2vec2-large-xlsr-portuguese | 2 | null | transformers | 24,339 | ---
language: pt
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- apache-2.0
- portuguese-speech-corpus
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
- PyTorch
license: apache-2.0
model-index:
- name: JoaoAlvarenga XLSR Wav2Vec2 Large 53 Portuguese
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice pt
type: common_voice
args: pt
metrics:
- name: Test WER
type: wer
value: 13.766801%
---
# Wav2Vec2-Large-XLSR-53-Portuguese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Portuguese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "pt", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Portuguese test data of Common Voice.
You need to install Enelvo, an open-source spell corrector trained on Twitter user posts:
`pip install enelvo`
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from enelvo import normaliser
import re
test_dataset = load_dataset("common_voice", "pt", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\'\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
norm = normaliser.Normaliser()
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = [norm.normalise(i) for i in processor.batch_decode(pred_ids)]
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result (wer)**: 13.766801%
## Training
The Common Voice `train`, `validation` datasets were used for training.
The script used for training can be found at: https://github.com/joaoalvarenga/wav2vec2-large-xlsr-53-portuguese/blob/main/fine-tuning.py
|
josedlhm/new_model | ba7dc235947cb822ae0599b8b9e4f0ac0a917f5f | 2021-11-24T09:00:54.000Z | [
"pytorch",
"openai-gpt",
"text-generation",
"transformers"
] | text-generation | false | josedlhm | null | josedlhm/new_model | 2 | null | transformers | 24,340 | Entry not found |
josephgatto/paint_doctor_description_identification | 606ea89f28ad4e6c984fcb0326f99bdcfe4e76ac | 2021-11-01T23:51:22.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | josephgatto | null | josephgatto/paint_doctor_description_identification | 2 | null | transformers | 24,341 | Entry not found |
joshuacalloway/csc575finalproject | a1ff07feec0852f7f16fcfb289bcb30e2cca0c99 | 2021-03-16T00:46:04.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | joshuacalloway | null | joshuacalloway/csc575finalproject | 2 | null | transformers | 24,342 | |
jp1924/KoBERT_NSMC_TEST | 1ce81ea62b7046ec0d37eb87b356d1a29f0b2a83 | 2022-02-15T07:12:00.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | jp1924 | null | jp1924/KoBERT_NSMC_TEST | 2 | null | transformers | 24,343 | Entry not found |
jroussin/gpt2-ontapdoc-gen | eb6b6d0a023f800ba5e0dde3ab0fef97ecf0cdf4 | 2021-11-18T14:36:20.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | jroussin | null | jroussin/gpt2-ontapdoc-gen | 2 | null | transformers | 24,344 | Entry not found |
jsgao/bert-eli5c-retriever | 65f16a6076220e2aab6a6811459d023bec857ec9 | 2021-12-14T21:09:37.000Z | [
"pytorch",
"bert",
"feature-extraction",
"en",
"dataset:eli5_category",
"transformers",
"license:mit"
] | feature-extraction | false | jsgao | null | jsgao/bert-eli5c-retriever | 2 | null | transformers | 24,345 | ---
language: en
license: mit
datasets:
- eli5_category
---
Document retriever model for the [ELI5-Category dataset](https://celeritasml.netlify.app/posts/2021-12-01-eli5c/); it requires an additional projection layer on top (see the GitHub [repo](https://github.com/rexarski/ANLY580-final-project/blob/main/model_deploy/models/eli5c_qa_model.py)).
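A minimal embedding sketch (standard `transformers` usage; the question is made up, and the repo's projection layer must still be applied on top of the CLS vector):

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("jsgao/bert-eli5c-retriever")
model = AutoModel.from_pretrained("jsgao/bert-eli5c-retriever")

inputs = tokenizer("why do birds sing in the morning?", return_tensors="pt")
with torch.no_grad():
    # [CLS] vector; feed this into the external projection layer from the repo
    cls_embedding = model(**inputs).last_hidden_state[:, 0]
```
|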
ju-bezdek/slovakbert-conll2003-sk-ner | f287da98afb874101fcc3985c14a4c3cf17b29c5 | 2022-01-12T20:37:34.000Z | [
"pytorch",
"dataset:ju-bezdek/conll2003-SK-NER",
"generated_from_trainer",
"license:mit",
"model-index"
] | null | false | ju-bezdek | null | ju-bezdek/slovakbert-conll2003-sk-ner | 2 | null | null | 24,346 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- ju-bezdek/conll2003-SK-NER
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: outputs
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: ju-bezdek/conll2003-SK-NER
type: ju-bezdek/conll2003-SK-NER
args: conll2003-SK-NER
metrics:
- name: Precision
type: precision
value: 0.8189727994593682
- name: Recall
type: recall
value: 0.8389581169955002
- name: F1
type: f1
value: 0.8288450029922203
- name: Accuracy
type: accuracy
value: 0.9526157920337243
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs
This model is a fine-tuned version of [gerulata/slovakbert](https://huggingface.co/gerulata/slovakbert) on the [ju-bezdek/conll2003-SK-NER](https://huggingface.co/datasets/ju-bezdek/conll2003-SK-NER) dataset.
It achieves the following results on the evaluation (validation) set:
- Loss: 0.1752
- Precision: 0.8190
- Recall: 0.8390
- F1: 0.8288
- Accuracy: 0.9526
## Model description
More information needed
## Code example
```python
from transformers import pipeline
from spacy import displacy
model_path="ju-bezdek/slovakbert-conll2003-sk-ner"
aggregation_strategy="max"
ner_pipeline = pipeline(task='ner', model=model_path, aggregation_strategy=aggregation_strategy)
input_sentence= "Ruský premiér Viktor Černomyrdin v piatok povedal, že prezident Boris Jeľcin , ktorý je na dovolenke mimo Moskvy , podporil mierový plán šéfa bezpečnosti Alexandra Lebedu pre Čečensko, uviedla tlačová agentúra Interfax"
ner_ents = ner_pipeline(input_sentence)
print(ner_ents)
ent_group_labels = [ner_pipeline.model.config.id2label[i][2:] for i in ner_pipeline.model.config.id2label if i>0]
options = {"ents":ent_group_labels}
displacy_ents = [{"start": ent["start"], "end": ent["end"], "label": ent["entity_group"]} for ent in ner_ents]
displacy.render({"text": input_sentence, "ents": displacy_ents}, style="ent", options=options, jupyter=True, manual=True)
```
### Result:
<div>
<span class="tex2jax_ignore"><div class="entities" style="line-height: 2.5; direction: ltr">
<mark class="entity" style="background: #ddd; padding: 0.45em 0.6em; margin: 0 0.25em; line-height: 1; border-radius: 0.35em;">
Ruský
<span style="font-size: 0.8em; font-weight: bold; line-height: 1; border-radius: 0.35em; vertical-align: middle; margin-left: 0.5rem">MISC</span>
</mark>
premiér
<mark class="entity" style="background: #ddd; padding: 0.45em 0.6em; margin: 0 0.25em; line-height: 1; border-radius: 0.35em;">
Viktor Černomyrdin
<span style="font-size: 0.8em; font-weight: bold; line-height: 1; border-radius: 0.35em; vertical-align: middle; margin-left: 0.5rem">PER</span>
</mark>
v piatok povedal, že prezident
<mark class="entity" style="background: #ddd; padding: 0.45em 0.6em; margin: 0 0.25em; line-height: 1; border-radius: 0.35em;">
Boris Jeľcin,
<span style="font-size: 0.8em; font-weight: bold; line-height: 1; border-radius: 0.35em; vertical-align: middle; margin-left: 0.5rem">PER</span>
</mark>
, ktorý je na dovolenke mimo
<mark class="entity" style="background: #ff9561; padding: 0.45em 0.6em; margin: 0 0.25em; line-height: 1; border-radius: 0.35em;">
Moskvy
<span style="font-size: 0.8em; font-weight: bold; line-height: 1; border-radius: 0.35em; vertical-align: middle; margin-left: 0.5rem">LOC</span>
</mark>
, podporil mierový plán šéfa bezpečnosti
<mark class="entity" style="background: #ddd; padding: 0.45em 0.6em; margin: 0 0.25em; line-height: 1; border-radius: 0.35em;">
Alexandra Lebedu
<span style="font-size: 0.8em; font-weight: bold; line-height: 1; border-radius: 0.35em; vertical-align: middle; margin-left: 0.5rem">PER</span>
</mark>
pre
<mark class="entity" style="background: #ff9561; padding: 0.45em 0.6em; margin: 0 0.25em; line-height: 1; border-radius: 0.35em;">
Čečensko,
<span style="font-size: 0.8em; font-weight: bold; line-height: 1; border-radius: 0.35em; vertical-align: middle; margin-left: 0.5rem">LOC</span>
</mark>
uviedla tlačová agentúra
<mark class="entity" style="background: #7aecec; padding: 0.45em 0.6em; margin: 0 0.25em; line-height: 1; border-radius: 0.35em;">
Interfax
<span style="font-size: 0.8em; font-weight: bold; line-height: 1; border-radius: 0.35em; vertical-align: middle; margin-left: 0.5rem">ORG</span>
</mark>
</div></span>
</div>
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3237 | 1.0 | 878 | 0.2541 | 0.7125 | 0.8059 | 0.7563 | 0.9283 |
| 0.1663 | 2.0 | 1756 | 0.2370 | 0.7775 | 0.8090 | 0.7929 | 0.9394 |
| 0.1251 | 3.0 | 2634 | 0.2289 | 0.7732 | 0.8029 | 0.7878 | 0.9385 |
| 0.0984 | 4.0 | 3512 | 0.2818 | 0.7294 | 0.8189 | 0.7715 | 0.9294 |
| 0.0808 | 5.0 | 4390 | 0.3138 | 0.7615 | 0.7900 | 0.7755 | 0.9326 |
| 0.0578 | 6.0 | 5268 | 0.3072 | 0.7548 | 0.8222 | 0.7871 | 0.9370 |
| 0.0481 | 7.0 | 6146 | 0.2778 | 0.7897 | 0.8156 | 0.8025 | 0.9408 |
| 0.0414 | 8.0 | 7024 | 0.3336 | 0.7695 | 0.8201 | 0.7940 | 0.9389 |
| 0.0268 | 9.0 | 7902 | 0.3294 | 0.7868 | 0.8140 | 0.8002 | 0.9409 |
| 0.0204 | 10.0 | 8780 | 0.3693 | 0.7657 | 0.8239 | 0.7938 | 0.9376 |
| 0.016 | 11.0 | 9658 | 0.3816 | 0.7932 | 0.8242 | 0.8084 | 0.9425 |
| 0.0108 | 12.0 | 10536 | 0.3607 | 0.7929 | 0.8256 | 0.8089 | 0.9431 |
| 0.0078 | 13.0 | 11414 | 0.3980 | 0.7915 | 0.8240 | 0.8074 | 0.9423 |
| 0.0062 | 14.0 | 12292 | 0.4096 | 0.7995 | 0.8247 | 0.8119 | 0.9436 |
| 0.0035 | 15.0 | 13170 | 0.4177 | 0.8006 | 0.8251 | 0.8127 | 0.9438 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
juanhebert/wav2vec2-indonesia | 7438d6018537add750b04b6a28dcddbefa0546be | 2022-02-24T12:34:31.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | juanhebert | null | juanhebert/wav2vec2-indonesia | 2 | null | transformers | 24,347 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-indonesia
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-indonesia
This model is a fine-tuned version of [juanhebert/wav2vec2-indonesia](https://huggingface.co/juanhebert/wav2vec2-indonesia) on the Common Voice "id" (Indonesian) dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0727
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 5
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 2.8744 | 0.68 | 200 | 3.0301 | 1.0 |
| 2.868 | 1.36 | 400 | 3.0727 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
julien-c/policy-distilbert-7d | 41a7c98f1285a7e5ef19095dab11f0ac71ac1406 | 2020-12-26T10:04:20.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | julien-c | null | julien-c/policy-distilbert-7d | 2 | null | transformers | 24,348 | Entry not found |
juliusco/distilbert-base-uncased-finetuned-squad | 48a80e81aef448e5ba67c5df7a10cf26924d2ae8 | 2022-06-13T13:10:17.000Z | [
"pytorch",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | juliusco | null | juliusco/distilbert-base-uncased-finetuned-squad | 2 | null | transformers | 24,349 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3672
## Model description
More information needed
## Intended uses & limitations
More information needed
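A minimal question-answering sketch (standard `transformers` pipeline usage; question and context are made up):

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="juliusco/distilbert-base-uncased-finetuned-squad",
)

result = qa(
    question="What was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"])
```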
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.1755 | 1.0 | 11066 | 1.1177 |
| 0.9004 | 2.0 | 22132 | 1.1589 |
| 0.6592 | 3.0 | 33198 | 1.2326 |
| 0.4823 | 4.0 | 44264 | 1.3672 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
junnyu/autobert-small-light | 100ecad4a3bd4cc26d74a4002565aac4ccb58599 | 2021-08-02T13:50:03.000Z | [
"pytorch",
"autobert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | junnyu | null | junnyu/autobert-small-light | 2 | null | transformers | 24,350 | Entry not found |
junnyu/eHealth_pytorch | 98f67e85f254c6bd05505f8036561df80b3bda5b | 2022-01-13T10:29:01.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | junnyu | null | junnyu/eHealth_pytorch | 2 | null | transformers | 24,351 | https://github.com/PaddlePaddle/Research/tree/master/KG/eHealth |
junzai/bert_test | cdc00fb4bfe0c5c4e4e626f3937a59ef64d482b0 | 2021-07-21T00:59:17.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | junzai | null | junzai/bert_test | 2 | null | transformers | 24,352 | Entry not found |
jx88/xlm-roberta-base-finetuned-marc-en-j-run | d708279913480f4db1f69e5419d1d416ec6824bf | 2021-10-23T03:13:16.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"dataset:amazon_reviews_multi",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | jx88 | null | jx88/xlm-roberta-base-finetuned-marc-en-j-run | 2 | null | transformers | 24,353 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc-en-j-run
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-en-j-run
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9189
- Mae: 0.4634
## Model description
Trained following the MLT Tokyo Transformers workshop run by Hugging Face.
## Intended uses & limitations
More information needed
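A minimal usage sketch (standard `transformers` pipeline usage; the review text is made up, and the predicted labels correspond to the star-rating classes learned from amazon_reviews_multi):

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="jx88/xlm-roberta-base-finetuned-marc-en-j-run",
)

print(clf("I absolutely loved this product, would buy again!"))
```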
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.2327 | 1.0 | 235 | 1.0526 | 0.6341 |
| 0.9943 | 2.0 | 470 | 0.9189 | 0.4634 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
kamalkraj/bioelectra-base-discriminator-pubmed-pmc | 1f11d75b84a57ab99b19b74ec2c00d3d33551496 | 2021-06-10T13:45:44.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
] | null | false | kamalkraj | null | kamalkraj/bioelectra-base-discriminator-pubmed-pmc | 2 | null | transformers | 24,354 | ## BioELECTRA:Pretrained Biomedical text Encoder using Discriminators
Recent advancements in pretraining strategies in NLP have shown a significant improvement in the performance of models on various text mining tasks. In this paper, we introduce BioELECTRA, a biomedical domain-specific language encoder model that adapts ELECTRA (Clark et al., 2020) for the Biomedical domain. BioELECTRA outperforms the previous models and achieves state of the art (SOTA) on all the 13 datasets in BLURB benchmark and on all the 4 Clinical datasets from BLUE Benchmark across 7 NLP tasks. BioELECTRA pretrained on PubMed and PMC full text articles performs very well on Clinical datasets as well. BioELECTRA achieves new SOTA 86.34%(1.39% accuracy improvement) on MedNLI and 64% (2.98% accuracy improvement) on PubMedQA dataset.
For a detailed description and experimental results, please refer to our paper [BioELECTRA:Pretrained Biomedical text Encoder using Discriminators](https://www.aclweb.org/anthology/2021.bionlp-1.16/).
## How to use the discriminator in `transformers`
```python
from transformers import ElectraForPreTraining, ElectraTokenizerFast
import torch

# Load the pretrained discriminator and its tokenizer
discriminator = ElectraForPreTraining.from_pretrained("kamalkraj/bioelectra-base-discriminator-pubmed")
tokenizer = ElectraTokenizerFast.from_pretrained("kamalkraj/bioelectra-base-discriminator-pubmed")

sentence = "The quick brown fox jumps over the lazy dog"
fake_sentence = "The quick brown fox fake over the lazy dog"  # "fake" replaces the original "jumps"

fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
discriminator_outputs = discriminator(fake_inputs)
# Map each logit to 0 (original token) or 1 (replaced token); positions include [CLS]/[SEP]
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)

[print("%7s" % token, end="") for token in fake_tokens]
[print("%7s" % int(prediction), end="") for prediction in predictions[0].tolist()]
``` |
kangnichaluo/cb | c77eda357c1e977faf767f185c8ad36244e55bfa | 2021-05-30T12:29:10.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | kangnichaluo | null | kangnichaluo/cb | 2 | null | transformers | 24,355 | learning rate: 5e-5
training epochs: 5
batch size: 8
seed: 42
model: bert-base-uncased
trained on CB, which was converted into two-way NLI classification (predicting the entailment or not-entailment class)
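
A minimal usage sketch (hypothetical: the premise/hypothesis input order and the mapping of the two logits to entailment / not-entailment are assumptions, since neither is documented here):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("kangnichaluo/cb")
model = AutoModelForSequenceClassification.from_pretrained("kangnichaluo/cb")

premise = "The dog is sleeping on the couch."
hypothesis = "An animal is resting."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # two class scores; label order is an assumption
```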
|
kangnichaluo/mnli-1 | 19afea85b542d7cf4695f545750d170c648a72eb | 2021-05-25T11:36:25.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | kangnichaluo | null | kangnichaluo/mnli-1 | 2 | null | transformers | 24,356 | learning rate: 2e-5
training epochs: 3
batch size: 64
seed: 42
model: bert-base-uncased
trained on MNLI, which was converted into two-way NLI classification (predicting the entailment or not-entailment class) |
kangnichaluo/mnli-3 | 5d9e7bb8612df5ff95c485034b0b64aa534acdc7 | 2021-05-25T11:46:40.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | kangnichaluo | null | kangnichaluo/mnli-3 | 2 | null | transformers | 24,357 | learning rate: 2e-5
training epochs: 3
batch size: 64
seed: 13
model: bert-base-uncased
trained on MNLI, which was converted into two-way NLI classification (predicting the entailment or not-entailment class) |
kangnichaluo/mnli-4 | 570e51258bd95747d9588662f232a95034ce7a65 | 2021-05-25T12:36:39.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | kangnichaluo | null | kangnichaluo/mnli-4 | 2 | null | transformers | 24,358 | learning rate: 2e-5
training epochs: 3
batch size: 64
seed: 87
model: bert-base-uncased
trained on MNLI, which was converted into two-way NLI classification (predicting the entailment or not-entailment class) |
kangnichaluo/mnli-5 | 48487c7d1c2bda8826105d459b86127dc4783985 | 2021-05-25T12:41:28.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | kangnichaluo | null | kangnichaluo/mnli-5 | 2 | null | transformers | 24,359 | learning rate: 2e-5
training epochs: 3
batch size: 64
seed: 111
model: bert-base-uncased
trained on MNLI, which was converted into two-way NLI classification (predicting the entailment or not-entailment class) |
kangnichaluo/mnli-cb | 9329fa10c18614c42c9826f2abf2743eb43d4d00 | 2021-05-30T12:29:33.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | kangnichaluo | null | kangnichaluo/mnli-cb | 2 | null | transformers | 24,360 | learning rate: 3e-5
training epochs: 5
batch size: 8
seed: 42
model: bert-base-uncased
The model is pretrained on MNLI (we use kangnichaluo/mnli-2 directly) and then fine-tuned on CB, which was converted into two-way NLI classification (predicting the entailment or not-entailment class) |
kaushikacharya/dummy-model | fd305a2fb53109c4cede2d289f6b51d19c26728a | 2021-08-21T15:26:32.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | kaushikacharya | null | kaushikacharya/dummy-model | 2 | null | transformers | 24,361 | Entry not found |
kevinzyz/chinese-bert-wwm-ext-finetuned-cola-e3 | 250007853fa5abcfd8c2a5a03c6291c2ca2b792a | 2021-11-20T04:13:18.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | kevinzyz | null | kevinzyz/chinese-bert-wwm-ext-finetuned-cola-e3 | 2 | null | transformers | 24,362 | Entry not found |
khanhpd2/distilBERT-emotionv2 | ccf81702193b2d3f545f93d434eac3b52871bb8c | 2021-11-25T13:15:29.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | khanhpd2 | null | khanhpd2/distilBERT-emotionv2 | 2 | null | transformers | 24,363 | Entry not found |
khanhpd2/distilbert-emotion | 40994a61f4b77cd008daf7d5fb06ff6f49389d59 | 2021-11-25T11:12:38.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | khanhpd2 | null | khanhpd2/distilbert-emotion | 2 | null | transformers | 24,364 | Entry not found |
khizon/bert-unreliable-news-eng-title | 1eca62fc68aa31083f8ac2d77e705a4b41212858 | 2022-01-14T01:20:35.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | khizon | null | khizon/bert-unreliable-news-eng-title | 2 | null | transformers | 24,365 | Entry not found |
kika2000/wav2vec2-large-xls-r-300m-test_my-colab | c7193efc0a9d288316b2e0d0c435152ab063e3c6 | 2022-01-31T10:04:12.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | kika2000 | null | kika2000/wav2vec2-large-xls-r-300m-test_my-colab | 2 | null | transformers | 24,366 | Entry not found |
kingabzpro/wav2vec2-large-xls-r-300m-Swedish | eccc472603378b0e28ac1503bef6e392de6e5604 | 2022-03-24T11:58:19.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"sv-SE",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | kingabzpro | null | kingabzpro/wav2vec2-large-xls-r-300m-Swedish | 2 | 1 | transformers | 24,367 | ---
language:
- sv-SE
license: apache-2.0
tags:
- automatic-speech-recognition
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
metrics:
- wer
- cer
model-index:
- name: wav2vec2-xls-r-300m-swedish
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: mozilla-foundation/common_voice_8_0
name: Common Voice sv-SE
args: sv-SE
metrics:
- type: wer
value: 24.73
name: Test WER
- type: cer
value: 7.58
name: Test CER
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-Swedish
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3641
- Wer: 0.2473
- Cer: 0.0758
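The card omits a usage section, so the following is a minimal inference sketch under the usual assumptions for XLS-R checkpoints (16 kHz mono input; `sample.wav` is a placeholder file name):

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "kingabzpro/wav2vec2-large-xls-r-300m-Swedish"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, rate = torchaudio.load("sample.wav")  # placeholder recording
speech = torchaudio.functional.resample(speech, rate, 16_000).squeeze()
inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1)))  # greedy CTC decoding
```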
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 6.1097 | 5.49 | 500 | 3.1422 | 1.0 | 1.0 |
| 2.985 | 10.98 | 1000 | 1.7357 | 0.9876 | 0.4125 |
| 1.0363 | 16.48 | 1500 | 0.4773 | 0.3510 | 0.1047 |
| 0.6111 | 21.97 | 2000 | 0.3937 | 0.2998 | 0.0910 |
| 0.4942 | 27.47 | 2500 | 0.3779 | 0.2776 | 0.0844 |
| 0.4421 | 32.96 | 3000 | 0.3745 | 0.2630 | 0.0807 |
| 0.4018 | 38.46 | 3500 | 0.3685 | 0.2553 | 0.0781 |
| 0.3759 | 43.95 | 4000 | 0.3618 | 0.2488 | 0.0761 |
| 0.3646 | 49.45 | 4500 | 0.3641 | 0.2473 | 0.0758 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
kingabzpro/wav2vec2-large-xlsr-53-wolof | da78be635d3b398916867ceb704dfac3dd413d76 | 2021-07-06T09:36:05.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"WOLOF",
"dataset:AI4D Baamtu Datamation - Automatic Speech Recognition in WOLOF",
"transformers",
"speech",
"audio",
"license:apache-2.0"
] | automatic-speech-recognition | false | kingabzpro | null | kingabzpro/wav2vec2-large-xlsr-53-wolof | 2 | 1 | transformers | 24,368 | ---
language: WOLOF
datasets:
- AI4D Baamtu Datamation - Automatic Speech Recognition in WOLOF
tags:
- speech
- audio
- automatic-speech-recognition
license: apache-2.0
metrics:
- WER
---
## Evaluation on WOLOF Test
[](https://github.com/kingabzpro/WOLOF-ASR-Wav2Vec2)
```python
import pandas as pd
from datasets import load_dataset, load_metric,Dataset
from tqdm import tqdm
import torch
import soundfile as sf
import torchaudio
from transformers import Wav2Vec2ForCTC
from transformers import Wav2Vec2Processor
from transformers import Wav2Vec2FeatureExtractor
from transformers import Wav2Vec2CTCTokenizer
model_name = "kingabzpro/wav2vec2-large-xlsr-53-wolof"
device = "cuda"
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(model_name)
val =pd.read_csv("../input/automatic-speech-recognition-in-wolof/Test.csv")
val["path"] = "../input/automatic-speech-recognition-in-wolof/Noise Removed/tmp/WOLOF_ASR_dataset/noise_remove/"+val["ID"]+".wav"
val.rename(columns = {'transcription':'sentence'}, inplace = True)
common_voice_val = Dataset.from_pandas(val)
def speech_file_to_array_fn_test(batch):
speech_array, sampling_rate = sf.read(batch["path"])#(.wav) 16000 sample rate
batch["speech"] = speech_array
batch["sampling_rate"] = sampling_rate
return batch
def prepare_dataset_test(batch):
# check that all files have the correct sampling rate
assert (
len(set(batch["sampling_rate"])) == 1
), f"Make sure all inputs have the same sampling rate of {processor.feature_extractor.sampling_rate}."
batch["input_values"] = processor(batch["speech"], padding=True,sampling_rate=batch["sampling_rate"][0]).input_values
return batch
common_voice_val = common_voice_val.remove_columns([ "ID","age", "down_votes", "gender", "up_votes"]) # Remove columns
common_voice_val = common_voice_val.map(speech_file_to_array_fn_test, remove_columns=common_voice_val.column_names)# Applying speech_file_to_array function
common_voice_val = common_voice_val.map(prepare_dataset_test, remove_columns=common_voice_val.column_names, batch_size=8, num_proc=4, batched=True)# Applying prepare_dataset_test function
final_pred = []
for i in tqdm(range(common_voice_val.shape[0])):# Testing model on Wolof Dataset
input_dict = processor(common_voice_val[i]["input_values"], return_tensors="pt", padding=True)
logits = model(input_dict.input_values.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)[0]
prediction = processor.decode(pred_ids)
final_pred.append(prediction)
```
You can check my result on [Zindi](https://zindi.africa/competitions/ai4d-baamtu-datamation-automatic-speech-recognition-in-wolof/leaderboard), where I placed 8th in the AI4D Baamtu Datamation - Automatic Speech Recognition in WOLOF challenge.
**Result**: 7.88 % |
kipiiler/Rickbot | 366ef8e98393ea46d59df511f5c870aee54b34e7 | 2021-09-15T18:30:57.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | kipiiler | null | kipiiler/Rickbot | 2 | null | transformers | 24,369 | ---
tags:
- conversational
---
# RickSanchez |
kloon99/KML_Eula_generate_v2 | 215e1b72b08ee9695033afea18c8626f1c7bc2a2 | 2022-02-08T07:06:09.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | kloon99 | null | kloon99/KML_Eula_generate_v2 | 2 | null | transformers | 24,370 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: trained_model2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trained_model2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
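As a stopgap while the card is unfinished, here is a minimal text-generation sketch; it assumes the checkpoint works with the standard pipeline, and the EULA-style prompt is purely illustrative:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="kloon99/KML_Eula_generate_v2")
print(generator("This End User License Agreement (EULA)", max_length=60)[0]["generated_text"])
```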
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15.0
### Training results
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.9.1
- Datasets 1.14.0
- Tokenizers 0.10.3
|
koala/xlm-roberta-large-en | d70b003888d3496cfe379077a090cd8837a357f7 | 2021-12-06T18:11:52.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | koala | null | koala/xlm-roberta-large-en | 2 | null | transformers | 24,371 | Entry not found |
koala/xlm-roberta-large-ko | 2a3a172d57e74aebfb15d154e96e92e6856e80ab | 2021-12-10T08:02:32.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | koala | null | koala/xlm-roberta-large-ko | 2 | null | transformers | 24,372 | Entry not found |
kongkeaouch/wav2vec2-xls-r-300m-kh | 3041875f7e12739d3c5cbdb181fa9e624750e04e | 2022-01-21T20:50:59.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | kongkeaouch | null | kongkeaouch/wav2vec2-xls-r-300m-kh | 2 | null | transformers | 24,373 | Testing Khmer ASR baseline. |
korca/meaning-match-roberta-large | 236f6323aa65b96b964d770cf751930048ed2b24 | 2021-11-18T17:54:44.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | korca | null | korca/meaning-match-roberta-large | 2 | null | transformers | 24,374 | Entry not found |
korca/textfooler-roberta-base-mrpc-5 | aef073c48c1c23f4046ed33c84510454325f37f5 | 2022-02-04T18:39:44.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | korca | null | korca/textfooler-roberta-base-mrpc-5 | 2 | null | transformers | 24,375 | Entry not found |
kornesh/roberta-large-wechsel-hindi | 410ddb62cc01660b0577723703230bab40a7050a | 2021-11-14T04:38:43.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | kornesh | null | kornesh/roberta-large-wechsel-hindi | 2 | null | transformers | 24,376 | Entry not found |
kornwtp/sup-consert-base | 19b23ba17b2fee5785c91a3b6e8a3c7712d30ce8 | 2021-12-25T05:51:29.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | kornwtp | null | kornwtp/sup-consert-base | 2 | null | transformers | 24,377 | Entry not found |
krirk/wav2vec2-large-xls-r-300m-turkish-colab | 87b6460f556056d82b434caa747e00a5ca935595 | 2022-01-26T12:38:32.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | krirk | null | krirk/wav2vec2-large-xls-r-300m-turkish-colab | 2 | null | transformers | 24,378 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3942
- Wer: 0.3149
## Model description
More information needed
## Intended uses & limitations
More information needed
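Until the author fills this in, a minimal, hypothetical Turkish transcription sketch (the file name is a placeholder for a 16 kHz mono recording, and ffmpeg is assumed to be available for decoding):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="krirk/wav2vec2-large-xls-r-300m-turkish-colab",
)
print(asr("ornek_kayit.wav"))  # placeholder file name
```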
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.9921 | 3.67 | 400 | 0.7820 | 0.7857 |
| 0.4496 | 7.34 | 800 | 0.4630 | 0.4977 |
| 0.2057 | 11.01 | 1200 | 0.4293 | 0.4627 |
| 0.1328 | 14.68 | 1600 | 0.4464 | 0.4068 |
| 0.1009 | 18.35 | 2000 | 0.4461 | 0.3742 |
| 0.0794 | 22.02 | 2400 | 0.4328 | 0.3467 |
| 0.0628 | 25.69 | 2800 | 0.4036 | 0.3263 |
| 0.0497 | 29.36 | 3200 | 0.3942 | 0.3149 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
kwang2049/TSDAE-twitterpara2nli_stsb | 62f18ce66898810a2fb174544a69ecc0960a3181 | 2021-10-25T16:14:49.000Z | [
"pytorch",
"bert",
"feature-extraction",
"arxiv:2104.06979",
"transformers"
] | feature-extraction | false | kwang2049 | null | kwang2049/TSDAE-twitterpara2nli_stsb | 2 | null | transformers | 24,379 | # kwang2049/TSDAE-twitterpara2nli_stsb
This is a model from the paper ["TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning"](https://arxiv.org/abs/2104.06979). This model adapts the knowledge from the NLI and STSb data to the specific domain twitterpara. Training procedure of this model:
1. Initialized with [bert-base-uncased](https://huggingface.co/bert-base-uncased);
2. Unsupervised training on twitterpara with the TSDAE objective;
3. Supervised training on the NLI data with cross-entropy loss;
4. Supervised training on the STSb data with MSE loss.
The pooling method is CLS-pooling.
## Usage
To use this model, a convenient way is through [SentenceTransformers](https://github.com/UKPLab/sentence-transformers). Please install it via:
```bash
pip install sentence-transformers
```
And then load the model and use it to encode sentences:
```python
from sentence_transformers import SentenceTransformer, models
dataset = 'twitterpara'
model_name_or_path = f'kwang2049/TSDAE-{dataset}2nli_stsb'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
sentence_embeddings = model.encode(['This is the first sentence.', 'This is the second one.'])
```
## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit [USEB](https://github.com/UKPLab/useb):
```bash
pip install useb # Or git clone and pip install .
python -m useb.downloading all # Download both training and evaluation data
```
And then do the evaluation:
```python
from sentence_transformers import SentenceTransformer, models
import torch
from useb import run_on
dataset = 'twitterpara'
model_name_or_path = f'kwang2049/TSDAE-{dataset}2nli_stsb'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
@torch.no_grad()
def semb_fn(sentences) -> torch.Tensor:
return torch.Tensor(model.encode(sentences, show_progress_bar=False))
result = run_on(
dataset,
semb_fn=semb_fn,
eval_type='test',
data_eval_path='data-eval'
)
```
## Training
Please refer to [the page of TSDAE training](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/TSDAE) in SentenceTransformers.
## Cite & Authors
If you use the code for evaluation, feel free to cite our publication [TSDAE: Using Transformer-based Sequential Denoising Auto-Encoderfor Unsupervised Sentence Embedding Learning](https://arxiv.org/abs/2104.06979):
```bibtex
@article{wang-2021-TSDAE,
title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoderfor Unsupervised Sentence Embedding Learning",
author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.06979",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.06979",
}
``` |
kyo/distilbert-base-uncased-finetuned-imdb | 12c46d6473b9b6af159cd3f3c95ee4855ea03d1e | 2021-12-09T15:29:34.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | kyo | null | kyo/distilbert-base-uncased-finetuned-imdb | 2 | null | transformers | 24,380 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4718
## Model description
More information needed
## Intended uses & limitations
More information needed
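As a placeholder for the missing usage notes, a minimal fill-mask sketch (the example sentence is illustrative; the model was adapted to movie-review text):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="kyo/distilbert-base-uncased-finetuned-imdb")
for pred in fill_mask("This movie was an absolute [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```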
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.707 | 1.0 | 157 | 2.4883 |
| 2.572 | 2.0 | 314 | 2.4240 |
| 2.5377 | 3.0 | 471 | 2.4355 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
lagodw/plotly_gpt_neo_1_3B | 4b3ae52a273718abaa9cbe7a6fb8f514a9b6c86e | 2021-10-14T22:24:28.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | lagodw | null | lagodw/plotly_gpt_neo_1_3B | 2 | null | transformers | 24,381 | Entry not found |
lagodw/redditbot_gpt2_short | ecac7a3b8c4e1181925113427dfe7f0d3b9455ed | 2021-09-27T13:30:26.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | lagodw | null | lagodw/redditbot_gpt2_short | 2 | null | transformers | 24,382 | Entry not found |
laxya007/gpt2_BSA_Leg_ipr_OE | ab0c5c37a0e0f5bc6c992ce6f254bc04f098e7df | 2021-06-10T16:10:11.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | laxya007 | null | laxya007/gpt2_BSA_Leg_ipr_OE | 2 | null | transformers | 24,383 | Entry not found |
laxya007/gpt2_TS_DM_AS_CC_TM | 959eceb6b2663d407d380638545f3e81985f7778 | 2021-05-23T07:14:50.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | laxya007 | null | laxya007/gpt2_TS_DM_AS_CC_TM | 2 | null | transformers | 24,384 | Entry not found |
leeeki/bigbird-bart-base | d2294a92b8fed9040ff95464ce174914fb375cb2 | 2021-12-18T19:28:57.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | leeeki | null | leeeki/bigbird-bart-base | 2 | null | transformers | 24,385 | Entry not found |
lewtun/litmetnet-test-01 | dadb1dd5c6884ad2e6050f30cf9cd58da8f6a6ef | 2021-09-14T10:04:03.000Z | [
"pytorch",
"transformers",
"satflow",
"forecasting",
"timeseries",
"remote-sensing",
"license:mit"
] | null | false | lewtun | null | lewtun/litmetnet-test-01 | 2 | null | transformers | 24,386 | ---
license: mit
tags:
- satflow
- forecasting
- timeseries
- remote-sensing
---
# LitMetNet
## Model description
[More information needed]
## Intended uses & limitations
[More information needed]
## How to use
[More information needed]
## Limitations and bias
[More information needed]
## Training data
[More information needed]
## Training procedure
[More information needed]
## Evaluation results
[More information needed]
|
lewtun/metnet-test-with-config | 63623df16f75303a96b5e11fd7b81cfd6e7947e2 | 2021-09-06T10:23:45.000Z | [
"pytorch",
"transformers"
] | null | false | lewtun | null | lewtun/metnet-test-with-config | 2 | null | transformers | 24,387 | Entry not found |
lewtun/mt5-small-finetuned-mlsum | 4340696e79add49c32416ba77762a6e0a68a3341 | 2021-09-25T09:43:37.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"dataset:mlsum",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | lewtun | null | lewtun/mt5-small-finetuned-mlsum | 2 | null | transformers | 24,388 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mlsum
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-mlsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: mlsum
type: mlsum
args: es
metrics:
- name: Rouge1
type: rouge
value: 1.1475
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-mlsum
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the mlsum dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 1.1475
- Rouge2: 0.1284
- Rougel: 1.0634
- Rougelsum: 1.0778
- Gen Len: 3.7939
## Model description
More information needed
## Intended uses & limitations
More information needed
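Given the NaN training loss and very low ROUGE scores reported on this card, the checkpoint likely produces degenerate summaries; still, a standard (hypothetical) summarization call would look like this:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="lewtun/mt5-small-finetuned-mlsum")
article = "..."  # a Spanish news article; MLSUM (es) was the fine-tuning corpus
print(summarizer(article, max_length=48))
```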
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| nan | 1.0 | 808 | nan | 1.1475 | 0.1284 | 1.0634 | 1.0778 | 3.7939 |
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
lewtun/roberta-base-bne-finetuned-amazon_reviews_multi-finetuned-amazon_reviews_multi | bce3e6e954d4dd93ca86dab147a0f3929f0daef3 | 2021-08-22T18:59:30.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"dataset:amazon_reviews_multi",
"transformers",
"generated_from_trainer"
] | text-classification | false | lewtun | null | lewtun/roberta-base-bne-finetuned-amazon_reviews_multi-finetuned-amazon_reviews_multi | 2 | null | transformers | 24,389 | ---
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model_index:
- name: roberta-base-bne-finetuned-amazon_reviews_multi-finetuned-amazon_reviews_multi
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: es
metric:
name: Accuracy
type: accuracy
value: 0.9285
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-amazon_reviews_multi-finetuned-amazon_reviews_multi
This model was trained from scratch on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3595
- Accuracy: 0.9285
## Model description
More information needed
## Intended uses & limitations
More information needed
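A minimal classification sketch, assuming the checkpoint loads with the standard pipeline (the Spanish review is illustrative, and the label scheme is not documented on this card):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="lewtun/roberta-base-bne-finetuned-amazon_reviews_multi-finetuned-amazon_reviews_multi",
)
print(classifier("Me ha encantado este producto, lo recomiendo."))
```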
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.103 | 1.0 | 1250 | 0.2864 | 0.928 |
| 0.0407 | 2.0 | 2500 | 0.3595 | 0.9285 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
lf/lf_model_01 | 039e061eaad41b3cee3771eaf21fc95dfe825ff7 | 2022-02-11T07:32:22.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | lf | null | lf/lf_model_01 | 2 | null | transformers | 24,390 | Entry not found |
lgris/bp-cetuc100-xlsr | fa60b202685c3603aad251842c1d4cc900a9587f | 2021-11-27T21:05:35.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:common_voice",
"dataset:mls",
"dataset:cetuc",
"dataset:lapsbm",
"dataset:voxforge",
"dataset:tedx",
"dataset:sid",
"transformers",
"audio",
"speech",
"portuguese-speech-corpus",
"PyTorch",
"license:apache-2.0"
] | automatic-speech-recognition | false | lgris | null | lgris/bp-cetuc100-xlsr | 2 | null | transformers | 24,391 | ---
language: pt
datasets:
- common_voice
- mls
- cetuc
- lapsbm
- voxforge
- tedx
- sid
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- portuguese-speech-corpus
- automatic-speech-recognition
- speech
- PyTorch
license: apache-2.0
---
# cetuc100-xlsr: Wav2vec 2.0 with CETUC Dataset
This is a demonstration of a fine-tuned Wav2vec model for Brazilian Portuguese using the [CETUC](http://www02.smt.ufrj.br/~igor.quintanilha/alcaim.tar.gz) dataset. This dataset contains approximately 145 hours of Brazilian Portuguese speech distributed among 50 male and 50 female speakers, each pronouncing approximately 1,000 phonetically balanced sentences selected from the [CETEN-Folha](https://www.linguateca.pt/cetenfolha/) corpus.
In this notebook the model is tested against other available Brazilian Portuguese datasets.
| Dataset | Train | Valid | Test |
|--------------------------------|-------:|------:|------:|
| CETUC | 94h | -- | 5.4h |
| Common Voice | | -- | 9.5h |
| LaPS BM | | -- | 0.1h |
| MLS | | -- | 3.7h |
| Multilingual TEDx (Portuguese) | | -- | 1.8h |
| SID | | -- | 1.0h |
| VoxForge | | -- | 0.1h |
| Total | | -- | 21.6h |
#### Summary
| | CETUC | CV | LaPS | MLS | SID | TEDx | VF | AVG |
|----------------------|---------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|
| cetuc\_100 (demonstration below)| 0.446 | 0.856 | 0.089 | 0.967 | 1.172 | 0.929 | 0.902 | 0.765 |
| cetuc\_100 + 4-gram (demonstration below)|0.339 | 0.734 | 0.076 | 0.961 | 1.188 | 1.227 | 0.801 | 0.760 |
## Demonstration
```python
MODEL_NAME = "lgris/cetuc100-xlsr"
```
### Imports and dependencies
```python
%%capture
!pip install torch==1.8.2+cu111 torchvision==0.9.2+cu111 torchaudio===0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
!pip install datasets
!pip install jiwer
!pip install transformers
!pip install soundfile
!pip install pyctcdecode
!pip install https://github.com/kpu/kenlm/archive/master.zip
```
```python
import jiwer
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
from pyctcdecode import build_ctcdecoder
import torch
import re
import sys
```
### Helpers
```python
chars_to_ignore_regex = '[\,\?\.\!\;\:\"]' # noqa: W605
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = speech.squeeze(0).numpy()
batch["sampling_rate"] = 16_000
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
batch["target"] = batch["sentence"]
return batch
```
```python
def calc_metrics(truths, hypos):
wers = []
mers = []
wils = []
for t, h in zip(truths, hypos):
try:
wers.append(jiwer.wer(t, h))
mers.append(jiwer.mer(t, h))
wils.append(jiwer.wil(t, h))
        except Exception:  # skip empty reference strings
pass
wer = sum(wers)/len(wers)
mer = sum(mers)/len(mers)
wil = sum(wils)/len(wils)
return wer, mer, wil
```
```python
def load_data(dataset):
data_files = {'test': f'{dataset}/test.csv'}
dataset = load_dataset('csv', data_files=data_files)["test"]
return dataset.map(map_to_array)
```
### Model
```python
class STT:
def __init__(self,
model_name,
device='cuda' if torch.cuda.is_available() else 'cpu',
lm=None):
self.model_name = model_name
self.model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
self.processor = Wav2Vec2Processor.from_pretrained(model_name)
self.vocab_dict = self.processor.tokenizer.get_vocab()
self.sorted_dict = {
k.lower(): v for k, v in sorted(self.vocab_dict.items(),
key=lambda item: item[1])
}
self.device = device
self.lm = lm
if self.lm:
self.lm_decoder = build_ctcdecoder(
list(self.sorted_dict.keys()),
self.lm
)
def batch_predict(self, batch):
features = self.processor(batch["speech"],
sampling_rate=batch["sampling_rate"][0],
padding=True,
return_tensors="pt")
input_values = features.input_values.to(self.device)
attention_mask = features.attention_mask.to(self.device)
with torch.no_grad():
logits = self.model(input_values, attention_mask=attention_mask).logits
if self.lm:
logits = logits.cpu().numpy()
batch["predicted"] = []
for sample_logits in logits:
batch["predicted"].append(self.lm_decoder.decode(sample_logits))
else:
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = self.processor.batch_decode(pred_ids)
return batch
```
### Download datasets
```python
%%capture
!gdown --id 1HFECzIizf-bmkQRLiQD0QVqcGtOG5upI
!mkdir bp_dataset
!unzip bp_dataset -d bp_dataset/
```
### Tests
```python
stt = STT(MODEL_NAME)
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.44677581829220825
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.8561919899139065
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.08955808080808081
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.9670008790979718
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 1.1723738343632861
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.929976436317539
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.9020183982683985
### Tests with LM
```python
# !find -type f -name "*.wav" -delete
!rm -rf ~/.cache
!gdown --id 1GJIKseP5ZkTbllQVgOL98R4yYAcIySFP # trained with wikipedia
stt = STT(MODEL_NAME, lm='pt-BR-wiki.word.4-gram.arpa')
# !gdown --id 1dLFldy7eguPtyJj5OAlI4Emnx0BpFywg # trained with bp
# stt = STT(MODEL_NAME, lm='pt-BR.word.4-gram.arpa')
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.3396346663354827
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.7341013242719512
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.07612373737373737
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.960908940243212
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 1.188118540533579
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 1.2271077178339618
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.800196158008658
|
lgris/distilxlsr_bp_12-16 | 822ea8ac19791a891ad3c43e17d29e95c3847402 | 2021-12-30T00:37:12.000Z | [
"pytorch",
"wav2vec2",
"feature-extraction",
"pt",
"arxiv:2110.01900",
"transformers",
"speech",
"license:apache-2.0"
] | feature-extraction | false | lgris | null | lgris/distilxlsr_bp_12-16 | 2 | null | transformers | 24,392 | ---
language: pt
tags:
- speech
license: apache-2.0
---
# DistilXLSR-53 for BP
[DistilXLSR-53 for BP: DistilHuBERT applied to Wav2vec XLSR-53 for Brazilian Portuguese](https://github.com/s3prl/s3prl/tree/master/s3prl/upstream/distiller)
The base model was pretrained on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz.
**Note**: This model does not have a tokenizer, as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model.
Paper: [DistilHuBERT: Speech Representation Learning by Layer-wise Distillation of Hidden-unit BERT](https://arxiv.org/abs/2110.01900)
Authors: Heng-Jui Chang, Shu-wen Yang, Hung-yi Lee
**Note 2**: The XLSR-53 model was distilled using [Brazilian Portuguese datasets](https://huggingface.co/lgris/bp400-xlsr) for test purposes. The dataset is quite small for such a task, so the performance might not be as good as in the [original work](https://arxiv.org/abs/2110.01900).
**Abstract**
Self-supervised speech representation learning methods like wav2vec 2.0 and Hidden-unit BERT (HuBERT) leverage unlabeled speech data for pre-training and offer good representations for numerous speech processing tasks. Despite the success of these methods, they require large memory and high pre-training costs, making them inaccessible for researchers in academia and small companies. Therefore, this paper introduces DistilHuBERT, a novel multi-task learning framework to distill hidden representations from a HuBERT model directly. This method reduces HuBERT's size by 75% and makes it 73% faster while retaining most of its performance across ten different tasks. Moreover, DistilHuBERT requires little training time and data, opening up the possibility of pre-training personal and on-device SSL models for speech.
# Usage
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model.
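For quick inspection before fine-tuning, here is a minimal feature-extraction sketch; it assumes the distilled checkpoint loads with the standard Wav2Vec2 classes (as its tags suggest) and uses a placeholder audio file:

```python
import torch
import torchaudio
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

model_id = "lgris/distilxlsr_bp_12-16"
extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_id)
model = Wav2Vec2Model.from_pretrained(model_id)

speech, rate = torchaudio.load("amostra.wav")  # placeholder 16 kHz mono file
inputs = extractor(speech.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(inputs.input_values).last_hidden_state
print(hidden_states.shape)  # (batch, frames, hidden_size)
```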
|
lgris/wav2vec2-xls-r-300m-gn-cv8-3 | 67d6fa4c4fbe4425f8271ab0fdd5748b0c2c8f2e | 2022-03-24T11:53:57.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"gn",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | lgris | null | lgris/wav2vec2-xls-r-300m-gn-cv8-3 | 2 | null | transformers | 24,393 | ---
language:
- gn
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- gn
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-xls-r-300m-gn-cv8-3
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8.0
type: mozilla-foundation/common_voice_8_0
args: gn
metrics:
- name: Test WER
type: wer
value: 76.68
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-gn-cv8-3
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9517
- Wer: 0.8542
## Model description
More information needed
## Intended uses & limitations
More information needed
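No usage snippet is provided; a minimal, hypothetical sketch follows — note the high test WER reported on this card, so transcriptions will be rough:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="lgris/wav2vec2-xls-r-300m-gn-cv8-3")
print(asr("guarani_sample.wav"))  # placeholder 16 kHz recording
```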
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 19.9125 | 5.54 | 100 | 5.4279 | 1.0 |
| 3.8031 | 11.11 | 200 | 3.3070 | 1.0 |
| 3.3783 | 16.65 | 300 | 3.2450 | 1.0 |
| 3.3472 | 22.22 | 400 | 3.2424 | 1.0 |
| 3.2714 | 27.76 | 500 | 3.1100 | 1.0 |
| 3.2367 | 33.32 | 600 | 3.1091 | 1.0 |
| 3.1968 | 38.86 | 700 | 3.1013 | 1.0 |
| 3.2004 | 44.43 | 800 | 3.1173 | 1.0 |
| 3.1656 | 49.97 | 900 | 3.0682 | 1.0 |
| 3.1563 | 55.54 | 1000 | 3.0457 | 1.0 |
| 3.1356 | 61.11 | 1100 | 3.0139 | 1.0 |
| 3.086 | 66.65 | 1200 | 2.8108 | 1.0 |
| 2.954 | 72.22 | 1300 | 2.3238 | 1.0 |
| 2.6125 | 77.76 | 1400 | 1.6461 | 1.0 |
| 2.3296 | 83.32 | 1500 | 1.2834 | 0.9744 |
| 2.1345 | 88.86 | 1600 | 1.1091 | 0.9693 |
| 2.0346 | 94.43 | 1700 | 1.0273 | 0.9233 |
| 1.9611 | 99.97 | 1800 | 0.9642 | 0.9182 |
| 1.9066 | 105.54 | 1900 | 0.9590 | 0.9105 |
| 1.8178 | 111.11 | 2000 | 0.9679 | 0.9028 |
| 1.7799 | 116.65 | 2100 | 0.9007 | 0.8619 |
| 1.7726 | 122.22 | 2200 | 0.9689 | 0.8951 |
| 1.7389 | 127.76 | 2300 | 0.8876 | 0.8593 |
| 1.7151 | 133.32 | 2400 | 0.8716 | 0.8542 |
| 1.6842 | 138.86 | 2500 | 0.9536 | 0.8772 |
| 1.6449 | 144.43 | 2600 | 0.9296 | 0.8542 |
| 1.5978 | 149.97 | 2700 | 0.8895 | 0.8440 |
| 1.6515 | 155.54 | 2800 | 0.9162 | 0.8568 |
| 1.6586 | 161.11 | 2900 | 0.9039 | 0.8568 |
| 1.5966 | 166.65 | 3000 | 0.8627 | 0.8542 |
| 1.5695 | 172.22 | 3100 | 0.9549 | 0.8824 |
| 1.5699 | 177.76 | 3200 | 0.9332 | 0.8517 |
| 1.5297 | 183.32 | 3300 | 0.9163 | 0.8338 |
| 1.5367 | 188.86 | 3400 | 0.8822 | 0.8312 |
| 1.5586 | 194.43 | 3500 | 0.9217 | 0.8363 |
| 1.5429 | 199.97 | 3600 | 0.9564 | 0.8568 |
| 1.5273 | 205.54 | 3700 | 0.9508 | 0.8542 |
| 1.5043 | 211.11 | 3800 | 0.9374 | 0.8542 |
| 1.4724 | 216.65 | 3900 | 0.9622 | 0.8619 |
| 1.4794 | 222.22 | 4000 | 0.9550 | 0.8363 |
| 1.4843 | 227.76 | 4100 | 0.9577 | 0.8465 |
| 1.4781 | 233.32 | 4200 | 0.9543 | 0.8440 |
| 1.4507 | 238.86 | 4300 | 0.9553 | 0.8491 |
| 1.4997 | 244.43 | 4400 | 0.9728 | 0.8491 |
| 1.4371 | 249.97 | 4500 | 0.9543 | 0.8670 |
| 1.4825 | 255.54 | 4600 | 0.9636 | 0.8619 |
| 1.4187 | 261.11 | 4700 | 0.9609 | 0.8440 |
| 1.4363 | 266.65 | 4800 | 0.9567 | 0.8593 |
| 1.4463 | 272.22 | 4900 | 0.9581 | 0.8542 |
| 1.4117 | 277.76 | 5000 | 0.9517 | 0.8542 |
### Framework versions
- Transformers 4.16.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.11.0
|
lgris/wav2vec2-xls-r-300m-gn-cv8-4 | 7c40351e4ce4dfdd360a7c062c359e409338133f | 2022-03-24T11:54:00.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"gn",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | lgris | null | lgris/wav2vec2-xls-r-300m-gn-cv8-4 | 2 | null | transformers | 24,394 | ---
language:
- gn
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- gn
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-xls-r-300m-gn-cv8-4
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8.0
type: mozilla-foundation/common_voice_8_0
args: gn
metrics:
- name: Test WER
type: wer
value: 68.45
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-gn-cv8-4
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5805
- Wer: 0.7545
## Model description
More information needed
## Intended uses & limitations
More information needed
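A minimal, hypothetical transcription sketch for this Guarani checkpoint (the file name is a placeholder; 16 kHz audio and an available ffmpeg are assumed):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="lgris/wav2vec2-xls-r-300m-gn-cv8-4")
print(asr("guarani_sample.wav")["text"])  # greedy CTC decoding
```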
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 13000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 9.2216 | 16.65 | 300 | 3.2771 | 1.0 |
| 3.1804 | 33.32 | 600 | 2.2869 | 1.0 |
| 1.5856 | 49.97 | 900 | 0.9573 | 0.8772 |
| 1.0299 | 66.65 | 1200 | 0.9044 | 0.8082 |
| 0.8916 | 83.32 | 1500 | 0.9478 | 0.8056 |
| 0.8451 | 99.97 | 1800 | 0.8814 | 0.8107 |
| 0.7649 | 116.65 | 2100 | 0.9897 | 0.7826 |
| 0.7185 | 133.32 | 2400 | 0.9988 | 0.7621 |
| 0.6595 | 149.97 | 2700 | 1.0607 | 0.7749 |
| 0.6211 | 166.65 | 3000 | 1.1826 | 0.7877 |
| 0.59 | 183.32 | 3300 | 1.1060 | 0.7826 |
| 0.5383 | 199.97 | 3600 | 1.1826 | 0.7852 |
| 0.5205 | 216.65 | 3900 | 1.2148 | 0.8261 |
| 0.4786 | 233.32 | 4200 | 1.2710 | 0.7928 |
| 0.4482 | 249.97 | 4500 | 1.1943 | 0.7980 |
| 0.4149 | 266.65 | 4800 | 1.2449 | 0.8031 |
| 0.3904 | 283.32 | 5100 | 1.3100 | 0.7928 |
| 0.3619 | 299.97 | 5400 | 1.3125 | 0.7596 |
| 0.3496 | 316.65 | 5700 | 1.3699 | 0.7877 |
| 0.3277 | 333.32 | 6000 | 1.4344 | 0.8031 |
| 0.2958 | 349.97 | 6300 | 1.4093 | 0.7980 |
| 0.2883 | 366.65 | 6600 | 1.3296 | 0.7570 |
| 0.2598 | 383.32 | 6900 | 1.4026 | 0.7980 |
| 0.2564 | 399.97 | 7200 | 1.4847 | 0.8031 |
| 0.2408 | 416.65 | 7500 | 1.4896 | 0.8107 |
| 0.2266 | 433.32 | 7800 | 1.4232 | 0.7698 |
| 0.224 | 449.97 | 8100 | 1.5560 | 0.7903 |
| 0.2038 | 466.65 | 8400 | 1.5355 | 0.7724 |
| 0.1948 | 483.32 | 8700 | 1.4624 | 0.7621 |
| 0.1995 | 499.97 | 9000 | 1.5808 | 0.7724 |
| 0.1864 | 516.65 | 9300 | 1.5653 | 0.7698 |
| 0.18 | 533.32 | 9600 | 1.4868 | 0.7494 |
| 0.1689 | 549.97 | 9900 | 1.5379 | 0.7749 |
| 0.1624 | 566.65 | 10200 | 1.5936 | 0.7749 |
| 0.1537 | 583.32 | 10500 | 1.6436 | 0.7801 |
| 0.1455 | 599.97 | 10800 | 1.6401 | 0.7673 |
| 0.1437 | 616.65 | 11100 | 1.6069 | 0.7673 |
| 0.1452 | 633.32 | 11400 | 1.6041 | 0.7519 |
| 0.139 | 649.97 | 11700 | 1.5758 | 0.7545 |
| 0.1299 | 666.65 | 12000 | 1.5559 | 0.7545 |
| 0.127 | 683.32 | 12300 | 1.5776 | 0.7596 |
| 0.1264 | 699.97 | 12600 | 1.5790 | 0.7519 |
| 0.1209 | 716.65 | 12900 | 1.5805 | 0.7545 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
lgris/wav2vec2-xls-r-pt-cv7-from-bp400h | 93243ff5c3db076766c6b2ef28130ff81263b9f4 | 2022-03-23T18:34:00.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | lgris | null | lgris/wav2vec2-xls-r-pt-cv7-from-bp400h | 2 | null | transformers | 24,395 | ---
language:
- pt
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_7_0
- pt
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
license: apache-2.0
model-index:
- name: wav2vec2-xls-r-pt-cv7-from-bp400h
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: pt
metrics:
- name: Test WER
type: wer
value: 12.13
- name: Test CER
type: cer
value: 3.68
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: sv
metrics:
- name: Test WER
type: wer
value: 28.23
- name: Test CER
type: cer
value: 12.58
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: pt
metrics:
- name: Test WER
type: wer
value: 26.58
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: pt
metrics:
- name: Test WER
type: wer
value: 26.86
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-pt-cv7-from-bp400h
This model is a fine-tuned version of [lgris/bp_400h_xlsr2_300M](https://huggingface.co/lgris/bp_400h_xlsr2_300M) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1535
- Wer: 0.1254
## Model description
More information needed
## Intended uses & limitations
More information needed
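A minimal Brazilian Portuguese transcription sketch, assuming the standard pipeline works for this checkpoint (the file name is a placeholder for any 16 kHz recording):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="lgris/wav2vec2-xls-r-pt-cv7-from-bp400h",
)
print(asr("amostra_pt.wav"))  # placeholder recording
```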
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4991 | 0.13 | 100 | 0.1774 | 0.1464 |
| 0.4655 | 0.26 | 200 | 0.1884 | 0.1568 |
| 0.4689 | 0.39 | 300 | 0.2282 | 0.1672 |
| 0.4662 | 0.52 | 400 | 0.1997 | 0.1584 |
| 0.4592 | 0.65 | 500 | 0.1989 | 0.1663 |
| 0.4533 | 0.78 | 600 | 0.2004 | 0.1698 |
| 0.4391 | 0.91 | 700 | 0.1888 | 0.1642 |
| 0.4655 | 1.04 | 800 | 0.1921 | 0.1624 |
| 0.4138 | 1.17 | 900 | 0.1950 | 0.1602 |
| 0.374 | 1.3 | 1000 | 0.2077 | 0.1658 |
| 0.4064 | 1.43 | 1100 | 0.1945 | 0.1596 |
| 0.3922 | 1.56 | 1200 | 0.2069 | 0.1665 |
| 0.4226 | 1.69 | 1300 | 0.1962 | 0.1573 |
| 0.3974 | 1.82 | 1400 | 0.1919 | 0.1553 |
| 0.3631 | 1.95 | 1500 | 0.1854 | 0.1573 |
| 0.3797 | 2.08 | 1600 | 0.1902 | 0.1550 |
| 0.3287 | 2.21 | 1700 | 0.1926 | 0.1598 |
| 0.3568 | 2.34 | 1800 | 0.1888 | 0.1534 |
| 0.3415 | 2.47 | 1900 | 0.1834 | 0.1502 |
| 0.3545 | 2.6 | 2000 | 0.1906 | 0.1560 |
| 0.3344 | 2.73 | 2100 | 0.1804 | 0.1524 |
| 0.3308 | 2.86 | 2200 | 0.1741 | 0.1485 |
| 0.344 | 2.99 | 2300 | 0.1787 | 0.1455 |
| 0.309 | 3.12 | 2400 | 0.1773 | 0.1448 |
| 0.312 | 3.25 | 2500 | 0.1738 | 0.1440 |
| 0.3066 | 3.38 | 2600 | 0.1727 | 0.1417 |
| 0.2999 | 3.51 | 2700 | 0.1692 | 0.1436 |
| 0.2985 | 3.64 | 2800 | 0.1732 | 0.1430 |
| 0.3058 | 3.77 | 2900 | 0.1754 | 0.1402 |
| 0.2943 | 3.9 | 3000 | 0.1691 | 0.1379 |
| 0.2813 | 4.03 | 3100 | 0.1754 | 0.1376 |
| 0.2733 | 4.16 | 3200 | 0.1639 | 0.1363 |
| 0.2592 | 4.29 | 3300 | 0.1675 | 0.1349 |
| 0.2697 | 4.42 | 3400 | 0.1618 | 0.1360 |
| 0.2538 | 4.55 | 3500 | 0.1658 | 0.1348 |
| 0.2746 | 4.67 | 3600 | 0.1674 | 0.1325 |
| 0.2655 | 4.8 | 3700 | 0.1655 | 0.1319 |
| 0.2745 | 4.93 | 3800 | 0.1665 | 0.1316 |
| 0.2617 | 5.06 | 3900 | 0.1600 | 0.1311 |
| 0.2674 | 5.19 | 4000 | 0.1623 | 0.1311 |
| 0.237 | 5.32 | 4100 | 0.1591 | 0.1315 |
| 0.2669 | 5.45 | 4200 | 0.1584 | 0.1295 |
| 0.2476 | 5.58 | 4300 | 0.1572 | 0.1285 |
| 0.2445 | 5.71 | 4400 | 0.1580 | 0.1271 |
| 0.2207 | 5.84 | 4500 | 0.1567 | 0.1269 |
| 0.2289 | 5.97 | 4600 | 0.1536 | 0.1260 |
| 0.2438 | 6.1 | 4700 | 0.1530 | 0.1260 |
| 0.227 | 6.23 | 4800 | 0.1544 | 0.1249 |
| 0.2256 | 6.36 | 4900 | 0.1543 | 0.1254 |
| 0.2184 | 6.49 | 5000 | 0.1535 | 0.1254 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
|
liaad/srl-pt_xlmr-large | 233f898a561492b01c4f2543b40a383bb6c2dfcd | 2021-09-22T08:56:37.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"multilingual",
"pt",
"dataset:PropBank.Br",
"arxiv:2101.01213",
"transformers",
"xlm-roberta-large",
"semantic role labeling",
"finetuned",
"license:apache-2.0"
] | feature-extraction | false | liaad | null | liaad/srl-pt_xlmr-large | 2 | 1 | transformers | 24,396 | ---
language:
- multilingual
- pt
tags:
- xlm-roberta-large
- semantic role labeling
- finetuned
license: apache-2.0
datasets:
- PropBank.Br
metrics:
- F1 Measure
---
# XLM-R large fine-tuned on Portuguese semantic role labeling
## Model description
This model is [`xlm-roberta-large`](https://huggingface.co/xlm-roberta-large) fine-tuned on Portuguese semantic role labeling data. It is part of a project that produced the following models:
* [liaad/srl-pt_bertimbau-base](https://huggingface.co/liaad/srl-pt_bertimbau-base)
* [liaad/srl-pt_bertimbau-large](https://huggingface.co/liaad/srl-pt_bertimbau-large)
* [liaad/srl-pt_xlmr-base](https://huggingface.co/liaad/srl-pt_xlmr-base)
* [liaad/srl-pt_xlmr-large](https://huggingface.co/liaad/srl-pt_xlmr-large)
* [liaad/srl-pt_mbert-base](https://huggingface.co/liaad/srl-pt_mbert-base)
* [liaad/srl-en_xlmr-base](https://huggingface.co/liaad/srl-en_xlmr-base)
* [liaad/srl-en_xlmr-large](https://huggingface.co/liaad/srl-en_xlmr-large)
* [liaad/srl-en_mbert-base](https://huggingface.co/liaad/srl-en_mbert-base)
* [liaad/srl-enpt_xlmr-base](https://huggingface.co/liaad/srl-enpt_xlmr-base)
* [liaad/srl-enpt_xlmr-large](https://huggingface.co/liaad/srl-enpt_xlmr-large)
* [liaad/srl-enpt_mbert-base](https://huggingface.co/liaad/srl-enpt_mbert-base)
* [liaad/ud_srl-pt_bertimbau-large](https://huggingface.co/liaad/ud_srl-pt_bertimbau-large)
* [liaad/ud_srl-pt_xlmr-large](https://huggingface.co/liaad/ud_srl-pt_xlmr-large)
* [liaad/ud_srl-enpt_xlmr-large](https://huggingface.co/liaad/ud_srl-enpt_xlmr-large)
For more information, please see the accompanying article (see the BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Intended uses & limitations
#### How to use
To use the transformers portion of this model:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("liaad/srl-pt_xlmr-large")
model = AutoModel.from_pretrained("liaad/srl-pt_xlmr-large")
```
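For example, to obtain contextual embeddings from the fine-tuned encoder (a minimal sketch; the sentence is arbitrary, and this does not perform SRL decoding, which requires the extra layer from the project's github):
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("liaad/srl-pt_xlmr-large")
model = AutoModel.from_pretrained("liaad/srl-pt_xlmr-large")

sentence = "O menino comeu a maçã."   # arbitrary Portuguese example
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 1024) for XLM-R large
```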
To use the full SRL model (transformers portion + a decoding layer), refer to the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
#### Limitations and bias
- This model does not include a Tensorflow version: the `type_vocab_size` was changed (from 1 to 2), so the checkpoint cannot be easily converted to Tensorflow.
## Training procedure
The model was trained on the PropBank.Br datasets using 10-fold cross-validation. The 10 resulting models were tested on the held-out folds as well as on "Buscapé", a smaller out-of-domain opinion dataset. For more information, please see the accompanying article (see the BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
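As an illustration of this protocol only (the indices are hypothetical; the actual fold definitions live in the project's github), the 10-fold split can be set up as follows:
```python
from sklearn.model_selection import KFold

sentence_ids = list(range(100))  # placeholder for PropBank.Br sentence indices
kf = KFold(n_splits=10, shuffle=True, random_state=42)
for fold, (train_idx, test_idx) in enumerate(kf.split(sentence_ids)):
    # One model is fine-tuned per fold and evaluated on the held-out fold;
    # the same 10 models are also evaluated on the out-of-domain Buscapé set.
    print(f"fold {fold}: {len(train_idx)} train / {len(test_idx)} test")
```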
## Eval results
| Model Name | F<sub>1</sub> CV PropBank.Br (in domain) | F<sub>1</sub> Buscapé (out of domain) |
| --------------- | ------ | ----- |
| `srl-pt_bertimbau-base` | 76.30 | 73.33 |
| `srl-pt_bertimbau-large` | 77.42 | 74.85 |
| `srl-pt_xlmr-base` | 75.22 | 72.82 |
| `srl-pt_xlmr-large` | 77.59 | 73.84 |
| `srl-pt_mbert-base` | 72.76 | 66.89 |
| `srl-en_xlmr-base` | 66.59 | 65.24 |
| `srl-en_xlmr-large` | 67.60 | 64.94 |
| `srl-en_mbert-base` | 63.07 | 58.56 |
| `srl-enpt_xlmr-base` | 76.50 | 73.74 |
| `srl-enpt_xlmr-large` | **78.22** | 74.55 |
| `srl-enpt_mbert-base` | 74.88 | 69.19 |
| `ud_srl-pt_bertimbau-large` | 77.53 | 74.49 |
| `ud_srl-pt_xlmr-large` | 77.69 | 74.91 |
| `ud_srl-enpt_xlmr-large` | 77.97 | **75.05** |
### BibTeX entry and citation info
```bibtex
@misc{oliveira2021transformers,
title={Transformers and Transfer Learning for Improving Portuguese Semantic Role Labeling},
author={Sofia Oliveira and Daniel Loureiro and Alípio Jorge},
year={2021},
eprint={2101.01213},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
liandarizkia/bert-id-ner | bb8bfe290e8569a87499487bc1738a872c6a4e5f | 2021-08-03T12:50:13.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | liandarizkia | null | liandarizkia/bert-id-ner | 2 | null | transformers | 24,397 | NER dataset sourced from https://huggingface.co/datasets/id_nergrit_corpus
This model was built by fine-tuning a BERT transformer, achieving 92.61% accuracy and a 74.80% F1-score.
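A minimal usage sketch (this assumes the hosted checkpoint can be loaded for token classification, which is not documented in this card; if it only exposes a masked-LM head, the token-classification head will be freshly initialized):
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("liandarizkia/bert-id-ner")
model = AutoModelForTokenClassification.from_pretrained("liandarizkia/bert-id-ner")

ner = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(ner("Joko Widodo lahir di Surakarta."))  # arbitrary Indonesian example
```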
|
liangtaiwan/t5-v1_1-lm100k-xl | 1fda3ec66968fa3efd008f3a7c3f80df901683a3 | 2021-10-25T13:33:57.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | liangtaiwan | null | liangtaiwan/t5-v1_1-lm100k-xl | 2 | null | transformers | 24,398 | Entry not found |
liangtaiwan/t5-v1_1-lm100k-xxl | 912665b3687460549a88dfa275d29c0c7f7bf964 | 2021-10-25T17:26:40.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | liangtaiwan | null | liangtaiwan/t5-v1_1-lm100k-xxl | 2 | null | transformers | 24,399 | Entry not found |