modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
alireza7/ARMAN-MSR-persian-base-parsinlu-sentiment-movie | e97b744f510fdbd98184591b34232e24d78e57b7 | 2021-09-29T19:15:47.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/ARMAN-MSR-persian-base-parsinlu-sentiment-movie | 1 | null | transformers | 28,600 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-SS-100-persian-base-tebyan | 526dcd841fb148f38a3822dc41f3fea7422bd40a | 2021-09-29T19:22:16.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/ARMAN-SS-100-persian-base-tebyan | 1 | null | transformers | 28,601 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-SS-100-persian-base-voa-title | 36351e617ac6f81edd07fed3c6c3bd038f130d72 | 2021-09-29T19:22:22.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/ARMAN-SS-100-persian-base-voa-title | 1 | null | transformers | 28,602 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-SS-80-persian-base-parsinlu-multiple-choice | 08621ac29067639431e2c25be09f8b56de2870ab | 2021-09-29T19:22:50.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/ARMAN-SS-80-persian-base-parsinlu-multiple-choice | 1 | null | transformers | 28,603 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
am-shb/xlm-roberta-base-pretrained | 942e5ca123b9a41b35689ed823d882ad656caa4d | 2022-02-09T15:53:08.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | am-shb | null | am-shb/xlm-roberta-base-pretrained | 1 | null | transformers | 28,604 | ---
tags:
- generated_from_trainer
model-index:
- name: roberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4144
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 16
- seed: 1337
- gradient_accumulation_steps: 4
- total_train_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
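These fields correspond to the `transformers.TrainingArguments` consumed by the `Trainer`. A minimal sketch of the equivalent configuration, assuming library defaults for everything not listed; the `output_dir` value is a placeholder, not taken from the actual run:
```python
from transformers import TrainingArguments

# Hedged reconstruction of the hyperparameters listed above.
args = TrainingArguments(
    output_dir="roberta",            # placeholder name; the real run's path is unknown
    learning_rate=5e-5,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=16,
    seed=1337,
    gradient_accumulation_steps=4,   # 12 * 4 = 48 total train batch size
    lr_scheduler_type="linear",
    num_train_epochs=5.0,
    # Adam betas=(0.9,0.999) and epsilon=1e-08 match the defaults
    # (adam_beta1, adam_beta2, adam_epsilon), so they need no explicit setting.
)
```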
### Training results
### Framework versions
- Transformers 4.11.2
- Pytorch 1.10.0
- Datasets 1.8.0
- Tokenizers 0.10.3
|
aman21/DialoGPT-medium-Morty | 5ea203929fc7b93593f3bc6de41aea3334133946 | 2021-09-03T10:38:20.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | aman21 | null | aman21/DialoGPT-medium-Morty | 1 | null | transformers | 28,605 | ---
tags:
- conversation
--- |
ami-wav2vec2/ami-dummy-nithin | fd2ef3becd32ae2f3f53020e03f0d8eb912e31bb | 2021-10-14T07:47:52.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"en",
"transformers",
"ami",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ami-wav2vec2 | null | ami-wav2vec2/ami-dummy-nithin | 1 | null | transformers | 28,606 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- ami
- generated_from_trainer
model-index:
- name: ami-dummy-nithin
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ami-dummy-nithin
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the AMI-IHM dataset.
It achieves the following results on the evaluation set:
- Loss: 25.1441
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| No log | 1.24 | 15 | 85.3333 | 1.0 |
| No log | 2.48 | 30 | 43.9463 | 1.0 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1
- Datasets 1.12.2.dev0
- Tokenizers 0.10.3
|
ami-wav2vec2/ami-dummy-vumichien | 966f5299b7be5136bc7888644811ac72c4f8ba7f | 2021-10-22T05:35:13.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"transformers",
"ami",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ami-wav2vec2 | null | ami-wav2vec2/ami-dummy-vumichien | 1 | null | transformers | 28,607 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- ami
- generated_from_trainer
model-index:
- name: ami-dummy-vumichien
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ami-dummy-vumichien
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the AMI-IHM dataset.
It achieves the following results on the evaluation set:
- Loss: 90.3471
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1
- Datasets 1.13.3
- Tokenizers 0.10.3
|
ami-wav2vec2/wav2vec2-base-ami_multi-nithin2 | dd363a55338b00ef42ff3ae9cfa98cad5a9fc74c | 2021-10-17T05:29:27.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"en",
"transformers",
"ami",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ami-wav2vec2 | null | ami-wav2vec2/wav2vec2-base-ami_multi-nithin2 | 1 | null | transformers | 28,608 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- ami
- generated_from_trainer
model-index:
- name: wav2vec2-base-ami_multi-nithin2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-ami_multi-nithin2
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the AMI-IHM dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3235
- Wer: 0.4971
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 2.7645 | 1.07 | 2500 | 3.0172 | 0.9979 |
| 2.0313 | 2.13 | 5000 | 2.0832 | 0.5786 |
| 1.9158 | 3.2 | 7500 | 1.9347 | 0.5201 |
| 1.8579 | 4.27 | 10000 | 2.1931 | 0.4882 |
| 1.8222 | 5.33 | 12500 | 2.1480 | 0.4706 |
| 1.7784 | 6.4 | 15000 | 2.0791 | 0.4638 |
| 1.7736 | 7.47 | 17500 | 2.0789 | 0.4590 |
| 1.7471 | 8.53 | 20000 | 2.1862 | 0.4533 |
| 1.7264 | 9.6 | 22500 | 2.0762 | 0.4543 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1
- Datasets 1.12.2.dev0
- Tokenizers 0.10.3
|
ami-wav2vec2/wav2vec2-base-ami_multi-nithin4 | 426d7956576c225607db4f0ab3b4283c8d1069e9 | 2021-10-28T05:25:06.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"en",
"transformers",
"ami",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ami-wav2vec2 | null | ami-wav2vec2/wav2vec2-base-ami_multi-nithin4 | 1 | null | transformers | 28,609 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- ami
- generated_from_trainer
model-index:
- name: wav2vec2-base-ami_multi-nithin4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-ami_multi-nithin4
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the AMI-IHM dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0790
- Wer: 0.4478
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 2.8893 | 1.07 | 2500 | 3.7944 | 1.0000 |
| 2.0331 | 2.13 | 5000 | 2.0323 | 0.5840 |
| 1.9009 | 3.2 | 7500 | 1.8876 | 0.5173 |
| 1.8367 | 4.27 | 10000 | 2.1239 | 0.4847 |
| 1.8007 | 5.33 | 12500 | 1.9126 | 0.4684 |
| 1.743 | 6.4 | 15000 | 2.0750 | 0.4570 |
| 1.7329 | 7.47 | 17500 | 1.9226 | 0.4460 |
| 1.7013 | 8.53 | 20000 | 1.9677 | 0.4392 |
| 1.6674 | 9.6 | 22500 | 1.9064 | 0.4360 |
| 1.6568 | 10.67 | 25000 | 1.8144 | 0.4304 |
| 1.6507 | 11.73 | 27500 | 1.8881 | 0.4248 |
| 1.5973 | 12.8 | 30000 | 1.7907 | 0.4267 |
| 1.6316 | 13.87 | 32500 | 1.7567 | 0.4207 |
| 1.6053 | 14.93 | 35000 | 1.7838 | 0.4192 |
| 1.599 | 16.0 | 37500 | 1.8054 | 0.4181 |
| 1.5629 | 17.06 | 40000 | 1.7739 | 0.4135 |
| 1.6124 | 18.13 | 42500 | 2.0690 | 0.4138 |
| 1.5623 | 19.2 | 45000 | 1.9308 | 0.4144 |
| 1.5524 | 20.26 | 47500 | 1.8130 | 0.4121 |
| 1.5654 | 21.33 | 50000 | 1.8344 | 0.4131 |
| 1.5552 | 22.4 | 52500 | 1.9365 | 0.4116 |
| 1.5357 | 23.46 | 55000 | 1.9330 | 0.4114 |
| 1.534 | 24.53 | 57500 | 1.8155 | 0.4079 |
| 1.5333 | 25.6 | 60000 | 1.7895 | 0.4069 |
| 1.5315 | 26.66 | 62500 | 1.7903 | 0.4082 |
| 1.5174 | 27.73 | 65000 | 1.8356 | 0.4080 |
| 1.5209 | 28.8 | 67500 | 1.8147 | 0.4077 |
| 1.5696 | 29.86 | 70000 | 1.8219 | 0.4076 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1
- Datasets 1.12.2.dev0
- Tokenizers 0.10.3
|
ami-wav2vec2/wav2vec2-base-ami_multi-nithin7 | 9eb9265b7cb5989561935117c43b160001468a63 | 2021-11-12T04:49:01.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | ami-wav2vec2 | null | ami-wav2vec2/wav2vec2-base-ami_multi-nithin7 | 1 | null | transformers | 28,610 | Entry not found |
ami-wav2vec2/wav2vec2-base-ami_multi-tune_0.00005_4 | 6d85fc4c769b54839275a5d71f117fc84e25fd41 | 2021-11-08T10:41:37.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"en",
"transformers",
"ami",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ami-wav2vec2 | null | ami-wav2vec2/wav2vec2-base-ami_multi-tune_0.00005_4 | 1 | null | transformers | 28,611 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- ami
- generated_from_trainer
model-index:
- name: wav2vec2-base-tune_0.00005_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-tune_0.00005_4
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the AMI-IHM dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5405
- Wer: 0.4744
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 2.4032 | 0.86 | 1000 | 2.1379 | 0.8193 |
| 1.4611 | 1.72 | 2000 | 1.4984 | 0.5155 |
| 1.315 | 2.59 | 3000 | 1.4401 | 0.4707 |
| 1.2574 | 3.45 | 4000 | 1.3587 | 0.4559 |
| 1.1924 | 4.31 | 5000 | 1.3372 | 0.4450 |
| 1.1313 | 5.17 | 6000 | 1.3187 | 0.4351 |
| 1.0911 | 6.03 | 7000 | 1.3446 | 0.4354 |
| 1.0753 | 6.9 | 8000 | 1.3450 | 0.4396 |
| 1.0504 | 7.76 | 9000 | 1.3342 | 0.4378 |
| 1.0249 | 8.62 | 10000 | 1.3442 | 0.4335 |
| 1.0327 | 9.48 | 11000 | 1.3412 | 0.4293 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1
- Datasets 1.12.2.dev0
- Tokenizers 0.10.3
|
ami-wav2vec2/wav2vec2-base-ami_multi-tune_0.00005_8 | 27bd74261f63550935e22dd9fba87c5300be2356 | 2021-11-08T10:36:08.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"en",
"transformers",
"ami",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ami-wav2vec2 | null | ami-wav2vec2/wav2vec2-base-ami_multi-tune_0.00005_8 | 1 | null | transformers | 28,612 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- ami
- generated_from_trainer
model-index:
- name: wav2vec2-base-tune_0.00005_8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-tune_0.00005_8
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the AMI-IHM dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5701
- Wer: 0.4927
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.8189 | 1.72 | 1000 | 1.7820 | 0.6588 |
| 1.3459 | 3.45 | 2000 | 1.4136 | 0.4750 |
| 1.2262 | 5.17 | 3000 | 1.3611 | 0.4546 |
| 1.1661 | 6.9 | 4000 | 1.3832 | 0.4610 |
| 1.122 | 8.62 | 5000 | 1.3735 | 0.4485 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1
- Datasets 1.12.2.dev0
- Tokenizers 0.10.3
|
ami-wav2vec2/wav2vec2-base-ami_multi-tune_0.0001_4 | 569e9de6307c2df4fecc24ebc702e3eb502d750e | 2021-11-08T10:55:19.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"en",
"transformers",
"ami",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ami-wav2vec2 | null | ami-wav2vec2/wav2vec2-base-ami_multi-tune_0.0001_4 | 1 | null | transformers | 28,613 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- ami
- generated_from_trainer
model-index:
- name: wav2vec2-base-tune_0.0001_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-tune_0.0001_4
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the AMI-IHM dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5284
- Wer: 0.4735
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.971 | 0.86 | 1000 | 1.8257 | 0.6751 |
| 1.4062 | 1.72 | 2000 | 1.4239 | 0.4815 |
| 1.2763 | 2.59 | 3000 | 1.3776 | 0.4461 |
| 1.2106 | 3.45 | 4000 | 1.3215 | 0.4428 |
| 1.1394 | 4.31 | 5000 | 1.3168 | 0.4343 |
| 1.0651 | 5.17 | 6000 | 1.2975 | 0.4258 |
| 1.0268 | 6.03 | 7000 | 1.3086 | 0.4242 |
| 1.0056 | 6.9 | 8000 | 1.3209 | 0.4295 |
| 0.9655 | 7.76 | 9000 | 1.3159 | 0.4284 |
| 0.9283 | 8.62 | 10000 | 1.3286 | 0.4259 |
| 0.9244 | 9.48 | 11000 | 1.3411 | 0.4243 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1
- Datasets 1.12.2.dev0
- Tokenizers 0.10.3
|
ami-wav2vec2/wav2vec2-base-ami_multi-tune_0.0001_8 | 15c5312bcfcf14833e2e8d11b64c635380545292 | 2021-11-08T10:57:31.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"en",
"transformers",
"ami",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ami-wav2vec2 | null | ami-wav2vec2/wav2vec2-base-ami_multi-tune_0.0001_8 | 1 | null | transformers | 28,614 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- ami
- generated_from_trainer
model-index:
- name: wav2vec2-base-tune_0.0001_8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-tune_0.0001_8
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the AMI-IHM dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5750
- Wer: 0.4813
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.5458 | 1.72 | 1000 | 1.5351 | 0.5397 |
| 1.2552 | 3.45 | 2000 | 1.3582 | 0.4540 |
| 1.1246 | 5.17 | 3000 | 1.3412 | 0.4378 |
| 1.0614 | 6.9 | 4000 | 1.3356 | 0.4344 |
| 1.0007 | 8.62 | 5000 | 1.3410 | 0.4352 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1
- Datasets 1.12.2.dev0
- Tokenizers 0.10.3
|
ami-wav2vec2/wav2vec2-base-ami_multi-tune_0.0005_4 | 867d13382998f2aa2623852107d3157666c43a70 | 2021-11-08T10:50:42.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"en",
"transformers",
"ami",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ami-wav2vec2 | null | ami-wav2vec2/wav2vec2-base-ami_multi-tune_0.0005_4 | 1 | null | transformers | 28,615 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- ami
- generated_from_trainer
model-index:
- name: wav2vec2-base-tune_0.0005_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-tune_0.0005_4
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the AMI-IHM dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9286
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:---:|
| 3.0115 | 0.86 | 1000 | 3.8103 | 1.0 |
| 2.9818 | 1.72 | 2000 | 3.6096 | 1.0 |
| 2.9991 | 2.59 | 3000 | 3.6555 | 1.0 |
| 2.9914 | 3.45 | 4000 | 3.6829 | 1.0 |
| 2.9958 | 4.31 | 5000 | 3.5873 | 1.0 |
| 2.9921 | 5.17 | 6000 | 3.5026 | 1.0 |
| 3.0256 | 6.03 | 7000 | 3.5531 | 1.0 |
| 2.9892 | 6.9 | 8000 | 3.6803 | 1.0 |
| 2.9994 | 7.76 | 9000 | 3.5720 | 1.0 |
| 2.9796 | 8.62 | 10000 | 3.6583 | 1.0 |
| 2.9837 | 9.48 | 11000 | 3.6397 | 1.0 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1
- Datasets 1.12.2.dev0
- Tokenizers 0.10.3
|
ami-wav2vec2/wav2vec2-base-ami_multi-tune_0.0005_8 | 4b4cde96deb56d7c7c2e37e55e4e111b204dc7fe | 2021-11-08T10:52:56.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"en",
"transformers",
"ami",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ami-wav2vec2 | null | ami-wav2vec2/wav2vec2-base-ami_multi-tune_0.0005_8 | 1 | null | transformers | 28,616 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- ami
- generated_from_trainer
model-index:
- name: wav2vec2-base-tune_0.0005_8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-tune_0.0005_8
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the AMI-IHM dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5092
- Wer: 0.4821
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.5221 | 1.72 | 1000 | 1.6180 | 0.5266 |
| 1.3259 | 3.45 | 2000 | 1.4400 | 0.4921 |
| 1.1732 | 5.17 | 3000 | 1.3968 | 0.4669 |
| 1.0888 | 6.9 | 4000 | 1.3652 | 0.4569 |
| 0.9659 | 8.62 | 5000 | 1.3176 | 0.4332 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1
- Datasets 1.12.2.dev0
- Tokenizers 0.10.3
|
ami-wav2vec2/wav2vec2-large-lv60-ami_multi-tune_0.00005_16 | a57476cdfb0c8997cb7208bf9bdd30b656bf247e | 2021-11-18T19:20:43.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"en",
"transformers",
"ami",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ami-wav2vec2 | null | ami-wav2vec2/wav2vec2-large-lv60-ami_multi-tune_0.00005_16 | 1 | null | transformers | 28,617 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- ami
- generated_from_trainer
model-index:
- name: wav2vec2-large-lv60-ami_multi-tune_0.00005_16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-lv60-ami_multi-tune_0.00005_16
This model is a fine-tuned version of [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) on the AMI-IHM dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5257
- Wer: 0.4840
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.7983 | 1.72 | 1000 | 2.6819 | 0.9987 |
| 1.4 | 3.45 | 2000 | 1.3997 | 0.4810 |
| 1.2656 | 5.17 | 3000 | 1.3366 | 0.4491 |
| 1.2027 | 6.9 | 4000 | 1.3150 | 0.4385 |
| 1.1618 | 8.62 | 5000 | 1.3018 | 0.4348 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1
- Datasets 1.12.2.dev0
- Tokenizers 0.10.3
|
amitesh863/fin_embeds | c16cffe910da71896afaf86d9937907cb26f2ea1 | 2021-09-30T14:42:20.000Z | [
"pytorch",
"transformers"
] | null | false | amitesh863 | null | amitesh863/fin_embeds | 1 | null | transformers | 28,618 | Entry not found |
anas-awadalla/roberta-base-few-shot-k-128-finetuned-squad-seed-42 | d289393de4836514f393f1a117cfc57c7cb9f662 | 2022-02-21T21:29:23.000Z | [
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/roberta-base-few-shot-k-128-finetuned-squad-seed-42 | 1 | null | transformers | 28,619 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-base-few-shot-k-128-finetuned-squad-seed-42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-few-shot-k-128-finetuned-squad-seed-42
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (warmup-step arithmetic is sketched after this list):
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
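Because a warmup *ratio* is given here instead of a fixed step count, the scheduler derives its warmup length from the total step budget. A small sketch of that arithmetic, assuming the Trainer's usual ceiling rounding:
```python
import math

training_steps = 200
lr_scheduler_warmup_ratio = 0.1

# Linear warmup over the first ~10% of steps, then linear decay to zero.
warmup_steps = math.ceil(training_steps * lr_scheduler_warmup_ratio)
assert warmup_steps == 20
```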
### Training results
{'exact_match': 39.04446546830653, 'f1': 49.90230650794353}
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
anas-awadalla/spanbert-base-cased-few-shot-k-1024-finetuned-squad-seed-42 | 3959b64f7b1e21997a3969aee25cb1f179145012 | 2022-02-21T22:44:41.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/spanbert-base-cased-few-shot-k-1024-finetuned-squad-seed-42 | 1 | null | transformers | 28,620 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: spanbert-base-cased-few-shot-k-1024-finetuned-squad-seed-42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanbert-base-cased-few-shot-k-1024-finetuned-squad-seed-42
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
{'exact_match': 64.02081362346263, 'f1': 75.36439229517165}
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
andi611/distilbert-base-uncased-squad | 26b42bf3a50656230e75f135cd0210e3f6abc745 | 2021-07-15T00:45:07.000Z | [
"pytorch",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
] | question-answering | false | andi611 | null | andi611/distilbert-base-uncased-squad | 1 | null | transformers | 28,621 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model_index:
- name: distilbert-base-uncased-qa
results:
- task:
name: Question Answering
type: question-answering
dataset:
name: squad
type: squad
args: plain_text
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-qa
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
|
andi611/distilbert-base-uncased-squad2-with-ner-with-neg-with-repeat | c5bfc64309c183461bee935e0ea9d4ee94a03edf | 2021-08-14T13:58:51.000Z | [
"pytorch",
"distilbert",
"question-answering",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | question-answering | false | andi611 | null | andi611/distilbert-base-uncased-squad2-with-ner-with-neg-with-repeat | 1 | null | transformers | 28,622 | ---
tags:
- generated_from_trainer
datasets:
- conll2003
model_index:
- name: distilbert-base-uncased-squad2-with-ner-with-neg-with-repeat
results:
- task:
name: Question Answering
type: question-answering
dataset:
name: conll2003
type: conll2003
args: conll2003
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-squad2-with-ner-with-neg-with-repeat
This model is a fine-tuned version of [twmkn9/distilbert-base-uncased-squad2](https://huggingface.co/twmkn9/distilbert-base-uncased-squad2) on the conll2003 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
|
ange/DialoGPT-medium-Monke | 1fb0a62810680d983d283fee75a6641150c0e88c | 2022-01-03T15:15:35.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | ange | null | ange/DialoGPT-medium-Monke | 1 | null | transformers | 28,623 | ---
tags:
- conversational
---
# Monke Messenger DialoGPT Model |
ankimt01/DialoGPT-small-anch | 2e8b86ff8a4f28c25753f289b9f02170ea602c86 | 2022-02-16T17:40:36.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | ankimt01 | null | ankimt01/DialoGPT-small-anch | 1 | null | transformers | 28,624 | ---
tags:
- conversational
---
# myself DialoGPT Model |
ankitkupadhyay/dummy-model | c27c48af32c750682f3f9a5a22e850367845d6e6 | 2022-02-04T17:47:15.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | ankitkupadhyay | null | ankitkupadhyay/dummy-model | 1 | null | transformers | 28,625 | Entry not found |
anondo/test_anon | 169cd7687c51fe0d3ce05a73b57cc0125fa1a52d | 2022-02-09T11:04:14.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | anondo | null | anondo/test_anon | 1 | null | transformers | 28,626 | Entry not found |
anton-l/wav2vec2-base-960h | cd5d5f83554cf69c7df59deb1fb3e164dec8650a | 2021-07-05T19:38:21.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"transformers"
] | null | false | anton-l | null | anton-l/wav2vec2-base-960h | 1 | null | transformers | 28,627 | Entry not found |
anton-l/wav2vec2-large-xlsr-53-mongolian | 52113105371bd2df959d366f524103ae6d7ca09d | 2021-07-05T20:13:41.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"mn",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | anton-l | null | anton-l/wav2vec2-large-xlsr-53-mongolian | 1 | null | transformers | 28,628 | ---
language: mn
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Mongolian XLSR Wav2Vec2 Large 53 by Anton Lozhkov
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice mn
type: common_voice
args: mn
metrics:
- name: Test WER
type: wer
value: 38.53
---
# Wav2Vec2-Large-XLSR-53-Mongolian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Mongolian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "mn", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-mongolian")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-mongolian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Mongolian test data of Common Voice.
```python
import torch
import torchaudio
import urllib.request
import tarfile
import pandas as pd
from tqdm.auto import tqdm
from datasets import load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
# Download the raw data instead of using HF datasets to save disk space
data_url = "https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/mn.tar.gz"
filestream = urllib.request.urlopen(data_url)
data_file = tarfile.open(fileobj=filestream, mode="r|gz")
data_file.extractall()
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-mongolian")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-mongolian")
model.to("cuda")
cv_test = pd.read_csv("cv-corpus-6.1-2020-12-11/mn/test.tsv", sep='\t')
clips_path = "cv-corpus-6.1-2020-12-11/mn/clips/"
def clean_sentence(sent):
sent = sent.lower()
# replace non-alpha characters with space
sent = "".join(ch if ch.isalpha() else " " for ch in sent)
# remove repeated spaces
sent = " ".join(sent.split())
return sent
targets = []
preds = []
for i, row in tqdm(cv_test.iterrows(), total=cv_test.shape[0]):
row["sentence"] = clean_sentence(row["sentence"])
speech_array, sampling_rate = torchaudio.load(clips_path + row["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
row["speech"] = resampler(speech_array).squeeze().numpy()
inputs = processor(row["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
targets.append(row["sentence"])
preds.append(processor.batch_decode(pred_ids)[0])
print("WER: {:2f}".format(100 * wer.compute(predictions=preds, references=targets)))
```
**Test Result**: 38.53 %
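The score above is the word error rate computed by the `wer` metric in the snippet. For reference, WER is the word-level edit distance between hypothesis and reference:

$$
\mathrm{WER} = \frac{S + D + I}{N}
$$

where S, D, and I are the word substitutions, deletions, and insertions needed to turn the hypothesis into the reference, and N is the number of words in the reference.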
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
anton-l/wav2vec2-xls-r-common_voice-tr-ft | 40844e2b1a433b29a26f7a1396fe6324c0549459 | 2022-01-31T09:48:53.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"transformers",
"common_voice",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | anton-l | null | anton-l/wav2vec2-xls-r-common_voice-tr-ft | 1 | null | transformers | 28,629 | ---
language:
- tr
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
model-index:
- name: wav2vec2-xls-r-common_voice-tr-ft-500sh
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-common_voice-tr-ft-500sh
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5794
- Wer: 0.4009
- Cer: 0.1032
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (the effective batch size is derived in the sketch after this list):
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
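The effective batch size here comes from multiplying the per-device batch by the device count and the gradient-accumulation steps. A quick sanity check, assuming one optimizer step per fully accumulated batch:
```python
per_device_train_batch_size = 8
num_devices = 4
gradient_accumulation_steps = 2

# 8 samples/GPU * 4 GPUs * 2 accumulated micro-batches = 64 samples per optimizer step
total_train_batch_size = per_device_train_batch_size * num_devices * gradient_accumulation_steps
assert total_train_batch_size == 64

# Evaluation runs without accumulation: 8 samples/GPU * 4 GPUs = 32
per_device_eval_batch_size = 8
assert per_device_eval_batch_size * num_devices == 32
```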
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| 0.5288 | 17.0 | 500 | 0.5099 | 0.5426 | 0.1432 |
| 0.2967 | 34.0 | 1000 | 0.5421 | 0.4746 | 0.1256 |
| 0.2447 | 51.0 | 1500 | 0.5347 | 0.4831 | 0.1267 |
| 0.122 | 68.01 | 2000 | 0.5854 | 0.4479 | 0.1161 |
| 0.1035 | 86.0 | 2500 | 0.5597 | 0.4457 | 0.1166 |
| 0.081 | 103.0 | 3000 | 0.5748 | 0.4250 | 0.1144 |
| 0.0849 | 120.0 | 3500 | 0.5598 | 0.4337 | 0.1145 |
| 0.0542 | 137.01 | 4000 | 0.5687 | 0.4223 | 0.1097 |
| 0.0318 | 155.0 | 4500 | 0.5904 | 0.4057 | 0.1052 |
| 0.0106 | 172.0 | 5000 | 0.5794 | 0.4009 | 0.1032 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2
- Datasets 1.18.2
- Tokenizers 0.10.3
|
anuragshas/wav2vec2-large-xls-r-300m-hi | fdd9879255249a48cdfeb928b1265f103e583b41 | 2022-01-20T20:38:42.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | anuragshas | null | anuragshas/wav2vec2-large-xls-r-300m-hi | 1 | null | transformers | 28,630 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-hi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hi
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4156
- Wer: 0.7181
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.7703 | 2.72 | 400 | 2.2274 | 0.9259 |
| 0.6515 | 5.44 | 800 | 1.5812 | 0.7581 |
| 0.339 | 8.16 | 1200 | 2.0590 | 0.7825 |
| 0.2262 | 10.88 | 1600 | 2.0324 | 0.7603 |
| 0.1665 | 13.6 | 2000 | 2.1396 | 0.7481 |
| 0.1311 | 16.33 | 2400 | 2.2090 | 0.7379 |
| 0.1079 | 19.05 | 2800 | 2.3907 | 0.7612 |
| 0.0927 | 21.77 | 3200 | 2.5294 | 0.7478 |
| 0.0748 | 24.49 | 3600 | 2.5024 | 0.7452 |
| 0.0644 | 27.21 | 4000 | 2.4715 | 0.7307 |
| 0.0569 | 29.93 | 4400 | 2.4156 | 0.7181 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anuragshas/wav2vec2-large-xls-r-300m-ur | 68b9a689b9b3bb98f4e53538c307fc734bddb883 | 2022-01-21T04:32:18.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | anuragshas | null | anuragshas/wav2vec2-large-xls-r-300m-ur | 1 | null | transformers | 28,631 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-ur
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-ur
This model is a fine-tuned version of [anuragshas/wav2vec2-large-xls-r-300m-ur](https://huggingface.co/anuragshas/wav2vec2-large-xls-r-300m-ur) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0508
- Wer: 0.7328
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.12
- num_epochs: 240
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 0.0719 | 66.67 | 400 | 1.8510 | 0.7432 |
| 0.0284 | 133.33 | 800 | 2.0088 | 0.7415 |
| 0.014 | 200.0 | 1200 | 2.0508 | 0.7328 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anuragshas/wav2vec2-large-xlsr-53-hsb | 2247e1055eeba8978514d3858ca53be44a8f2f3a | 2021-07-05T20:57:25.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"hsb",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | anuragshas | null | anuragshas/wav2vec2-large-xlsr-53-hsb | 1 | null | transformers | 28,632 | ---
language: hsb
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Anurag Singh XLSR Wav2Vec2 Large 53 Sorbian, Upper
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice hsb
type: common_voice
args: hsb
metrics:
- name: Test WER
type: wer
value: 65.05
---
# Wav2Vec2-Large-XLSR-53-Sorbian, Upper
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Upper Sorbian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "hsb", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-hsb")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-hsb")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Upper Sorbian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "hsb", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-hsb")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-hsb")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\”\„\–\…\«\»]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 65.05 %
## Training
The Common Voice `train` and `validation` datasets were used for training. |
anuragshas/wav2vec2-large-xlsr-53-odia | 355811285d7023e86b669cd881e63a5f0c24ba0f | 2021-07-05T21:08:48.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"or",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | anuragshas | null | anuragshas/wav2vec2-large-xlsr-53-odia | 1 | null | transformers | 28,633 | ---
language: or
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Anurag Singh XLSR Wav2Vec2 Large 53 Odia
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice or
type: common_voice
args: or
metrics:
- name: Test WER
type: wer
value: 57.10
---
# Wav2Vec2-Large-XLSR-53-Odia
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Odia using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "or", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-odia")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-odia")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Odia test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "or", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-odia")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-odia")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 57.10 %
## Training
The Common Voice `train` and `validation` datasets were used for training. |
anuragshas/wav2vec2-xlsr-53-pa-in | 1aa6b15be1b2082ac86c63c097411045f841b4ac | 2021-07-05T21:47:48.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"pa-IN",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | anuragshas | null | anuragshas/wav2vec2-xlsr-53-pa-in | 1 | null | transformers | 28,634 | ---
language: pa-IN
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Anurag Singh XLSR Wav2Vec2 Large 53 Punjabi
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice pa-IN
type: common_voice
args: pa-IN
metrics:
- name: Test WER
type: wer
value: 58.05
---
# Wav2Vec2-Large-XLSR-53-Punjabi
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Punjabi using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "pa-IN", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-xlsr-53-pa-in")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-xlsr-53-pa-in")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Punjabi test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "pa-IN", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-xlsr-53-pa-in")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-xlsr-53-pa-in")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\।\’\'\…]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the preprocessed test set and decode the predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 58.05 %
## Training
The Common Voice `train` and `validation` datasets were used for training. |
anuragshas/wav2vec2-xlsr-53-tamil | b1843f913ddd58d76011ebdbf3f28733947351af | 2021-07-05T21:55:09.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"ta",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | anuragshas | null | anuragshas/wav2vec2-xlsr-53-tamil | 1 | null | transformers | 28,635 | ---
language: ta
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Anurag Singh XLSR Wav2Vec2 Large 53 Tamil
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ta
type: common_voice
args: ta
metrics:
- name: Test WER
type: wer
value: 71.87
---
# Wav2Vec2-Large-XLSR-53-Tamil
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Tamil using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ta", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-xlsr-53-tamil")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-xlsr-53-tamil")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Tamil test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ta", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-xlsr-53-tamil")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-xlsr-53-tamil")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\।\’\']'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the preprocessed test set and decode the predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 71.87 %
## Training
The Common Voice `train` and `validation` datasets were used for training. |
anushakamath/wav2vec2-xls-r-300m-punjabi-in | ea6e1c0c225b5550d21c37d8bb93684f68301b6c | 2022-02-08T16:59:54.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | anushakamath | null | anushakamath/wav2vec2-xls-r-300m-punjabi-in | 1 | null | transformers | 28,636 | Entry not found |
aodiniz/bert_uncased_L-10_H-512_A-8_cord19-200616_squad2_covid-qna | 4f28439f4867d4fc824e2b653e67d0b9bbaa90c2 | 2021-05-18T23:45:57.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | aodiniz | null | aodiniz/bert_uncased_L-10_H-512_A-8_cord19-200616_squad2_covid-qna | 1 | null | transformers | 28,637 | Entry not found |
aodiniz/bert_uncased_L-10_H-512_A-8_squad2_covid-qna | 7cb3b1b70734722ceff7f23f18ad924ad095b22f | 2021-05-18T23:47:06.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | aodiniz | null | aodiniz/bert_uncased_L-10_H-512_A-8_squad2_covid-qna | 1 | null | transformers | 28,638 | Entry not found |
aodiniz/bert_uncased_L-2_H-512_A-8_cord19-200616_squad2 | 60a9bca6ff41c96e9bf6a9b9ead63c0bfb2a3842 | 2021-05-18T23:49:22.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | aodiniz | null | aodiniz/bert_uncased_L-2_H-512_A-8_cord19-200616_squad2 | 1 | null | transformers | 28,639 | Entry not found |
aodiniz/bert_uncased_L-4_H-256_A-4_cord19-200616_squad2 | 401c629c9d1d448c5f92d308723d9dfaa6955f63 | 2021-05-18T23:51:46.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | aodiniz | null | aodiniz/bert_uncased_L-4_H-256_A-4_cord19-200616_squad2 | 1 | null | transformers | 28,640 | Entry not found |
aodiniz/bert_uncased_L-4_H-512_A-8_cord19-200616 | fd80c4df83c26e4d026ce88fc0f9ced740ef7810 | 2021-05-18T23:53:15.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | aodiniz | null | aodiniz/bert_uncased_L-4_H-512_A-8_cord19-200616 | 1 | null | transformers | 28,641 | Entry not found |
aodiniz/bert_uncased_L-6_H-128_A-2_cord19-200616 | f46c0227955b47f67b532ce34a64276d30035ec4 | 2021-05-18T23:58:53.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | aodiniz | null | aodiniz/bert_uncased_L-6_H-128_A-2_cord19-200616 | 1 | null | transformers | 28,642 | Entry not found |
aodiniz/bert_uncased_L-6_H-128_A-2_cord19-200616_squad2_covid-qna | 71958a25627dac4c32125638466b6d89caf224c8 | 2021-05-18T23:59:30.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | aodiniz | null | aodiniz/bert_uncased_L-6_H-128_A-2_cord19-200616_squad2_covid-qna | 1 | null | transformers | 28,643 | Entry not found |
aodiniz/bert_uncased_L-6_H-128_A-2_squad2_covid-qna | c10728df8ccee3dd4c2b226203fc7fd950c12f7b | 2021-05-19T00:00:06.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | aodiniz | null | aodiniz/bert_uncased_L-6_H-128_A-2_squad2_covid-qna | 1 | null | transformers | 28,644 | Entry not found |
aozorahime/my-new-model | 49177e8a1e3db05afd88a88316f2766fd2d1e3c4 | 2021-11-19T03:15:33.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:xsum",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | aozorahime | null | aozorahime/my-new-model | 1 | null | transformers | 28,645 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
model-index:
- name: my-new-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-new-model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the xsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
apeguero/wav2vec2-large-xls-r-300m-tr-colab-3 | 9f84e898b31551009359a0e0a6656bce17a08ca9 | 2021-11-23T02:27:27.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | apeguero | null | apeguero/wav2vec2-large-xls-r-300m-tr-colab-3 | 1 | null | transformers | 28,646 | Entry not found |
aplnestrella/Aladdin-Bot | da5b6e2fd852e1b71756d12befaf08116f074b13 | 2022-01-24T15:30:47.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | aplnestrella | null | aplnestrella/Aladdin-Bot | 1 | null | transformers | 28,647 | ---
tags:
- conversational
---
# Aladdin Bot |
arampacha/wav2vec2-xls-r-1b-hy-cv | b1960da0f376244f91860ce732c50ee8d7ff92f2 | 2022-03-24T11:51:56.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"hy-AM",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"hy",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | arampacha | null | arampacha/wav2vec2-xls-r-1b-hy-cv | 1 | null | transformers | 28,648 | ---
language:
- hy-AM
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- robust-speech-event
- hy
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-xls-r-1b-hy-cv
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: mozilla-foundation/common_voice_8_0
name: Common Voice hy-AM
args: hy-AM
metrics:
- type: wer
value: 0.2755659640905542
name: WER LM
- type: cer
value: 0.08659585230146687
name: CER LM
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-1b-hy-cv
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HY-AM dataset.
It achieves the following results on the evaluation set:
- Loss: **0.4521**
- Wer: **0.5141**
- Cer: **0.1100**
- Wer+LM: **0.2756**
- Cer+LM: **0.0866**
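
The `+LM` numbers above come from beam-search decoding with an n-gram language model. Below is a minimal decoding sketch, assuming the repository ships a kenlm model usable through `Wav2Vec2ProcessorWithLM` (requires `pyctcdecode`); the audio file name is a placeholder:

```python
import torch
import torchaudio
from transformers import AutoModelForCTC, Wav2Vec2ProcessorWithLM

processor = Wav2Vec2ProcessorWithLM.from_pretrained("arampacha/wav2vec2-xls-r-1b-hy-cv")
model = AutoModelForCTC.from_pretrained("arampacha/wav2vec2-xls-r-1b-hy-cv")

# "sample.wav" is a hypothetical recording; resample it to the expected 16kHz
speech, sr = torchaudio.load("sample.wav")
speech = torchaudio.functional.resample(speech, sr, 16_000).squeeze().numpy()

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
# batch_decode on the LM processor runs beam search against the n-gram LM
print(processor.batch_decode(logits.numpy()).text)
```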
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: tristage
- lr_scheduler_ratios: [0.1, 0.4, 0.5]
- training_steps: 1400
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| 6.1298 | 19.87 | 100 | 3.1204 | 1.0 | 1.0 |
| 2.7269 | 39.87 | 200 | 0.6200 | 0.7592 | 0.1755 |
| 1.4643 | 59.87 | 300 | 0.4796 | 0.5921 | 0.1277 |
| 1.1242 | 79.87 | 400 | 0.4637 | 0.5359 | 0.1145 |
| 0.9592 | 99.87 | 500 | 0.4521 | 0.5141 | 0.1100 |
| 0.8704 | 119.87 | 600 | 0.4736 | 0.4914 | 0.1045 |
| 0.7908 | 139.87 | 700 | 0.5394 | 0.5250 | 0.1124 |
| 0.7049 | 159.87 | 800 | 0.4822 | 0.4754 | 0.0985 |
| 0.6299 | 179.87 | 900 | 0.4890 | 0.4809 | 0.1028 |
| 0.5832 | 199.87 | 1000 | 0.5233 | 0.4813 | 0.1028 |
| 0.5145 | 219.87 | 1100 | 0.5350 | 0.4781 | 0.0994 |
| 0.4604 | 239.87 | 1200 | 0.5223 | 0.4715 | 0.0984 |
| 0.4226 | 259.87 | 1300 | 0.5167 | 0.4625 | 0.0953 |
| 0.3946 | 279.87 | 1400 | 0.5248 | 0.4614 | 0.0950 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
arampacha/wav2vec2-xls-r-1b-ka | 8c9615d1d7bb8e3209377d7142b84a1f5dbcf8a9 | 2022-03-24T11:51:59.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"ka",
"dataset:common_voice",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | arampacha | null | arampacha/wav2vec2-xls-r-1b-ka | 1 | null | transformers | 28,649 |
---
language:
- ka
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- robust-speech-event
- hf-asr-leaderboard
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-1b-ka
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: mozilla-foundation/common_voice_8_0
name: Common Voice ka
args: ka
metrics:
- type: wer
value: 7.39778066580026
name: WER LM
- type: cer
value: 1.1882089427096434
name: CER LM
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: ka
metrics:
- name: Test WER
type: wer
value: 22.61
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: ka
metrics:
- name: Test WER
type: wer
value: 21.58
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-1b-ka
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the /WORKSPACE/DATA/KA/NOIZY_STUDENT_2/ - KA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1022
- Wer: 0.1527
- Cer: 0.0221
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 1.2839 | 6.45 | 400 | 0.2229 | 0.3609 | 0.0557 |
| 0.9775 | 12.9 | 800 | 0.1271 | 0.2202 | 0.0317 |
| 0.9045 | 19.35 | 1200 | 0.1268 | 0.2030 | 0.0294 |
| 0.8652 | 25.8 | 1600 | 0.1211 | 0.1940 | 0.0287 |
| 0.8505 | 32.26 | 2000 | 0.1192 | 0.1912 | 0.0276 |
| 0.8168 | 38.7 | 2400 | 0.1086 | 0.1763 | 0.0260 |
| 0.7737 | 45.16 | 2800 | 0.1098 | 0.1753 | 0.0256 |
| 0.744 | 51.61 | 3200 | 0.1054 | 0.1646 | 0.0239 |
| 0.7114 | 58.06 | 3600 | 0.1034 | 0.1573 | 0.0228 |
| 0.6773 | 64.51 | 4000 | 0.1022 | 0.1527 | 0.0221 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
|
arampacha/wav2vec2-xls-r-300m-ka | 43b259481ada489eedd53f5908c184d57f86f91b | 2022-02-07T16:50:37.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | arampacha | null | arampacha/wav2vec2-xls-r-300m-ka | 1 | null | transformers | 28,650 | Entry not found |
aristotletan/bart-large-finetuned-xsum | b084aae3ff6f0112158768c2a1a5d0d89afe0eb8 | 2021-07-22T01:45:40.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"dataset:wsj_markets",
"transformers",
"generated_from_trainer",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | aristotletan | null | aristotletan/bart-large-finetuned-xsum | 1 | null | transformers | 28,651 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- wsj_markets
metrics:
- rouge
model_index:
- name: bart-large-finetuned-xsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wsj_markets
type: wsj_markets
args: default
metric:
name: Rouge1
type: rouge
value: 15.3934
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-finetuned-xsum
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the wsj_markets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8497
- Rouge1: 15.3934
- Rouge2: 7.0378
- Rougel: 13.9522
- Rougelsum: 14.3541
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.0964 | 1.0 | 1735 | 0.9365 | 18.703 | 12.7539 | 18.1293 | 18.5397 | 20.0 |
| 0.95 | 2.0 | 3470 | 0.8871 | 19.5223 | 13.0938 | 18.9148 | 18.8363 | 20.0 |
| 0.8687 | 3.0 | 5205 | 0.8587 | 15.0915 | 7.142 | 13.6693 | 14.5975 | 20.0 |
| 0.7989 | 4.0 | 6940 | 0.8569 | 18.243 | 11.4495 | 17.4326 | 17.489 | 20.0 |
| 0.7493 | 5.0 | 8675 | 0.8497 | 15.3934 | 7.0378 | 13.9522 | 14.3541 | 20.0 |
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Datasets 1.10.0
- Tokenizers 0.10.3
|
arredondos/my_sentence_transformer | 1b664ac982886f2b4ba02b2010b16d44734dd917 | 2022-02-08T13:10:36.000Z | [
"pytorch",
"bert",
"feature-extraction",
"en",
"arxiv:1904.06472",
"arxiv:2102.07033",
"arxiv:2104.08727",
"arxiv:1704.05179",
"arxiv:1810.09305",
"sentence-transformers",
"sentence-similarity",
"license:apache-2.0"
] | sentence-similarity | false | arredondos | null | arredondos/my_sentence_transformer | 1 | null | sentence-transformers | 28,652 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
language: en
license: apache-2.0
---
# all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
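
As a quick illustration of the semantic-search use case mentioned above, embeddings can be scored with cosine similarity via `sentence_transformers.util` (the sentences here are made up for the example):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')

# Score a query against two candidate passages
query_emb = model.encode("How do I bake bread at home?", convert_to_tensor=True)
doc_embs = model.encode(
    ["A beginner's guide to baking sourdough.", "The history of ancient Rome."],
    convert_to_tensor=True,
)
print(util.cos_sim(query_emb, doc_embs))  # higher score = more similar
```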
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-MiniLM-L6-v2)
------
## Background
The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised
contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned it on a
dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from the pair, the model should predict which, out of a set of randomly sampled other sentences, was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face, as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as guidance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures
the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
By default, input text longer than 256 word pieces is truncated.
## Training procedure
### Pre-training
We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.
### Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between each possible sentence pair in the batch.
We then apply the cross-entropy loss by comparing with the true pairs.
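
A minimal sketch of this in-batch objective (the authoritative implementation is in `train_script.py`; the similarity scale factor here is an assumption):

```python
import torch
import torch.nn.functional as F

def contrastive_loss(anchor_emb, positive_emb, scale=20.0):
    # Cosine similarity between every anchor and every candidate in the batch
    anchor_emb = F.normalize(anchor_emb, p=2, dim=1)
    positive_emb = F.normalize(positive_emb, p=2, dim=1)
    scores = anchor_emb @ positive_emb.T * scale  # (batch, batch)
    # The true pair for anchor i is candidate i; all other rows act as negatives
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)

loss = contrastive_loss(torch.randn(8, 384), torch.randn(8, 384))
```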
#### Hyper parameters
We trained our model on a TPU v3-8 for 100k steps using a batch size of 1024 (128 per TPU core).
We used a learning-rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is accessible in this repository: `train_script.py`.
#### Training data
We used a concatenation of multiple datasets to fine-tune our model. The total number of training pairs is above 1 billion.
We sampled each dataset with a weighted probability whose configuration is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples |
|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/)) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| **Total** | | **1,170,060,424** | |
arvalinno/albert-base-v2-finetuned-squad | 48b1bdab1fe50337d9e2eb4e0480507b9e124638 | 2021-11-20T12:05:42.000Z | [
"pytorch",
"tensorboard",
"albert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | arvalinno | null | arvalinno/albert-base-v2-finetuned-squad | 1 | null | transformers | 28,653 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: albert-base-v2-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-finetuned-squad
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3222
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1893 | 1.0 | 3052 | 0.2808 |
| 0.1209 | 2.0 | 6104 | 0.2787 |
| 0.069 | 3.0 | 9156 | 0.3222 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
asad/DialoGPT-small-harryporter_bot | 9d8c624fb477561b4afb2dd382b6f6d863aaa1ef | 2021-08-30T20:03:39.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | asad | null | asad/DialoGPT-small-harryporter_bot | 1 | null | transformers | 28,654 | ---
tags:
- conversational
---
# Harry Potter DialoGPT model |
asahi417/relbert-roberta-large-autoprompt | 906fae8581dcb3ab3c53e7de01baf8da450026e4 | 2021-07-05T13:44:37.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | asahi417 | null | asahi417/relbert-roberta-large-autoprompt | 1 | null | transformers | 28,655 | # RelBERT
RoBERTa fine-tuned with a contrastive loss for lexical relations. Please take a look at [the official repository](https://github.com/asahi417/relbert).
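
The repository linked above defines the exact prompting and pooling used by RelBERT; as a rough illustration only, the checkpoint can be loaded as a plain feature extractor (the analogy-style input and the mean pooling below are assumptions, not the official procedure):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("asahi417/relbert-roberta-large-autoprompt")
model = AutoModel.from_pretrained("asahi417/relbert-roberta-large-autoprompt")

# Encode a word-pair statement and mean-pool the hidden states into one vector
inputs = tokenizer("sunshine is to summer what snowfall is to winter", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (1, seq_len, hidden_size)
embedding = hidden.mean(dim=1).squeeze(0)
print(embedding.shape)
```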
|
asahi417/relbert-roberta-large-ptuning | 336eae9156d8c962a90b8047c6d23455cca8b78a | 2021-07-05T13:45:58.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | asahi417 | null | asahi417/relbert-roberta-large-ptuning | 1 | null | transformers | 28,656 | # RelBERT
RoBERTa fine-tuned with a contrastive loss for lexical relations. Please take a look at [the official repository](https://github.com/asahi417/relbert).
|
tner/xlm-roberta-base-bionlp2004 | 18200eff294ece59a4894c346903336dccc92a0b | 2021-02-12T23:32:10.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/xlm-roberta-base-bionlp2004 | 1 | null | transformers | 28,657 | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned on NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-bionlp2004")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-bionlp2004")
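# Illustrative inference with the token-classification pipeline
# (the sentence below is made up; aggregation_strategy needs transformers >= 4.7)
from transformers import pipeline
nlp = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(nlp("The protein interleukin-2 activates T cells."))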
``` |
tner/xlm-roberta-base-uncased-bionlp2004 | 89f696e19c878cef1fd15d9b573dcebef10306ab | 2021-02-12T23:35:21.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/xlm-roberta-base-uncased-bionlp2004 | 1 | null | transformers | 28,658 |
# XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned on NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-bionlp2004")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-bionlp2004")
``` |
tner/xlm-roberta-base-uncased-conll2003 | 391abb9d03797a8190b182f12dd9669eff533458 | 2021-02-13T00:08:16.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/xlm-roberta-base-uncased-conll2003 | 1 | null | transformers | 28,659 | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned on NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-conll2003")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-conll2003")
``` |
tner/xlm-roberta-base-uncased-fin | 2e0484b89059294ff8f294929349a4e327880da1 | 2021-02-12T23:47:27.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/xlm-roberta-base-uncased-fin | 1 | null | transformers | 28,660 | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned on NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-fin")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-fin")
``` |
tner/xlm-roberta-base-uncased-wnut2017 | 6256e02178a6a23df64ad33e5228f5e82c7a7599 | 2021-02-12T23:48:34.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/xlm-roberta-base-uncased-wnut2017 | 1 | null | transformers | 28,661 | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned on NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-wnut2017")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-wnut2017")
``` |
tner/xlm-roberta-base-wnut2017 | 246bce25ad65c4b94a613c35babda0e4871a5517 | 2021-02-13T00:10:57.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/xlm-roberta-base-wnut2017 | 1 | null | transformers | 28,662 | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned on NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-wnut2017")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-wnut2017")
``` |
tner/xlm-roberta-large-bionlp2004 | 6afa746cbaf1edca1f40c5a2ce44020d20ac4289 | 2021-02-13T00:04:14.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/xlm-roberta-large-bionlp2004 | 1 | null | transformers | 28,663 | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned on NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-bionlp2004")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-bionlp2004")
``` |
tner/xlm-roberta-large-conll2003 | 993cdb4505d73d8334d34c337fd4f431506a6f4c | 2021-02-13T00:11:10.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/xlm-roberta-large-conll2003 | 1 | null | transformers | 28,664 | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned on NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-conll2003")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-conll2003")
``` |
tner/xlm-roberta-large-panx-dataset-ko | eed173cccd2a276b418c3be9e7539de89fce2f78 | 2021-02-13T00:05:08.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/xlm-roberta-large-panx-dataset-ko | 1 | null | transformers | 28,665 | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned on NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-panx-dataset-ko")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-panx-dataset-ko")
``` |
tner/xlm-roberta-large-uncased-bionlp2004 | c828d3ee06bebaab5e825d51d4297c119934c4bf | 2021-02-13T00:05:40.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/xlm-roberta-large-uncased-bionlp2004 | 1 | null | transformers | 28,666 | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned on NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-bionlp2004")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-bionlp2004")
``` |
tner/xlm-roberta-large-uncased-conll2003 | 67880b85888913cc4c1f3932a63032706ae527e7 | 2021-02-13T00:11:51.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/xlm-roberta-large-uncased-conll2003 | 1 | null | transformers | 28,667 | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned on NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-conll2003")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-conll2003")
``` |
tner/xlm-roberta-large-uncased-mit-restaurant | 9261f8aa32fcfcdf75e582a842f324c4b3ac28a9 | 2021-02-13T00:06:06.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/xlm-roberta-large-uncased-mit-restaurant | 1 | null | transformers | 28,668 | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned on NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-mit-restaurant")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-mit-restaurant")
``` |
tner/xlm-roberta-large-uncased-panx-dataset-en | ac489112a1976f0676fc8983be6b1cd85b7dc68b | 2021-02-13T00:06:19.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/xlm-roberta-large-uncased-panx-dataset-en | 1 | null | transformers | 28,669 | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned on NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-panx-dataset-en")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-panx-dataset-en")
``` |
tner/xlm-roberta-large-wnut2017 | 6de5f0a4823ba9e61de7e89e1f8f7a42a8429549 | 2021-02-13T00:06:30.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/xlm-roberta-large-wnut2017 | 1 | null | transformers | 28,670 | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned on NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-wnut2017")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-wnut2017")
``` |
asakawa/distilgpt2-finetuned-wikitext2 | 065cc225fd17948f22fd53961c63e746706f299f | 2022-01-06T07:50:50.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | asakawa | null | asakawa/distilgpt2-finetuned-wikitext2 | 1 | null | transformers | 28,671 | Entry not found |
asakawa/gpt2-wikitext2 | 5f301c1ebacc8af38a5dd83d11e0f9e608cfa9fd | 2022-01-06T02:41:39.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | asakawa | null | asakawa/gpt2-wikitext2 | 1 | null | transformers | 28,672 | Entry not found |
asapp/sew-d-base-100k | baa619cd5e0d12bcd4ed2a31f37cb0813f52f04d | 2021-10-28T13:44:39.000Z | [
"pytorch",
"sew-d",
"feature-extraction",
"en",
"dataset:librispeech_asr",
"arxiv:2109.06870",
"transformers",
"speech",
"license:apache-2.0"
] | feature-extraction | false | asapp | null | asapp/sew-d-base-100k | 1 | null | transformers | 28,673 | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# SEW-D-base
[SEW-D by ASAPP Research](https://github.com/asappresearch/sew)
The base model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc.
Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi
**Abstract**
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
The original model can be found under https://github.com/asappresearch/sew#model-checkpoints .
# Usage
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWDForCTC`.
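
For plain feature extraction with this (not yet fine-tuned) checkpoint, here is a minimal sketch, assuming the repository ships a standard `preprocessor_config.json` (the dummy dataset is just a convenient audio source):

```python
import torch
from datasets import load_dataset
from transformers import Wav2Vec2FeatureExtractor, SEWDModel

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("asapp/sew-d-base-100k")
model = SEWDModel.from_pretrained("asapp/sew-d-base-100k")

# Load a short 16kHz speech sample
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
inputs = feature_extractor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (1, frames, hidden_size)
print(hidden_states.shape)
```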
|
asapp/sew-d-mid-400k | 0b74b6e7f270b1b95e27003d45f85ae105a15e62 | 2021-10-28T13:59:38.000Z | [
"pytorch",
"sew-d",
"feature-extraction",
"en",
"dataset:librispeech_asr",
"arxiv:2109.06870",
"transformers",
"speech",
"license:apache-2.0"
] | feature-extraction | false | asapp | null | asapp/sew-d-mid-400k | 1 | 1 | transformers | 28,674 | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# SEW-D-mid
[SEW-D by ASAPP Research](https://github.com/asappresearch/sew)
The base model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc.
Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi
**Abstract**
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
The original model can be found under https://github.com/asappresearch/sew#model-checkpoints .
# Usage
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWDForCTC`.
|
asapp/sew-d-mid-k127-100k | 094aae1a58c27399314f6db1b9d8bd628a66b758 | 2021-10-28T14:01:21.000Z | [
"pytorch",
"sew-d",
"feature-extraction",
"en",
"dataset:librispeech_asr",
"arxiv:2109.06870",
"transformers",
"speech",
"license:apache-2.0"
] | feature-extraction | false | asapp | null | asapp/sew-d-mid-k127-100k | 1 | null | transformers | 28,675 | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# SEW-D-mid
[SEW-D by ASAPP Research](https://github.com/asappresearch/sew)
The base model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc.
Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi
**Abstract**
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
The original model can be found under https://github.com/asappresearch/sew#model-checkpoints .
# Usage
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWDForCTC`.
|
asapp/sew-d-mid-k127-400k-ft-ls100h | a7cd98a3eca1f685a3223e5deae1bc2cce1f305d | 2022-05-24T13:09:50.000Z | [
"pytorch",
"sew-d",
"automatic-speech-recognition",
"en",
"dataset:librispeech_asr",
"arxiv:2109.06870",
"transformers",
"audio",
"speech",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | asapp | null | asapp/sew-d-mid-k127-400k-ft-ls100h | 1 | null | transformers | 28,676 | ---
language: en
datasets:
- librispeech_asr
tags:
- audio
- speech
- automatic-speech-recognition
- hf-asr-leaderboard
license: apache-2.0
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: sew-d-mid-k127-400k-ft-ls100h
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 4.99
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 10.95
---
# SEW-D-mid-k127
[SEW-D by ASAPP Research](https://github.com/asappresearch/sew)
The base model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc.
Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi
**Abstract**
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
The original model can be found under https://github.com/asappresearch/sew#model-checkpoints .
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, SEWDForCTC
from datasets import load_dataset
import soundfile as sf
import torch
# load the model and preprocessor
processor = Wav2Vec2Processor.from_pretrained("asapp/sew-d-mid-k127-400k-ft-ls100h")
model = SEWDForCTC.from_pretrained("asapp/sew-d-mid-k127-400k-ft-ls100h")
# load the dummy dataset with speech samples
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# preprocess
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt").input_values # Batch size 1
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
## Evaluation
This code snippet shows how to evaluate **asapp/sew-d-mid-k127-400k-ft-ls100h** on LibriSpeech's "clean" and "other" test data.
```python
from datasets import load_dataset
from transformers import SEWDForCTC, Wav2Vec2Processor
import torch
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")  # use "other" for the second test set
model = SEWDForCTC.from_pretrained("asapp/sew-d-mid-k127-400k-ft-ls100h").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("asapp/sew-d-mid-k127-400k-ft-ls100h")
def map_to_pred(batch):
input_values = processor(batch["audio"][0]["array"], sampling_rate=16000,
return_tensors="pt", padding="longest").input_values
with torch.no_grad():
logits = model(input_values.to("cuda")).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
batch["transcription"] = transcription
return batch
result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"])
print("WER:", wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
| --- | --- |
| 4.99 | 10.95 |
|
asapp/sew-d-mid-k127-400k | 051e112e1e7b4fb16b156167e06cc54eb395bbc2 | 2021-10-28T14:04:35.000Z | [
"pytorch",
"sew-d",
"feature-extraction",
"en",
"dataset:librispeech_asr",
"arxiv:2109.06870",
"transformers",
"speech",
"license:apache-2.0"
] | feature-extraction | false | asapp | null | asapp/sew-d-mid-k127-400k | 1 | null | transformers | 28,677 | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# SEW-D-mid-k127
[SEW-D by ASAPP Research](https://github.com/asappresearch/sew)
The base model pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc.
Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi
**Abstract**
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
The original model can be found under https://github.com/asappresearch/sew#model-checkpoints .
# Usage
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWDForCTC`.
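As a rough sketch, frame-level features can also be extracted from the pretrained checkpoint directly. The snippet below is a minimal example, assuming the repository ships a compatible preprocessor config; the dummy dataset is borrowed from the fine-tuned model's card.
```python
import torch
from datasets import load_dataset
from transformers import Wav2Vec2FeatureExtractor, SEWDModel

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("asapp/sew-d-mid-k127-400k")
model = SEWDModel.from_pretrained("asapp/sew-d-mid-k127-400k")

# load a 16kHz speech sample and compute frame-level hidden states
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
inputs = feature_extractor(ds[0]["audio"]["array"], sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # shape: (batch, frames, hidden_size)
```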
|
asheads/PredreamBERT | 00546ec02ac6a8c2ca8223d528ae1e187a531d06 | 2022-02-19T17:13:42.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | asheads | null | asheads/PredreamBERT | 1 | null | transformers | 28,678 | Entry not found |
ashwani-tanwar/Gujarati-XLM-R-Base | 892ae30c8b57428e02c60ba95fbfc9a26a5cd5e1 | 2020-12-11T21:34:15.000Z | [
"pytorch",
"tf",
"xlm-roberta",
"fill-mask",
"gu",
"transformers",
"autotrain_compatible"
] | fill-mask | false | ashwani-tanwar | null | ashwani-tanwar/Gujarati-XLM-R-Base | 1 | null | transformers | 28,679 | ---
language: gu
---
# Gujarati-XLM-R-Base
This model fine-tunes the base variant of [XLM-RoBERTa](https://huggingface.co/xlm-roberta-base) (XLM-R) on Gujarati text from the [OSCAR](https://oscar-corpus.com/) monolingual corpus. We used the same masked language modelling (MLM) objective that was used for pretraining XLM-R, so the model leverages *transfer learning* from the knowledge already encoded in its parent model.
## Dataset
The OSCAR corpus contains diverse datasets for many languages. We followed the work of [CamemBERT](https://www.aclweb.org/anthology/2020.acl-main.645/), which reported better performance with this diverse corpus than with larger but more homogeneous datasets.
## Preprocessing and Training Procedure
Please visit [this link](https://github.com/ashwanitanwar/nmt-transfer-learning-xlm-r#6-finetuning-xlm-r) for the detailed procedure.
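As a rough illustration of that procedure, a minimal MLM fine-tuning loop over the Gujarati OSCAR split might look like the sketch below. The OSCAR config name, sequence length, and hyperparameters here are assumptions; the exact settings are in the linked repository.
```
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")

# Gujarati portion of OSCAR (config name assumed)
ds = load_dataset("oscar", "unshuffled_deduplicated_gu", split="train")
ds = ds.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
            batched=True, remove_columns=ds.column_names)

# dynamic masking with the standard 15% probability
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
args = TrainingArguments(output_dir="gujarati-xlmr", per_device_train_batch_size=8, num_train_epochs=1)
Trainer(model=model, args=args, train_dataset=ds, data_collator=collator).train()
```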
## Usage
- This model can be used for further finetuning for different NLP tasks using the Gujarati language.
- It can be used to generate contextualised word representations for the Gujarati words.
- It can be used for domain adaptation.
- It can be used to predict the missing words from the Gujarati sentences.
## Demo
### Using the model to predict missing words
```
from transformers import pipeline
unmasker = pipeline('fill-mask', model='ashwani-tanwar/Gujarati-XLM-R-Base')
pred_word = unmasker("અમદાવાદ એ ગુજરાતનું એક <mask> છે.")
print(pred_word)
```
```
[{'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક શહેર છે.</s>', 'score': 0.9463568329811096, 'token': 85227, 'token_str': '▁શહેર'},
{'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક ગામ છે.</s>', 'score': 0.013311690650880337, 'token': 66346, 'token_str': '▁ગામ'},
{'sequence': '<s> અમદાવાદ એ ગુજરાતનું એકનગર છે.</s>', 'score': 0.012945962138473988, 'token': 69702, 'token_str': 'નગર'},
{'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક સ્થળ છે.</s>', 'score': 0.0045941537246108055, 'token': 135436, 'token_str': '▁સ્થળ'},
{'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક મહત્વ છે.</s>', 'score': 0.00402021361514926, 'token': 126763, 'token_str': '▁મહત્વ'}]
```
### Using the model to generate contextualised word representations
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("ashwani-tanwar/Gujarati-XLM-R-Base")
model = AutoModel.from_pretrained("ashwani-tanwar/Gujarati-XLM-R-Base")
sentence = "અમદાવાદ એ ગુજરાતનું એક શહેર છે."
encoded_sentence = tokenizer(sentence, return_tensors='pt')
context_word_rep = model(**encoded_sentence)  # use context_word_rep.last_hidden_state for per-token vectors
```
|
ashwani-tanwar/Gujarati-XLM-R-Large | 0d969e4113b2ba5dc4dd10b726e1ea97ae9a9f85 | 2020-12-12T01:39:10.000Z | [
"pytorch",
"tf",
"xlm-roberta",
"fill-mask",
"gu",
"transformers",
"autotrain_compatible"
] | fill-mask | false | ashwani-tanwar | null | ashwani-tanwar/Gujarati-XLM-R-Large | 1 | null | transformers | 28,680 | ---
language: gu
---
# Gujarati-XLM-R-Large
This model fine-tunes the large variant of [XLM-RoBERTa](https://huggingface.co/xlm-roberta-large) (XLM-R) on Gujarati text from the [OSCAR](https://oscar-corpus.com/) monolingual corpus. We used the same masked language modelling (MLM) objective that was used for pretraining XLM-R, so the model leverages *transfer learning* from the knowledge already encoded in its parent model.
## Dataset
The OSCAR corpus contains diverse datasets for many languages. We followed the work of [CamemBERT](https://www.aclweb.org/anthology/2020.acl-main.645/), which reported better performance with this diverse corpus than with larger but more homogeneous datasets.
## Preprocessing and Training Procedure
Please visit [this link](https://github.com/ashwanitanwar/nmt-transfer-learning-xlm-r#6-finetuning-xlm-r) for the detailed procedure.
## Usage
- This model can be used for further finetuning for different NLP tasks using the Gujarati language.
- It can be used to generate contextualised word representations for the Gujarati words.
- It can be used for domain adaptation.
- It can be used to predict the missing words from the Gujarati sentences.
## Demo
### Using the model to predict missing words
```
from transformers import pipeline
unmasker = pipeline('fill-mask', model='ashwani-tanwar/Gujarati-XLM-R-Large')
pred_word = unmasker("અમદાવાદ એ ગુજરાતનું એક <mask> છે.")
print(pred_word)
```
```
[{'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક શહેર છે.</s>', 'score': 0.9790881276130676, 'token': 85227, 'token_str': '▁શહેર'},
{'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક રાજ્ય છે.</s>', 'score': 0.004246668424457312, 'token': 63678, 'token_str': '▁રાજ્ય'},
{'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક ગામ છે.</s>', 'score': 0.0038021174259483814, 'token': 66346, 'token_str': '▁ગામ'},
{'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક મહત્વ છે.</s>', 'score': 0.002798238070681691, 'token': 126763, 'token_str': '▁મહત્વ'},
{'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક અમદાવાદ છે.</s>', 'score': 0.0021192911081016064, 'token': 69499, 'token_str': '▁અમદાવાદ'}]
```
### Using the model to generate contextualised word representations
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("ashwani-tanwar/Gujarati-XLM-R-Large")
model = AutoModel.from_pretrained("ashwani-tanwar/Gujarati-XLM-R-Large")
sentence = "અમદાવાદ એ ગુજરાતનું એક શહેર છે."
encoded_sentence = tokenizer(sentence, return_tensors='pt')
context_word_rep = model(**encoded_sentence)  # use context_word_rep.last_hidden_state for per-token vectors
```
|
ashwani-tanwar/Gujarati-in-Devanagari-XLM-R-Base | 7ccc8fe2e10d5840dda04fb01e5794ce0dd7db9e | 2020-12-12T02:22:48.000Z | [
"pytorch",
"tf",
"xlm-roberta",
"fill-mask",
"gu",
"transformers",
"autotrain_compatible"
] | fill-mask | false | ashwani-tanwar | null | ashwani-tanwar/Gujarati-in-Devanagari-XLM-R-Base | 1 | null | transformers | 28,681 | ---
language: gu
---
# Gujarati-in-Devanagari-XLM-R-Base
This model fine-tunes the base variant of [XLM-RoBERTa](https://huggingface.co/xlm-roberta-base) (XLM-R) on Gujarati text from the [OSCAR](https://oscar-corpus.com/) monolingual corpus, with the Gujarati script first converted to Devanagari using the [Indic-NLP](https://github.com/anoopkunchukuttan/indic_nlp_library) library. For example, the sentence 'અમદાવાદ એ ગુજરાતનું એક શહેર છે.' was converted to 'अमदावाद ए गुजरातनुं एक शहेर छे.'. This conversion yields better contextualised representations for some words, since XLM-R was pre-trained on several languages written in the Devanagari script, such as Hindi, Marathi, and Sanskrit; a sketch of the conversion step is shown below.
We used the same masked language modelling (MLM) objective that was used for pretraining XLM-R, so the model leverages *transfer learning* from the knowledge already encoded in its parent model.
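A minimal sketch of that conversion step with the Indic NLP library follows; the API shown is taken from the library's transliteration module and should be verified against your installed version.
```
from indicnlp.transliterate.unicode_transliterate import UnicodeIndicTransliterator

gu_sentence = "અમદાવાદ એ ગુજરાતનું એક શહેર છે."
# transliterate from Gujarati ("gu") to Devanagari, using Hindi ("hi") as the target script
dev_sentence = UnicodeIndicTransliterator.transliterate(gu_sentence, "gu", "hi")
print(dev_sentence)  # अमदावाद ए गुजरातनुं एक शहेर छे.
```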
## Dataset
The OSCAR corpus contains diverse datasets for many languages. We followed the work of [CamemBERT](https://www.aclweb.org/anthology/2020.acl-main.645/), which reported better performance with this diverse corpus than with larger but more homogeneous datasets.
## Preprocessing and Training Procedure
Please visit [this link](https://github.com/ashwanitanwar/nmt-transfer-learning-xlm-r#6-finetuning-xlm-r) for the detailed procedure.
## Usage
- This model can be used for further finetuning for different NLP tasks using the Gujarati language.
- It can be used to generate contextualised word representations for the Gujarati words.
- It can be used for domain adaptation.
- It can be used to predict the missing words from the Gujarati sentences.
## Demo
### Using the model to predict missing words
```
from transformers import pipeline
unmasker = pipeline('fill-mask', model='ashwani-tanwar/Gujarati-in-Devanagari-XLM-R-Base')
pred_word = unmasker("अमदावाद ए गुजरातनुं एक <mask> छे.")
print(pred_word)
```
```
[{'sequence': '<s> अमदावाद ए गुजरातनुं एक नगर छे.</s>', 'score': 0.24843722581863403, 'token': 18576, 'token_str': '▁नगर'},
{'sequence': '<s> अमदावाद ए गुजरातनुं एक महानगर छे.</s>', 'score': 0.21455222368240356, 'token': 122519, 'token_str': '▁महानगर'},
{'sequence': '<s> अमदावाद ए गुजरातनुं एक राज्य छे.</s>', 'score': 0.16832049190998077, 'token': 10665, 'token_str': '▁राज्य'},
{'sequence': '<s> अमदावाद ए गुजरातनुं एक जिल्ला छे.</s>', 'score': 0.06764694303274155, 'token': 20396, 'token_str': '▁जिल्ला'},
{'sequence': '<s> अमदावाद ए गुजरातनुं एक शहर छे.</s>', 'score': 0.05364946648478508, 'token': 22770, 'token_str': '▁शहर'}]
```
### Using the model to generate contextualised word representations
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("ashwani-tanwar/Gujarati-in-Devanagari-XLM-R-Base")
model = AutoModel.from_pretrained("ashwani-tanwar/Gujarati-in-Devanagari-XLM-R-Base")
sentence = "अमदावाद ए गुजरातनुं एक शहेर छे."
encoded_sentence = tokenizer(sentence, return_tensors='pt')
context_word_rep = model(**encoded_sentence)
```
|
asifm43/bert-bn | 42bc072fbabd8c4db2e1868c40ccb8a6fa4c13d1 | 2022-01-15T12:22:19.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | asifm43 | null | asifm43/bert-bn | 1 | null | transformers | 28,682 | Entry not found |
astrobreazy/DialoGPT-small-harrypotter | 23b144f02ae3b8adfa2a147c4773b42d8e075ba2 | 2022-02-14T05:56:06.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | astrobreazy | null | astrobreazy/DialoGPT-small-harrypotter | 1 | null | transformers | 28,683 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
aszidon/distilbertcustom3 | debe9b61f0462ada8cb91906e4844fa6289c85f2 | 2021-11-06T03:47:59.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | aszidon | null | aszidon/distilbertcustom3 | 1 | null | transformers | 28,684 | Entry not found |
aszidon/distilbertcustom4 | 692eff779e58d364612755f3966839ac3f833377 | 2021-11-08T01:33:03.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | aszidon | null | aszidon/distilbertcustom4 | 1 | null | transformers | 28,685 | Entry not found |
atharvapatil128/JakeBot | 1b4b047ea979a504ebe17be359eea0b109ceebb8 | 2021-12-03T05:23:01.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | atharvapatil128 | null | atharvapatil128/JakeBot | 1 | null | transformers | 28,686 | Entry not found |
atomsspawn/DialoGPT-small-dumbledore | df931add605ef655a01f71faec7bc3792b941f8b | 2022-04-12T20:36:23.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | atomsspawn | null | atomsspawn/DialoGPT-small-dumbledore | 1 | null | transformers | 28,687 | ---
tags:
- conversational
---
# Dumbledore DialoGPT Model |
augustojaba/DialoGPT-small-harrypotter | 3a6d061458037d880dac4104c31fe1698b0782b9 | 2021-09-02T00:59:12.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | augustojaba | null | augustojaba/DialoGPT-small-harrypotter | 1 | null | transformers | 28,688 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
avichr/ar_hd | 0588e15d88ed83d122a251f10f77954a99c61cd8 | 2021-05-19T12:01:47.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | avichr | null | avichr/ar_hd | 1 | null | transformers | 28,689 | Entry not found |
aws-ai/pairsupcon-bert-large-uncased | fbf1fb66e0799bf7fc8f925d4e37db8c2e9dd100 | 2021-12-18T19:41:42.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | aws-ai | null | aws-ai/pairsupcon-bert-large-uncased | 1 | null | transformers | 28,690 | Entry not found |
awvik360/DialoGPT-small-plemons | 2b0582b02026c1e021ccec224aede5e7fa0d08a9 | 2021-06-19T23:55:05.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | awvik360 | null | awvik360/DialoGPT-small-plemons | 1 | null | transformers | 28,691 | ---
tags:
- conversational
---
# My Awesome Model |
azwierzc/plt5-small-pl-to-sql | 5dab53061da5adad2cb4383c7a54271c36a1a0cd | 2022-02-13T19:42:52.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | azwierzc | null | azwierzc/plt5-small-pl-to-sql | 1 | null | transformers | 28,692 | Entry not found |
b0shakk/DialoGPT-small-Ragnar | cf9b36646f1d5ae7ba11727291d7ea022515e835 | 2021-08-31T07:39:55.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | b0shakk | null | b0shakk/DialoGPT-small-Ragnar | 1 | null | transformers | 28,693 | ---
tags:
- conversational
---
# Ragnar Lothbrok DialoGPT Model |
bagdaebhishek/IndianPoliticalTweetsLMMedium | bbcf0a2ec986527467afb3110a446e20d513186b | 2021-09-22T08:13:46.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"dataset:Twitter",
"dataset:IndianPolitics",
"transformers",
"India",
"politics",
"tweets",
"BJP",
"Congress",
"AAP",
"lm-head",
"license:apache-2.0"
] | text-generation | false | bagdaebhishek | null | bagdaebhishek/IndianPoliticalTweetsLMMedium | 1 | null | transformers | 28,694 | ---
language: en
thumbnail: https://bagdeabhishek.github.io/twitterAnalysis_files/networkfin.jpg
tags:
- India
- politics
- tweets
- BJP
- Congress
- AAP
- pytorch
- gpt2
- lm-head
- text-generation
license: apache-2.0
datasets:
- Twitter
- IndianPolitics
---
# Model name
Indian Political Tweets LM Medium (Based on GPT2-Medium)
## Model description
This is a GPT-2 language model with an LM head, fine-tuned on tweets crawled from handles that belong predominantly to Indian politics. For more information about the crawled data, you can go through this [blog](https://bagdeabhishek.github.io/twitterAnalysis) post.
Unlike the vanilla variant, this model is fine-tuned from GPT2-medium; it has more parameters but models the language slightly better.
## Intended uses & limitations
This finetuned model can be used to generate tweets which are related to Indian politics.
#### How to use
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline

tokenizer = AutoTokenizer.from_pretrained("bagdaebhishek/IndianPoliticalTweetsLMMedium")
model = AutoModelWithLMHead.from_pretrained("bagdaebhishek/IndianPoliticalTweetsLMMedium")
text_generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

init_sentence = "India will always be"
print(text_generator(init_sentence))
```
#### Limitations and bias
1. The tweets used to train the model were not manually labelled, so the generated text may not always be in English. I've cleaned the data to remove non-English tweets but the model may generate "Hinglish" text and hence no assumptions should be made about the language of the generated text.
2. I've taken enough care to remove tweets from twitter handles which are not very influential but since it's not curated by hand there might be some artefacts like "-sent via NamoApp" etc.
3. Like any language model trained on real-world data this model also exhibits some biases which unfortunately are a part of the political discourse on Twitter. Please keep this in mind while using the output from this model.
## Training data
I used the pre-trained gpt2-medium model from the Hugging Face Transformers repository and fine-tuned it on a custom dataset crawled from Twitter. The method used to identify the political handles is described in detail in a [blog](https://bagdeabhishek.github.io/twitterAnalysis) post. I used tweets from both the Pro-BJP and Anti-BJP clusters mentioned in the blog.
## Training procedure
For pre-processing, I removed tweets from handles that are not very influential in their cluster. I did this by calculating eigenvector centrality on the Twitter graph and pruning handles whose centrality fell below a certain threshold, which was set manually after experimenting with different values (see the sketch after this paragraph).
I then separated the tweets from these handles by language and trained the LM on the English tweets from both clusters.
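A hypothetical sketch of that pruning step with `networkx` — the toy edge list, threshold value, and tweet structure below are illustrative placeholders, not the author's actual data or code:
```python
import networkx as nx

# toy follower graph; the real graph was crawled from Twitter
G = nx.Graph([("a", "b"), ("b", "c"), ("b", "d")])

# score each handle by eigenvector centrality
centrality = nx.eigenvector_centrality(G, max_iter=1000)
THRESHOLD = 0.5  # set manually after experimentation; the original value is not published
influential = {handle for handle, score in centrality.items() if score >= THRESHOLD}

# keep only tweets from influential handles
tweets = [{"handle": "b", "text": "..."}, {"handle": "a", "text": "..."}]
kept = [t for t in tweets if t["handle"] in influential]
print(kept)
```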
### Hardware
1. GPU: GTX 1080Ti
2. CPU: Ryzen 3900x
3. RAM: 32GB
This model took roughly 36 hours to fine-tune.
|
baicuya/bert_cn | 30721f0841e7beddde3cb43df24308432012a388 | 2021-06-27T13:37:59.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | baicuya | null | baicuya/bert_cn | 1 | null | transformers | 28,695 | hello
|
balta/DialoGPT-small-TestBot | cd24ee53d7a6775ebf8d4901284b2574643b4388 | 2021-09-16T21:26:26.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | balta | null | balta/DialoGPT-small-TestBot | 1 | null | transformers | 28,696 | ---
tags:
- conversational
---
# Test Bot DialoGTP Model |
bana513/opennmt-translator-en-hu | 1d34f28faae951a3ae275d73e1a9ef0e80a3986e | 2021-12-16T14:42:36.000Z | [
"pytorch",
"opennmt-translator6",
"transformers"
] | null | false | bana513 | null | bana513/opennmt-translator-en-hu | 1 | null | transformers | 28,697 | Entry not found |
baophuc27/tbwt_grammar | 0eca024afe8b91bbe3ed1243bb260970ffcc617b | 2021-12-11T14:51:51.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | baophuc27 | null | baophuc27/tbwt_grammar | 1 | null | transformers | 28,698 | Entry not found |
bayartsogt/wav2vec2-large-xlsr-mongolian | 8211dbbd10fc7444eac153f69266a8128e8a7472 | 2021-07-05T22:56:55.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"mn",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | bayartsogt | null | bayartsogt/wav2vec2-large-xlsr-mongolian | 1 | null | transformers | 28,699 | ---
language: mn
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Mongolian by Bayartsogt
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice mn
type: common_voice
args: mn
metrics:
- name: Test WER
type: wer
value: 45.82
---
# Wav2Vec2-Large-XLSR-53-Mongolian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Mongolian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "mn", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("bayartsogt/wav2vec2-large-xlsr-mongolian")
model = Wav2Vec2ForCTC.from_pretrained("bayartsogt/wav2vec2-large-xlsr-mongolian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Mongolian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "mn", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("bayartsogt/wav2vec2-large-xlsr-mongolian")
model = Wav2Vec2ForCTC.from_pretrained("bayartsogt/wav2vec2-large-xlsr-mongolian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\'h\«\»]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference and decode the predictions
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 45.82%
## Training
❌ The Common Voice `train`, `validation`, and ... datasets were used for training as well as ... and ... # TODO: adapt to state all the datasets that were used for training.
❌ The script used for training can be found [here](...) # TODO: fill in a link to your training script here. If you trained your model in a colab, simply fill in the link here. If you trained the model locally, it would be great if you could upload the training script on github and paste the link here.
|