pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 205 values) | text (stringlengths, 0–18.3M) | metadata (stringlengths, 2–1.07B) | id (stringlengths, 5–122) | last_modified (null) | tags (sequencelengths, 1–1.84k) | sha (null) | created_at (stringlengths, 25) |
---|---|---|---|---|---|---|---|---|
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentence-compression
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2973
- Accuracy: 0.8912
- F1: 0.8367
- Precision: 0.8495
- Recall: 0.8243
## Model description
More information needed
## Intended uses & limitations
More information needed
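The card ships no usage snippet; below is a minimal sketch, assuming the checkpoint works with the standard `transformers` token-classification pipeline (the tag set, e.g. keep/delete per token, is whatever the fine-tuned head defines):
```python
from transformers import pipeline

# Sketch only: inspect tagger.model.config.id2label to see what each tag means.
tagger = pipeline("token-classification", model="AlexMaclean/sentence-compression")
print(tagger("The quick brown fox, which was very hungry, jumped over the lazy dog."))
```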
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.2686 | 1.0 | 10000 | 0.2667 | 0.8894 | 0.8283 | 0.8725 | 0.7884 |
| 0.2205 | 2.0 | 20000 | 0.2704 | 0.8925 | 0.8372 | 0.8579 | 0.8175 |
| 0.1476 | 3.0 | 30000 | 0.2973 | 0.8912 | 0.8367 | 0.8495 | 0.8243 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "model-index": [{"name": "sentence-compression", "results": []}]} | AlexMaclean/sentence-compression | null | [
"transformers",
"pytorch",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-300m-fr-0
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - FR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2388
- Wer: 0.3681
## Model description
More information needed
## Intended uses & limitations
More information needed
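No usage snippet is provided; a minimal inference sketch, assuming 16 kHz mono input as is standard for XLS-R checkpoints (the audio path is a placeholder):
```python
from transformers import pipeline

# Sketch only: "sample_fr.wav" is a placeholder for a 16 kHz French speech clip.
asr = pipeline("automatic-speech-recognition", model="AlexN/xls-r-300m-fr-0")
print(asr("sample_fr.wav")["text"])
```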
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 2.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.3748 | 0.07 | 500 | 3.8784 | 1.0 |
| 2.8068 | 0.14 | 1000 | 2.8289 | 0.9826 |
| 1.6698 | 0.22 | 1500 | 0.8811 | 0.7127 |
| 1.3488 | 0.29 | 2000 | 0.5166 | 0.5369 |
| 1.2239 | 0.36 | 2500 | 0.4105 | 0.4741 |
| 1.1537 | 0.43 | 3000 | 0.3585 | 0.4448 |
| 1.1184 | 0.51 | 3500 | 0.3336 | 0.4292 |
| 1.0968 | 0.58 | 4000 | 0.3195 | 0.4180 |
| 1.0737 | 0.65 | 4500 | 0.3075 | 0.4141 |
| 1.0677 | 0.72 | 5000 | 0.3015 | 0.4089 |
| 1.0462 | 0.8 | 5500 | 0.2971 | 0.4077 |
| 1.0392 | 0.87 | 6000 | 0.2870 | 0.3997 |
| 1.0178 | 0.94 | 6500 | 0.2805 | 0.3963 |
| 0.992 | 1.01 | 7000 | 0.2748 | 0.3935 |
| 1.0197 | 1.09 | 7500 | 0.2691 | 0.3884 |
| 1.0056 | 1.16 | 8000 | 0.2682 | 0.3889 |
| 0.9826 | 1.23 | 8500 | 0.2647 | 0.3868 |
| 0.9815 | 1.3 | 9000 | 0.2603 | 0.3832 |
| 0.9717 | 1.37 | 9500 | 0.2561 | 0.3807 |
| 0.9605 | 1.45 | 10000 | 0.2523 | 0.3783 |
| 0.96 | 1.52 | 10500 | 0.2494 | 0.3788 |
| 0.9442 | 1.59 | 11000 | 0.2478 | 0.3760 |
| 0.9564 | 1.66 | 11500 | 0.2454 | 0.3733 |
| 0.9436 | 1.74 | 12000 | 0.2439 | 0.3747 |
| 0.938 | 1.81 | 12500 | 0.2411 | 0.3716 |
| 0.9353 | 1.88 | 13000 | 0.2397 | 0.3698 |
| 0.9271 | 1.95 | 13500 | 0.2388 | 0.3681 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
| {"language": ["fr"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "xls-r-300m-fr", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice 8.0 fr", "type": "mozilla-foundation/common_voice_8_0", "args": "fr"}, "metrics": [{"type": "wer", "value": 36.81, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "fr"}, "metrics": [{"type": "wer", "value": 35.55, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "fr"}, "metrics": [{"type": "wer", "value": 39.94, "name": "Test WER"}]}]}]} | AlexN/xls-r-300m-fr-0 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"fr",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-300m-fr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - FR dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2700
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
| {"language": ["fr"], "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "xls-r-300m-fr", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice 8.0 fr", "type": "mozilla-foundation/common_voice_8_0", "args": "fr"}, "metrics": [{"type": "wer", "value": 21.58, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "fr"}, "metrics": [{"type": "wer", "value": 36.03, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "fr"}, "metrics": [{"type": "wer", "value": 38.86, "name": "Test WER"}]}]}]} | AlexN/xls-r-300m-fr | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"fr",
"dataset:mozilla-foundation/common_voice_8_0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-300m-pt
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - PT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2290
- Wer: 0.2382
## Model description
More information needed
## Intended uses & limitations
More information needed
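A minimal inference sketch; the file path is a placeholder and 16 kHz input is assumed:
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Sketch only: "sample_pt.wav" is a placeholder Portuguese speech file.
processor = Wav2Vec2Processor.from_pretrained("AlexN/xls-r-300m-pt")
model = Wav2Vec2ForCTC.from_pretrained("AlexN/xls-r-300m-pt")

speech_array, sampling_rate = torchaudio.load("sample_pt.wav")
speech = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech_array).squeeze().numpy()

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```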
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.0952 | 0.64 | 500 | 3.0982 | 1.0 |
| 1.7975 | 1.29 | 1000 | 0.7887 | 0.5651 |
| 1.4138 | 1.93 | 1500 | 0.5238 | 0.4389 |
| 1.344 | 2.57 | 2000 | 0.4775 | 0.4318 |
| 1.2737 | 3.21 | 2500 | 0.4648 | 0.4075 |
| 1.2554 | 3.86 | 3000 | 0.4069 | 0.3678 |
| 1.1996 | 4.5 | 3500 | 0.3914 | 0.3668 |
| 1.1427 | 5.14 | 4000 | 0.3694 | 0.3572 |
| 1.1372 | 5.78 | 4500 | 0.3568 | 0.3501 |
| 1.0831 | 6.43 | 5000 | 0.3331 | 0.3253 |
| 1.1074 | 7.07 | 5500 | 0.3332 | 0.3352 |
| 1.0536 | 7.71 | 6000 | 0.3131 | 0.3152 |
| 1.0248 | 8.35 | 6500 | 0.3024 | 0.3023 |
| 1.0075 | 9.0 | 7000 | 0.2948 | 0.3028 |
| 0.979 | 9.64 | 7500 | 0.2796 | 0.2853 |
| 0.9594 | 10.28 | 8000 | 0.2719 | 0.2789 |
| 0.9172 | 10.93 | 8500 | 0.2620 | 0.2695 |
| 0.9047 | 11.57 | 9000 | 0.2537 | 0.2596 |
| 0.8777 | 12.21 | 9500 | 0.2438 | 0.2525 |
| 0.8629 | 12.85 | 10000 | 0.2409 | 0.2493 |
| 0.8575 | 13.5 | 10500 | 0.2366 | 0.2440 |
| 0.8361 | 14.14 | 11000 | 0.2317 | 0.2385 |
| 0.8126 | 14.78 | 11500 | 0.2290 | 0.2382 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
| {"language": ["pt"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "robust-speech-event", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "xls-r-300m-pt", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice 8.0 pt", "type": "mozilla-foundation/common_voice_8_0", "args": "pt"}, "metrics": [{"type": "wer", "value": 19.361, "name": "Test WER"}, {"type": "cer", "value": 5.533, "name": "Test CER"}, {"type": "wer", "value": 19.36, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "fr"}, "metrics": [{"type": "wer", "value": 47.812, "name": "Validation WER"}, {"type": "cer", "value": 18.805, "name": "Validation CER"}, {"type": "wer", "value": 48.01, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "pt"}, "metrics": [{"type": "wer", "value": 49.21, "name": "Test WER"}]}]}]} | AlexN/xls-r-300m-pt | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"robust-speech-event",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"hf-asr-leaderboard",
"pt",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {"license": "cc"} | AlexaMerens/Owl | null | [
"license:cc",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | AlexaRyck/KEITH | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
token-classification | transformers | {} | Alexander-Learn/bert-finetuned-ner-accelerate | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
token-classification | transformers | {} | Alexander-Learn/bert-finetuned-ner | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Alexander-Learn/bert-finetuned-squad-accelerate | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
question-answering | transformers | {} | Alexander-Learn/bert-finetuned-squad | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Alexandru/creative_copilot | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | AlexeyIgnatov/albert-xlarge-v2-squad-v2 | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | AlexeyYazev/my-awesome-model | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Alfia/anekdotes | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | AliPotter24/a | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Alicanke/Wyau | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Alifarsi/t5-small-finetuned-xsum | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Aliraza47/BERT | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Alireza-rw/testbot | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cola
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7552
- Matthews Correlation: 0.5495
## Model description
More information needed
## Intended uses & limitations
More information needed
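A minimal usage sketch, assuming the default text-classification pipeline and the usual CoLA label convention (0 = unacceptable, 1 = acceptable); check `model.config.id2label` for the actual names:
```python
from transformers import pipeline

# Sketch only: label names come from the fine-tuned head's config.
classifier = pipeline("text-classification", model="Alireza1044/albert-base-v2-cola")
print(classifier("The boy quickly ran."))
```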
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Training results
### Framework versions
- Transformers 4.9.0
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
| {"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model_index": [{"name": "cola", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "GLUE COLA", "type": "glue", "args": "cola"}, "metric": {"name": "Matthews Correlation", "type": "matthews_correlation", "value": 0.5494768667363472}}]}]} | Alireza1044/albert-base-v2-cola | null | [
"transformers",
"pytorch",
"tensorboard",
"albert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mnli
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5383
- Accuracy: 0.8501
## Model description
More information needed
## Intended uses & limitations
More information needed
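A minimal sketch for premise/hypothesis scoring; the label ordering is an assumption, so read it from `model.config.id2label`:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Sketch only: MNLI heads usually map labels to entailment/neutral/contradiction.
tokenizer = AutoTokenizer.from_pretrained("Alireza1044/albert-base-v2-mnli")
model = AutoModelForSequenceClassification.from_pretrained("Alireza1044/albert-base-v2-mnli")

inputs = tokenizer("A soccer game with multiple males playing.",
                   "Some men are playing a sport.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```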
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Training results
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
| {"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model_index": [{"name": "mnli", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "GLUE MNLI", "type": "glue", "args": "mnli"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.8500813669650122}}]}]} | Alireza1044/albert-base-v2-mnli | null | [
"transformers",
"pytorch",
"albert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mrpc
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4171
- Accuracy: 0.8627
- F1: 0.9011
- Combined Score: 0.8819
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Training results
### Framework versions
- Transformers 4.9.0
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
| {"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy", "f1"], "model_index": [{"name": "mrpc", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "GLUE MRPC", "type": "glue", "args": "mrpc"}, "metric": {"name": "F1", "type": "f1", "value": 0.901060070671378}}]}]} | Alireza1044/albert-base-v2-mrpc | null | [
"transformers",
"pytorch",
"tensorboard",
"albert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qnli
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3608
- Accuracy: 0.9138
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Training results
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
| {"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model_index": [{"name": "qnli", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "GLUE QNLI", "type": "glue", "args": "qnli"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9137836353651839}}]}]} | Alireza1044/albert-base-v2-qnli | null | [
"transformers",
"pytorch",
"tensorboard",
"albert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qqp
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3695
- Accuracy: 0.9050
- F1: 0.8723
- Combined Score: 0.8886
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Training results
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
| {"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy", "f1"], "model_index": [{"name": "qqp", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "GLUE QQP", "type": "glue", "args": "qqp"}, "metric": {"name": "F1", "type": "f1", "value": 0.8722569490623753}}]}]} | Alireza1044/albert-base-v2-qqp | null | [
"transformers",
"pytorch",
"albert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rte
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7994
- Accuracy: 0.6859
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Training results
### Framework versions
- Transformers 4.9.0
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
| {"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model_index": [{"name": "rte", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "GLUE RTE", "type": "glue", "args": "rte"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.6859205776173285}}]}]} | Alireza1044/albert-base-v2-rte | null | [
"transformers",
"pytorch",
"tensorboard",
"albert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sst2
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3808
- Accuracy: 0.9232
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Training results
### Framework versions
- Transformers 4.9.0
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
| {"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model_index": [{"name": "sst2", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "GLUE SST2", "type": "glue", "args": "sst2"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9231651376146789}}]}]} | Alireza1044/albert-base-v2-sst2 | null | [
"transformers",
"pytorch",
"tensorboard",
"albert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# stsb
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3978
- Pearson: 0.9090
- Spearmanr: 0.9051
- Combined Score: 0.9071
## Model description
More information needed
## Intended uses & limitations
More information needed
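Since STS-B is a regression task, the head is assumed to emit a single similarity score on the usual 0-5 scale; a minimal sketch:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Sketch only: a single-logit regression head is assumed (num_labels == 1).
tokenizer = AutoTokenizer.from_pretrained("Alireza1044/albert-base-v2-stsb")
model = AutoModelForSequenceClassification.from_pretrained("Alireza1044/albert-base-v2-stsb")

inputs = tokenizer("A man is playing a guitar.", "A person plays an instrument.",
                   return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"similarity: {score:.2f}")
```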
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Training results
### Framework versions
- Transformers 4.9.0
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
| {"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["spearmanr"], "model_index": [{"name": "stsb", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "GLUE STSB", "type": "glue", "args": "stsb"}, "metric": {"name": "Spearmanr", "type": "spearmanr", "value": 0.9050744778895732}}]}]} | Alireza1044/albert-base-v2-stsb | null | [
"transformers",
"pytorch",
"tensorboard",
"albert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wnli
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6898
- Accuracy: 0.5634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Training results
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
| {"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model_index": [{"name": "wnli", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "GLUE WNLI", "type": "glue", "args": "wnli"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.5633802816901409}}]}]} | Alireza1044/albert-base-v2-wnli | null | [
"transformers",
"pytorch",
"albert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers | A simple model trained on the dialogues of characters in the NBC series `The Office`. The model performs binary classification between `Michael Scott`'s and `Dwight Schrute`'s dialogues.
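A minimal usage sketch (assumption: the standard text-classification pipeline applies, with labels mapped per the table below):
```python
from transformers import pipeline

# Sketch only: per the label table below, label 0 = Michael, label 1 = Dwight.
classifier = pipeline("text-classification", model="Alireza1044/bert_classification_lm")
print(classifier("Identity theft is not a joke, Jim!"))
```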
<style type="text/css">
.tg {border-collapse:collapse;border-spacing:0;}
.tg td{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
overflow:hidden;padding:10px 5px;word-break:normal;}
.tg th{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
font-weight:normal;overflow:hidden;padding:10px 5px;word-break:normal;}
.tg .tg-c3ow{border-color:inherit;text-align:center;vertical-align:top}
</style>
<table class="tg">
<thead>
<tr>
<th class="tg-c3ow" colspan="2">Label Definitions</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tg-c3ow">Label 0</td>
<td class="tg-c3ow">Michael</td>
</tr>
<tr>
<td class="tg-c3ow">Label 1</td>
<td class="tg-c3ow">Dwight</td>
</tr>
</tbody>
</table> | {} | Alireza1044/bert_classification_lm | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers | {} | Alireza1044/dwight_bert_lm | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers | {} | Alireza1044/michael_bert_lm | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
question-answering | transformers | {} | AlirezaBaneshi/testPersianQA | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Aliskin/xlm-roberta-base-finetuned-marc | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Aliyyu/Keren | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
# HarryBoy
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Allybaby21/Allysai | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart50-ft-si-en
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.0476
## Model description
More information needed
## Intended uses & limitations
More information needed
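A minimal translation sketch, assuming the fine-tune keeps mBART-50's language codes (`si_LK` for Sinhala, `en_XX` for English):
```python
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

# Sketch only: language codes are assumed to survive fine-tuning unchanged.
tokenizer = MBart50TokenizerFast.from_pretrained("Aloka/mbart50-ft-si-en", src_lang="si_LK")
model = MBartForConditionalGeneration.from_pretrained("Aloka/mbart50-ft-si-en")

inputs = tokenizer("සුබ උදෑසනක්", return_tensors="pt")  # Sinhala sample input
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```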
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.98 | 30 | 5.6367 |
| No log | 1.98 | 60 | 4.1221 |
| No log | 2.98 | 90 | 3.1880 |
| No log | 3.98 | 120 | 3.1175 |
| No log | 4.98 | 150 | 3.3575 |
| No log | 5.98 | 180 | 3.7855 |
| No log | 6.98 | 210 | 4.3530 |
| No log | 7.98 | 240 | 4.7216 |
| No log | 8.98 | 270 | 4.9202 |
| No log | 9.98 | 300 | 5.0476 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.6.0
- Datasets 1.11.0
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "model_index": [{"name": "mbart50-ft-si-en", "results": [{"task": {"name": "Sequence-to-sequence Language Modeling", "type": "text2text-generation"}}]}]} | Aloka/mbart50-ft-si-en | null | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7272
- Matthews Correlation: 0.5343
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5219 | 1.0 | 535 | 0.5340 | 0.4215 |
| 0.3467 | 2.0 | 1070 | 0.5131 | 0.5181 |
| 0.2331 | 3.0 | 1605 | 0.6406 | 0.5040 |
| 0.1695 | 4.0 | 2140 | 0.7272 | 0.5343 |
| 0.1212 | 5.0 | 2675 | 0.8399 | 0.5230 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5343023846000738, "name": "Matthews Correlation"}]}]}]} | Alstractor/distilbert-base-uncased-finetuned-cola | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Altidore/DuggFace | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | transformers |
# Wav2vec2-base for Danish
This wav2vec2-base model has been pretrained on ~1,300 hours of Danish speech data. The pretraining data consists of podcasts and audiobooks and is unfortunately not publicly available. However, we were allowed to distribute the pretrained model.
This model was pretrained on 16kHz sampled speech audio. When using the model, make sure to use speech audio sampled at 16kHz.
The pre-training was done using the fairseq library in January 2021.
It needs to be fine-tuned to perform speech recognition.
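Until it is fine-tuned, the checkpoint can still serve as a speech feature extractor; a minimal sketch, assuming a 16 kHz mono placeholder file and that the repo ships a feature-extractor config (otherwise borrow `facebook/wav2vec2-base`'s):
```python
import torch
import torchaudio
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

# Sketch only: yields contextual representations, not transcriptions.
extractor = Wav2Vec2FeatureExtractor.from_pretrained("Alvenir/wav2vec2-base-da")
model = Wav2Vec2Model.from_pretrained("Alvenir/wav2vec2-base-da")

speech, sr = torchaudio.load("sample_da.wav")  # placeholder path
speech = torchaudio.transforms.Resample(sr, 16_000)(speech).squeeze()

inputs = extractor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state
print(hidden_states.shape)  # (batch, frames, hidden_size)
```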
# Finetuning
In order to finetune the model to speech recognition, you can draw inspiration from this [notebook tutorial](https://colab.research.google.com/drive/1FjTsqbYKphl9kL-eILgUc-bl4zVThL8F) or [this blog post tutorial](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2). | {"language": "da", "license": "apache-2.0", "tags": ["speech"]} | Alvenir/wav2vec2-base-da | null | [
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"speech",
"da",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Amalq/distilroberta-base-finetuned-MentalHealth | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Amalq/distilroberta-base-finetuned-anxiety-depression | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-schizophreniaReddit2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7785
## Model description
More information needed
## Intended uses & limitations
More information needed
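A minimal fill-mask sketch (RoBERTa's mask token is `<mask>`; the prompt is illustrative only):
```python
from transformers import pipeline

# Sketch only: prints the top tokens the domain-adapted LM predicts for <mask>.
unmasker = pipeline("fill-mask", model="Amalq/roberta-base-finetuned-schizophreniaReddit2")
for pred in unmasker("Lately I have been feeling <mask> most of the time."):
    print(pred["token_str"], round(pred["score"], 3))
```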
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 490 | 1.8093 |
| 1.9343 | 2.0 | 980 | 1.7996 |
| 1.8856 | 3.0 | 1470 | 1.7966 |
| 1.8552 | 4.0 | 1960 | 1.7844 |
| 1.8267 | 5.0 | 2450 | 1.7839 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "roberta-base-finetuned-schizophreniaReddit2", "results": []}]} | Amalq/roberta-base-finetuned-schizophreniaReddit2 | null | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers | {} | AmanPriyanshu/DistilBert-Sentiment-Analysis | null | [
"transformers",
"tf",
"distilbert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
question-answering | transformers |
# Question Answering NLU
Question Answering NLU (QANLU) is an approach that maps the NLU task into question answering,
leveraging pre-trained question-answering models to perform well in few-shot settings. Instead of
training an intent classifier or a slot tagger, for example, we can ask the model intent- and
slot-related questions in natural language:
```
Context : Yes. No. I'm looking for a cheap flight to Boston.
Question: Is the user looking to book a flight?
Answer : Yes
Question: Is the user asking about departure time?
Answer : No
Question: What price is the user looking for?
Answer : cheap
Question: Where is the user flying from?
Answer : (empty)
```
Note the "Yes. No. " prepended in the context. Those are to allow the model to answer intent-related questions (e.g. "Is the user looking for a restaurant?").
Thus, by asking questions for each intent and slot in natural language, we can effectively construct an NLU hypothesis. For more details, please read the paper: [Language model is all you need: Natural language understanding as question answering](https://assets.amazon.science/33/ea/800419b24a09876601d8ab99bfb9/language-model-is-all-you-need-natural-language-understanding-as-question-answering.pdf).
## Model training
Instructions for how to train and evaluate a QANLU model, as well as the necessary code for ATIS are in the [Amazon Science repository](https://github.com/amazon-research/question-answering-nlu).
## Intended use and limitations
This model has been fine-tuned on ATIS (English) and is intended to demonstrate the power of this approach. For other domains or tasks, it should be further fine-tuned
on relevant data.
## Use in transformers:
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline
tokenizer = AutoTokenizer.from_pretrained("AmazonScience/qanlu", use_auth_token=True)
model = AutoModelForQuestionAnswering.from_pretrained("AmazonScience/qanlu", use_auth_token=True)
qa_pipeline = pipeline('question-answering', model=model, tokenizer=tokenizer)
qa_input = {
'context': 'Yes. No. I want a cheap flight to Boston.',
'question': 'What is the destination?'
}
answer = qa_pipeline(qa_input)
```
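Continuing the snippet above, intent-style questions work through the same pipeline thanks to the `Yes. No. ` prefix:
```python
intent_input = {
    'context': 'Yes. No. I want a cheap flight to Boston.',
    'question': 'Is the user looking to book a flight?'
}
print(qa_pipeline(intent_input))  # expected to extract the span 'Yes'
```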
## Citation
If you use this work, please cite:
```
@inproceedings{namazifar2021language,
title={Language model is all you need: Natural language understanding as question answering},
author={Namazifar, Mahdi and Papangelis, Alexandros and Tur, Gokhan and Hakkani-T{\"u}r, Dilek},
booktitle={ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7803--7807},
year={2021},
organization={IEEE}
}
```
## License
This library is licensed under the CC BY NC License. | {"language": "en", "license": "cc-by-4.0", "datasets": ["atis"], "widget": [{"context": "Yes. No. I'm looking for a cheap flight to Boston."}]} | AmazonScience/qanlu | null | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"en",
"dataset:atis",
"license:cc-by-4.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Amba/wav2vec2-large-xls-r-300m-tr-colab | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Amba/wav2vec2-large-xls-r-300m-turkish-colab | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | aisoftware/Loquela | null | [
"onnx",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Amir99/toxic | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | AmirBialer/amirbialer-Classifier | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | AmirHussein/test | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | AmirServi/MyModel | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
fill-mask | transformers | {} | Amirosein/distilbert_v1 | null | [
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
fill-mask | transformers | {} | Amirosein/roberta | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Amit29/t5-small-finetuned-xsum | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | AmitT/test | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Amitabh/doc-classification | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Amro-Kamal/gpt | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
image-classification | transformers |
# indian-foods
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
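A minimal inference sketch (the image path is a placeholder); see the example classes below:
```python
from transformers import pipeline

# Sketch only: the ViT classifier scores the food classes listed under Example Images.
classifier = pipeline("image-classification", model="Amrrs/indian-foods")
print(classifier("my_snack.jpg"))  # placeholder image path
```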
## Example Images
#### idli

#### kachori

#### pani puri

#### samosa

#### vada pav
 | {"tags": ["image-classification", "pytorch", "huggingpics"], "metrics": ["accuracy"]} | Amrrs/indian-foods | null | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
image-classification | transformers |
# south-indian-foods
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### dosai

#### idiyappam

#### idli

#### puttu

#### vadai
 | {"tags": ["image-classification", "pytorch", "huggingpics"], "metrics": ["accuracy"]} | Amrrs/south-indian-foods | null | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers |
# Wav2Vec2-Large-XLSR-53-Tamil
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Tamil using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ta", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("Amrrs/wav2vec2-large-xlsr-53-tamil")
model = Wav2Vec2ForCTC.from_pretrained("Amrrs/wav2vec2-large-xlsr-53-tamil")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Tamil test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ta", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("Amrrs/wav2vec2-large-xlsr-53-tamil")
model = Wav2Vec2ForCTC.from_pretrained("Amrrs/wav2vec2-large-xlsr-53-tamil")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 82.94 %
## Training
The Common Voice `train`, `validation` datasets were used for training.
The script used for training can be found [here](https://colab.research.google.com/drive/1-Klkgr4f-C9SanHfVC5RhP0ELUH6TYlN?usp=sharing)
| {"language": "ta", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "model-index": [{"name": "XLSR Wav2Vec2 Tamil by Amrrs", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice ta", "type": "common_voice", "args": "ta"}, "metrics": [{"type": "wer", "value": 82.94, "name": "Test WER"}]}]}]} | Amrrs/wav2vec2-large-xlsr-53-tamil | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"ta",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Ana1315/A | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Ana1315/ana | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | AnaRhisT/bert_sequence_cs_validation | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Analufm/Ana | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers |
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 479512837
- CO2 Emissions (in grams): 123.88023112815048
## Validation Metrics
- Loss: 0.6220805048942566
- Accuracy: 0.7961119332705503
- Macro F1: 0.7616345204219084
- Micro F1: 0.7961119332705503
- Weighted F1: 0.795387503907883
- Macro Precision: 0.782839455262034
- Micro Precision: 0.7961119332705503
- Weighted Precision: 0.7992606754484262
- Macro Recall: 0.7451485972167191
- Micro Recall: 0.7961119332705503
- Weighted Recall: 0.7961119332705503
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Anamika/autonlp-Feedback1-479512837
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Anamika/autonlp-Feedback1-479512837", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Anamika/autonlp-Feedback1-479512837", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | {"language": "unk", "tags": "autonlp", "datasets": ["Anamika/autonlp-data-Feedback1"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 123.88023112815048} | Anamika/autonlp-Feedback1-479512837 | null | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autonlp",
"unk",
"dataset:Anamika/autonlp-data-Feedback1",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 473312409
- CO2 Emissions (in grams): 25.128735714898614
## Validation Metrics
- Loss: 0.6010786890983582
- Accuracy: 0.7990650945370823
- Macro F1: 0.7429662929144928
- Micro F1: 0.7990650945370823
- Weighted F1: 0.7977660363770382
- Macro Precision: 0.7744390888231261
- Micro Precision: 0.7990650945370823
- Weighted Precision: 0.800444194278352
- Macro Recall: 0.7198278524814119
- Micro Recall: 0.7990650945370823
- Weighted Recall: 0.7990650945370823
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Anamika/autonlp-fa-473312409
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Anamika/autonlp-fa-473312409", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Anamika/autonlp-fa-473312409", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | {"language": "en", "tags": "autonlp", "datasets": ["Anamika/autonlp-data-fa"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 25.128735714898614} | Anamika/autonlp-fa-473312409 | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autonlp",
"en",
"dataset:Anamika/autonlp-data-fa",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Anders/itu-ams-summa | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Andi/bert-tt-ner-1 | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | Andranik/TestPytorchClassification | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra_large_discriminator_squad2_512
This model is a fine-tuned version of [ahotrod/electra_large_discriminator_squad2_512](https://huggingface.co/ahotrod/electra_large_discriminator_squad2_512) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
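A minimal usage sketch; because the base checkpoint targets SQuAD 2.0, `handle_impossible_answer=True` is assumed to let the model abstain (texts are placeholders):
```python
from transformers import pipeline

# Sketch only: an empty answer string signals "no answer found in the context".
qa = pipeline("question-answering", model="Andranik/TestQA2")
print(qa(question="Who wrote the report?",
         context="The annual report was written by the finance team.",
         handle_impossible_answer=True))
```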
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"tags": ["generated_from_trainer"], "model-index": [{"name": "electra_large_discriminator_squad2_512", "results": []}]} | Andranik/TestQA2 | null | [
"transformers",
"pytorch",
"electra",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
question-answering | transformers | {} | Andranik/TestQaV1 | null | [
"transformers",
"pytorch",
"rust",
"roberta",
"question-answering",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | AndreLiu1225/t5-news-summarizer | null | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | This is a pretrained model that was loaded from t5-base. It has been adapted and changed by changing the max_length and summary_length. | {} | AndreLiu1225/t5-news | null | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Andres2015/HiggingFaceTest | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
question-answering | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# model-QA-5-epoch-RU
This model is a fine-tuned version of [AndrewChar/diplom-prod-epoch-4-datast-sber-QA](https://huggingface.co/AndrewChar/diplom-prod-epoch-4-datast-sber-QA) on the sberquad dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.1991
- Validation Loss: 0.0
- Epoch: 5
## Model description
A model that answers a question based on a given context.
This is a graduation thesis project.
## Intended uses & limitations
The context must contain no more than 512 tokens.
## Training and evaluation data
DataSet SberSQuAD
{'exact_match': 54.586, 'f1': 73.644}
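A minimal usage sketch (not part of the original card); the question and context below are placeholders, and `framework="tf"` is passed because the repository ships TensorFlow weights:
```python
from transformers import pipeline

qa = pipeline("question-answering",
              model="AndrewChar/model-QA-5-epoch-RU",
              framework="tf")
print(qa(question="Где живёт Александр?",
         context="Александр живёт в Москве."))
```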
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-06, 'decay_steps': 2986, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.1991 | | 5 |
### Framework versions
- Transformers 4.15.0
- TensorFlow 2.7.0
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"language": "ru", "tags": ["generated_from_keras_callback"], "datasets": ["sberquad"], "model-index": [{"name": "model-QA-5-epoch-RU", "results": []}]} | AndrewChar/model-QA-5-epoch-RU | null | [
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"ru",
"dataset:sberquad",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - DE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1355
- Wer: 0.1532
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 2.5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.0826 | 0.07 | 1000 | 0.4637 | 0.4654 |
| 1.118 | 0.15 | 2000 | 0.2595 | 0.2687 |
| 1.1268 | 0.22 | 3000 | 0.2635 | 0.2661 |
| 1.0919 | 0.29 | 4000 | 0.2417 | 0.2566 |
| 1.1013 | 0.37 | 5000 | 0.2414 | 0.2567 |
| 1.0898 | 0.44 | 6000 | 0.2546 | 0.2731 |
| 1.0808 | 0.51 | 7000 | 0.2399 | 0.2535 |
| 1.0719 | 0.59 | 8000 | 0.2353 | 0.2528 |
| 1.0446 | 0.66 | 9000 | 0.2427 | 0.2545 |
| 1.0347 | 0.73 | 10000 | 0.2266 | 0.2402 |
| 1.0457 | 0.81 | 11000 | 0.2290 | 0.2448 |
| 1.0124 | 0.88 | 12000 | 0.2295 | 0.2448 |
| 1.025 | 0.95 | 13000 | 0.2138 | 0.2345 |
| 1.0107 | 1.03 | 14000 | 0.2108 | 0.2294 |
| 0.9758 | 1.1 | 15000 | 0.2019 | 0.2204 |
| 0.9547 | 1.17 | 16000 | 0.2000 | 0.2178 |
| 0.986 | 1.25 | 17000 | 0.2018 | 0.2200 |
| 0.9588 | 1.32 | 18000 | 0.1992 | 0.2138 |
| 0.9413 | 1.39 | 19000 | 0.1898 | 0.2049 |
| 0.9339 | 1.47 | 20000 | 0.1874 | 0.2056 |
| 0.9268 | 1.54 | 21000 | 0.1797 | 0.1976 |
| 0.9194 | 1.61 | 22000 | 0.1743 | 0.1905 |
| 0.8987 | 1.69 | 23000 | 0.1738 | 0.1932 |
| 0.8884 | 1.76 | 24000 | 0.1703 | 0.1873 |
| 0.8939 | 1.83 | 25000 | 0.1633 | 0.1831 |
| 0.8629 | 1.91 | 26000 | 0.1549 | 0.1750 |
| 0.8607 | 1.98 | 27000 | 0.1550 | 0.1738 |
| 0.8316 | 2.05 | 28000 | 0.1512 | 0.1709 |
| 0.8321 | 2.13 | 29000 | 0.1481 | 0.1657 |
| 0.825 | 2.2 | 30000 | 0.1446 | 0.1627 |
| 0.8115 | 2.27 | 31000 | 0.1396 | 0.1583 |
| 0.7959 | 2.35 | 32000 | 0.1389 | 0.1569 |
| 0.7835 | 2.42 | 33000 | 0.1362 | 0.1545 |
| 0.7959 | 2.49 | 34000 | 0.1355 | 0.1531 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
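#### Example Usage
A hedged inference sketch (not part of the original card); `sample_de.wav` is a placeholder path for a 16 kHz mono recording:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="AndrewMcDowell/wav2vec2-xls-r-1B-german")
print(asr("sample_de.wav")["text"])
```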
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-1B-german --dataset mozilla-foundation/common_voice_8_0 --config de --split test --log_outputs
```
2. To evaluate on `speech-recognition-community-v2/dev_data` with split `validation`
```bash
python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-1B-german --dataset speech-recognition-community-v2/dev_data --config de --split validation --chunk_length_s 5.0 --stride_length_s 1.0
``` | {"language": ["de"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "de", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "XLS-R-300M - German", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "de"}, "metrics": [{"type": "wer", "value": 15.25, "name": "Test WER"}, {"type": "cer", "value": 3.78, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "de"}, "metrics": [{"type": "wer", "value": 35.29, "name": "Test WER"}, {"type": "cer", "value": 13.83, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "de"}, "metrics": [{"type": "wer", "value": 36.2, "name": "Test WER"}]}]}]} | AndrewMcDowell/wav2vec2-xls-r-1B-german | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"de",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - AR dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1373
- Wer: 0.8607
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 2.2416 | 0.84 | 500 | 1.2867 | 0.8875 |
| 2.3089 | 1.67 | 1000 | 1.8336 | 0.9548 |
| 2.3614 | 2.51 | 1500 | 1.5937 | 0.9469 |
| 2.5234 | 3.35 | 2000 | 1.9765 | 0.9867 |
| 2.5373 | 4.19 | 2500 | 1.9062 | 0.9916 |
| 2.5703 | 5.03 | 3000 | 1.9772 | 0.9915 |
| 2.4656 | 5.86 | 3500 | 1.8083 | 0.9829 |
| 2.4339 | 6.7 | 4000 | 1.7548 | 0.9752 |
| 2.344 | 7.54 | 4500 | 1.6146 | 0.9638 |
| 2.2677 | 8.38 | 5000 | 1.5105 | 0.9499 |
| 2.2074 | 9.21 | 5500 | 1.4191 | 0.9357 |
| 2.3768 | 10.05 | 6000 | 1.6663 | 0.9665 |
| 2.3804 | 10.89 | 6500 | 1.6571 | 0.9720 |
| 2.3237 | 11.72 | 7000 | 1.6049 | 0.9637 |
| 2.317 | 12.56 | 7500 | 1.5875 | 0.9655 |
| 2.2988 | 13.4 | 8000 | 1.5357 | 0.9603 |
| 2.2906 | 14.24 | 8500 | 1.5637 | 0.9592 |
| 2.2848 | 15.08 | 9000 | 1.5326 | 0.9537 |
| 2.2381 | 15.91 | 9500 | 1.5631 | 0.9508 |
| 2.2072 | 16.75 | 10000 | 1.4565 | 0.9395 |
| 2.197 | 17.59 | 10500 | 1.4304 | 0.9406 |
| 2.198 | 18.43 | 11000 | 1.4230 | 0.9382 |
| 2.1668 | 19.26 | 11500 | 1.3998 | 0.9315 |
| 2.1498 | 20.1 | 12000 | 1.3920 | 0.9258 |
| 2.1244 | 20.94 | 12500 | 1.3584 | 0.9153 |
| 2.0953 | 21.78 | 13000 | 1.3274 | 0.9054 |
| 2.0762 | 22.61 | 13500 | 1.2933 | 0.9073 |
| 2.0587 | 23.45 | 14000 | 1.2516 | 0.8944 |
| 2.0363 | 24.29 | 14500 | 1.2214 | 0.8902 |
| 2.0302 | 25.13 | 15000 | 1.2087 | 0.8871 |
| 2.0071 | 25.96 | 15500 | 1.1953 | 0.8786 |
| 1.9882 | 26.8 | 16000 | 1.1738 | 0.8712 |
| 1.9772 | 27.64 | 16500 | 1.1647 | 0.8672 |
| 1.9585 | 28.48 | 17000 | 1.1459 | 0.8635 |
| 1.944 | 29.31 | 17500 | 1.1414 | 0.8616 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
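For reference, the WER figures reported above can be computed for any reference/hypothesis pair with the [jiwer](https://github.com/jitsi/jiwer) library; this is a hedged sketch with placeholder sentences, not part of the original card:
```python
import jiwer

reference = "مرحبا بكم في بيتنا"   # placeholder reference transcript
hypothesis = "مرحبا بك في بيتنا"  # placeholder model output
print(jiwer.wer(reference, hypothesis))  # 0.25: one of four words differs
```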
| {"language": ["ar"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "", "results": []}]} | AndrewMcDowell/wav2vec2-xls-r-1b-arabic | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"ar",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - JA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5500
- Wer: 1.0132
- Cer: 0.1609
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 1.7019 | 12.65 | 1000 | 1.0510 | 0.9832 | 0.2589 |
| 1.6385 | 25.31 | 2000 | 0.6670 | 0.9915 | 0.1851 |
| 1.4344 | 37.97 | 3000 | 0.6183 | 1.0213 | 0.1797 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-1b-japanese-hiragana-katakana --dataset mozilla-foundation/common_voice_8_0 --config ja --split test --log_outputs
```
2. To evaluate on `speech-recognition-community-v2/dev_data` with split `validation`
```bash
python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-1b-japanese-hiragana-katakana --dataset speech-recognition-community-v2/dev_data --config de --split validation --chunk_length_s 5.0 --stride_length_s 1.0
``` | {"language": ["ja"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "ja", "hf-asr-leaderboard"], "datasets": ["common_voice"], "model-index": [{"name": "", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "ja"}, "metrics": [{"type": "wer", "value": 95.33, "name": "Test WER"}, {"type": "cer", "value": 22.27, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "de"}, "metrics": [{"type": "wer", "value": 100.0, "name": "Test WER"}, {"type": "cer", "value": 30.33, "name": "Test CER"}, {"type": "cer", "value": 29.63, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "ja"}, "metrics": [{"type": "cer", "value": 32.69, "name": "Test CER"}]}]}]} | AndrewMcDowell/wav2vec2-xls-r-1b-japanese-hiragana-katakana | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"ja",
"hf-asr-leaderboard",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4502
- Wer: 0.4783
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.7972 | 0.43 | 500 | 5.1401 | 1.0 |
| 3.3241 | 0.86 | 1000 | 3.3220 | 1.0 |
| 3.1432 | 1.29 | 1500 | 3.0806 | 0.9999 |
| 2.9297 | 1.72 | 2000 | 2.5678 | 1.0057 |
| 2.2593 | 2.14 | 2500 | 1.1068 | 0.8218 |
| 2.0504 | 2.57 | 3000 | 0.7878 | 0.7114 |
| 1.937 | 3.0 | 3500 | 0.6955 | 0.6450 |
| 1.8491 | 3.43 | 4000 | 0.6452 | 0.6304 |
| 1.803 | 3.86 | 4500 | 0.5961 | 0.6042 |
| 1.7545 | 4.29 | 5000 | 0.5550 | 0.5748 |
| 1.7045 | 4.72 | 5500 | 0.5374 | 0.5743 |
| 1.6733 | 5.15 | 6000 | 0.5337 | 0.5404 |
| 1.6761 | 5.57 | 6500 | 0.5054 | 0.5266 |
| 1.655 | 6.0 | 7000 | 0.4926 | 0.5243 |
| 1.6252 | 6.43 | 7500 | 0.4946 | 0.5183 |
| 1.6209 | 6.86 | 8000 | 0.4915 | 0.5194 |
| 1.5772 | 7.29 | 8500 | 0.4725 | 0.5104 |
| 1.5602 | 7.72 | 9000 | 0.4726 | 0.5097 |
| 1.5783 | 8.15 | 9500 | 0.4667 | 0.4956 |
| 1.5442 | 8.58 | 10000 | 0.4685 | 0.4937 |
| 1.5597 | 9.01 | 10500 | 0.4708 | 0.4957 |
| 1.5406 | 9.43 | 11000 | 0.4539 | 0.4810 |
| 1.5274 | 9.86 | 11500 | 0.4502 | 0.4783 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
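A hedged inference sketch (not part of the original card), decoding greedily from the CTC logits; it assumes a mono recording resampled to 16 kHz, and `sample_ar.wav` is a placeholder path:
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("AndrewMcDowell/wav2vec2-xls-r-300m-arabic")
model = Wav2Vec2ForCTC.from_pretrained("AndrewMcDowell/wav2vec2-xls-r-300m-arabic")

waveform, rate = torchaudio.load("sample_ar.wav")  # placeholder path
waveform = torchaudio.functional.resample(waveform, rate, 16_000)
inputs = processor(waveform.squeeze(0).numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```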
| {"language": ["ar"], "license": "apache-2.0", "tags": ["ar", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-300M - Arabic", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "ar"}, "metrics": [{"type": "wer", "value": 47.54, "name": "Test WER"}, {"type": "cer", "value": 17.64, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "ar"}, "metrics": [{"type": "wer", "value": 93.72, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "ar"}, "metrics": [{"type": "wer", "value": 92.49, "name": "Test WER"}]}]}]} | AndrewMcDowell/wav2vec2-xls-r-300m-arabic | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ar",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment.
eval results:
WER: 0.20161578657865786
CER: 0.05062357805269733
-->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - DE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1768
- Wer: 0.2016
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 3.4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 5.7531 | 0.04 | 500 | 5.4564 | 1.0 |
| 2.9882 | 0.08 | 1000 | 3.0041 | 1.0 |
| 2.1953 | 0.13 | 1500 | 1.1723 | 0.7121 |
| 1.2406 | 0.17 | 2000 | 0.3656 | 0.3623 |
| 1.1294 | 0.21 | 2500 | 0.2843 | 0.2926 |
| 1.0731 | 0.25 | 3000 | 0.2554 | 0.2664 |
| 1.051 | 0.3 | 3500 | 0.2387 | 0.2535 |
| 1.0479 | 0.34 | 4000 | 0.2345 | 0.2512 |
| 1.0026 | 0.38 | 4500 | 0.2270 | 0.2452 |
| 0.9921 | 0.42 | 5000 | 0.2212 | 0.2353 |
| 0.9839 | 0.47 | 5500 | 0.2141 | 0.2330 |
| 0.9907 | 0.51 | 6000 | 0.2122 | 0.2334 |
| 0.9788 | 0.55 | 6500 | 0.2114 | 0.2270 |
| 0.9687 | 0.59 | 7000 | 0.2066 | 0.2323 |
| 0.9777 | 0.64 | 7500 | 0.2033 | 0.2237 |
| 0.9476 | 0.68 | 8000 | 0.2020 | 0.2194 |
| 0.9625 | 0.72 | 8500 | 0.1977 | 0.2191 |
| 0.9497 | 0.76 | 9000 | 0.1976 | 0.2175 |
| 0.9781 | 0.81 | 9500 | 0.1956 | 0.2159 |
| 0.9552 | 0.85 | 10000 | 0.1958 | 0.2191 |
| 0.9345 | 0.89 | 10500 | 0.1964 | 0.2158 |
| 0.9528 | 0.93 | 11000 | 0.1926 | 0.2154 |
| 0.9502 | 0.98 | 11500 | 0.1953 | 0.2149 |
| 0.9358 | 1.02 | 12000 | 0.1927 | 0.2167 |
| 0.941 | 1.06 | 12500 | 0.1901 | 0.2115 |
| 0.9287 | 1.1 | 13000 | 0.1936 | 0.2090 |
| 0.9491 | 1.15 | 13500 | 0.1900 | 0.2104 |
| 0.9478 | 1.19 | 14000 | 0.1931 | 0.2120 |
| 0.946 | 1.23 | 14500 | 0.1914 | 0.2134 |
| 0.9499 | 1.27 | 15000 | 0.1931 | 0.2173 |
| 0.9346 | 1.32 | 15500 | 0.1913 | 0.2105 |
| 0.9509 | 1.36 | 16000 | 0.1902 | 0.2137 |
| 0.9294 | 1.4 | 16500 | 0.1895 | 0.2086 |
| 0.9418 | 1.44 | 17000 | 0.1913 | 0.2183 |
| 0.9302 | 1.49 | 17500 | 0.1884 | 0.2114 |
| 0.9418 | 1.53 | 18000 | 0.1894 | 0.2108 |
| 0.9363 | 1.57 | 18500 | 0.1886 | 0.2132 |
| 0.9338 | 1.61 | 19000 | 0.1856 | 0.2078 |
| 0.9185 | 1.66 | 19500 | 0.1852 | 0.2056 |
| 0.9216 | 1.7 | 20000 | 0.1874 | 0.2095 |
| 0.9176 | 1.74 | 20500 | 0.1873 | 0.2078 |
| 0.9288 | 1.78 | 21000 | 0.1865 | 0.2097 |
| 0.9278 | 1.83 | 21500 | 0.1869 | 0.2100 |
| 0.9295 | 1.87 | 22000 | 0.1878 | 0.2095 |
| 0.9221 | 1.91 | 22500 | 0.1852 | 0.2121 |
| 0.924 | 1.95 | 23000 | 0.1855 | 0.2042 |
| 0.9104 | 2.0 | 23500 | 0.1858 | 0.2105 |
| 0.9284 | 2.04 | 24000 | 0.1850 | 0.2080 |
| 0.9162 | 2.08 | 24500 | 0.1839 | 0.2045 |
| 0.9111 | 2.12 | 25000 | 0.1838 | 0.2080 |
| 0.91 | 2.17 | 25500 | 0.1889 | 0.2106 |
| 0.9152 | 2.21 | 26000 | 0.1856 | 0.2026 |
| 0.9209 | 2.25 | 26500 | 0.1891 | 0.2133 |
| 0.9094 | 2.29 | 27000 | 0.1857 | 0.2089 |
| 0.9065 | 2.34 | 27500 | 0.1840 | 0.2052 |
| 0.9156 | 2.38 | 28000 | 0.1833 | 0.2062 |
| 0.8986 | 2.42 | 28500 | 0.1789 | 0.2001 |
| 0.9045 | 2.46 | 29000 | 0.1769 | 0.2022 |
| 0.9039 | 2.51 | 29500 | 0.1819 | 0.2073 |
| 0.9145 | 2.55 | 30000 | 0.1828 | 0.2063 |
| 0.9081 | 2.59 | 30500 | 0.1811 | 0.2049 |
| 0.9252 | 2.63 | 31000 | 0.1833 | 0.2086 |
| 0.8957 | 2.68 | 31500 | 0.1795 | 0.2083 |
| 0.891 | 2.72 | 32000 | 0.1809 | 0.2058 |
| 0.9023 | 2.76 | 32500 | 0.1812 | 0.2061 |
| 0.8918 | 2.8 | 33000 | 0.1775 | 0.1997 |
| 0.8852 | 2.85 | 33500 | 0.1790 | 0.1997 |
| 0.8928 | 2.89 | 34000 | 0.1767 | 0.2013 |
| 0.9079 | 2.93 | 34500 | 0.1735 | 0.1986 |
| 0.9032 | 2.97 | 35000 | 0.1793 | 0.2024 |
| 0.9018 | 3.02 | 35500 | 0.1778 | 0.2027 |
| 0.8846 | 3.06 | 36000 | 0.1776 | 0.2046 |
| 0.8848 | 3.1 | 36500 | 0.1812 | 0.2064 |
| 0.9062 | 3.14 | 37000 | 0.1800 | 0.2018 |
| 0.9011 | 3.19 | 37500 | 0.1783 | 0.2049 |
| 0.8996 | 3.23 | 38000 | 0.1810 | 0.2036 |
| 0.893 | 3.27 | 38500 | 0.1805 | 0.2056 |
| 0.897 | 3.31 | 39000 | 0.1773 | 0.2035 |
| 0.8992 | 3.36 | 39500 | 0.1804 | 0.2054 |
| 0.8987 | 3.4 | 40000 | 0.1768 | 0.2016 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test`
```bash
python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-300m-german-de --dataset mozilla-foundation/common_voice_7_0 --config de --split test --log_outputs
```
2. To evaluate on `speech-recognition-community-v2/dev_data` with split `validation`
```bash
python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-300m-german-de --dataset speech-recognition-community-v2/dev_data --config de --split validation --chunk_length_s 5.0 --stride_length_s 1.0
``` | {"language": ["de"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "de", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-300M - German", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "de"}, "metrics": [{"type": "wer", "value": 20.16, "name": "Test WER"}, {"type": "cer", "value": 5.06, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "de"}, "metrics": [{"type": "wer", "value": 39.79, "name": "Test WER"}, {"type": "cer", "value": 15.02, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "de"}, "metrics": [{"type": "wer", "value": 47.95, "name": "Test WER"}]}]}]} | AndrewMcDowell/wav2vec2-xls-r-300m-german-de | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"de",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - JA dataset.
Kanji are converted into Hiragana using the [pykakasi](https://pykakasi.readthedocs.io/en/latest/index.html) library during training and evaluation. The model can output both Hiragana and Katakana characters. Since Japanese text has no spacing, WER is not a suitable metric for evaluating performance, and CER is more suitable.
On mozilla-foundation/common_voice_8_0 it achieved:
- cer: 23.64%
On speech-recognition-community-v2/dev_data it achieved:
- cer: 30.99%
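The conversion described above can be reproduced with pykakasi; this is a minimal sketch (not part of the original card) with a placeholder sentence:
```python
import pykakasi

kks = pykakasi.kakasi()
text = "私は学生です"  # placeholder sentence
hiragana = "".join(item["hira"] for item in kks.convert(text))
print(hiragana)  # わたしはがくせいです
```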
It achieves the following results on the evaluation set:
- Loss: 0.5212
- Wer: 1.3068
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 48
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.0974 | 4.72 | 1000 | 4.0178 | 1.9535 |
| 2.1276 | 9.43 | 2000 | 0.9301 | 1.2128 |
| 1.7622 | 14.15 | 3000 | 0.7103 | 1.5527 |
| 1.6397 | 18.87 | 4000 | 0.6729 | 1.4269 |
| 1.5468 | 23.58 | 5000 | 0.6087 | 1.2497 |
| 1.4885 | 28.3 | 6000 | 0.5786 | 1.3222 |
| 1.451 | 33.02 | 7000 | 0.5726 | 1.3768 |
| 1.3912 | 37.74 | 8000 | 0.5518 | 1.2497 |
| 1.3617 | 42.45 | 9000 | 0.5352 | 1.2694 |
| 1.3113 | 47.17 | 10000 | 0.5228 | 1.2781 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-300m-japanese --dataset mozilla-foundation/common_voice_8_0 --config ja --split test --log_outputs
```
2. To evaluate on `speech-recognition-community-v2/dev_data` with split `validation`
```bash
python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-300m-japanese --dataset speech-recognition-community-v2/dev_data --config de --split validation --chunk_length_s 5.0 --stride_length_s 1.0
``` | {"language": ["ja"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "ja", "mozilla-foundation/common_voice_8_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "XLS-R-300-m", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "ja"}, "metrics": [{"type": "wer", "value": 95.82, "name": "Test WER"}, {"type": "cer", "value": 23.64, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "de"}, "metrics": [{"type": "wer", "value": 100.0, "name": "Test WER"}, {"type": "cer", "value": 30.99, "name": "Test CER"}, {"type": "cer", "value": 30.37, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "ja"}, "metrics": [{"type": "cer", "value": 34.42, "name": "Test CER"}]}]}]} | AndrewMcDowell/wav2vec2-xls-r-300m-japanese | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"ja",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | AndrewNLP/redditDepressionPropensityClassifiers | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Andrey1989/bert-multilingual-finetuned-ner | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Andrey1989/mbart-finetuned-en-to-kk | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbert-finetuned-ner
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1264
- Precision: 0.9305
- Recall: 0.9375
- F1: 0.9340
- Accuracy: 0.9700
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.301 | 1.0 | 625 | 0.1756 | 0.8843 | 0.9067 | 0.8953 | 0.9500 |
| 0.1259 | 2.0 | 1250 | 0.1248 | 0.9285 | 0.9335 | 0.9310 | 0.9688 |
| 0.0895 | 3.0 | 1875 | 0.1264 | 0.9305 | 0.9375 | 0.9340 | 0.9700 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
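A hedged usage sketch (not part of the original card), running the checkpoint as a NER pipeline; the Latvian example sentence is a placeholder, chosen because the card reports results on the wikiann `lv` split:
```python
from transformers import pipeline

ner = pipeline("token-classification",
               model="Andrey1989/mbert-finetuned-ner",
               aggregation_strategy="simple")
print(ner("Barack Obama dzīvo Vašingtonā."))
```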
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["wikiann"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "mbert-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "wikiann", "type": "wikiann", "args": "lv"}, "metrics": [{"type": "precision", "value": 0.9304986338797814, "name": "Precision"}, {"type": "recall", "value": 0.9375430144528561, "name": "Recall"}, {"type": "f1", "value": 0.9340075419952005, "name": "F1"}, {"type": "accuracy", "value": 0.9699674740348558, "name": "Accuracy"}]}]}]} | Andrey1989/mbert-finetuned-ner | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:wikiann",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Andrey1989/mbert-finetuned-ner_2 | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Andrey1989/mt5-small-finetuned-mlsum-es | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Andrey1989/mt5-small-finetuned-mlsum-fr | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Andrey78/my_model_nlp | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Andrey78/my_nlp_test_model | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
token-classification | transformers | This model is a fine-tuned version of bert-base-greek-uncased with a token-classification head that predicts, for each token, which punctuation mark follows it. The model lowercases all input and removes all Greek diacritics during preprocessing. For information on the pretraining of the Greek BERT model, please refer to [Greek Bert](https://huggingface.co/nlpaueb/bert-base-greek-uncased-v1)
# Finetuning Parameters
Epochs: 5
Maximum Sequence Length: 512
Learning Rate: 4e-5
Batch Size: 16
Finetuning Data:
Greek Europarl data available at: https://opus.nlpl.eu/Europarl.php
Tokens: 44.1M
Sentences: 1.6M
Punctuation Points Recognised:
'.' (0) : Full stop
',' (1) : Comma
';' (2) : Greek question mark
'-' (3) : Dash
':' (4) : Colon
'0' (5) : No punctuation point is following
# Load Finetuned Model
~~~
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("Andrianos/bert-base-greek-punctuation-prediction-finetuned")
model = AutoModelForTokenClassification.from_pretrained("Andrianos/bert-base-greek-punctuation-prediction-finetuned")
~~~
# Using the Model
If you are interested in trying out examples and finding the limitations of the model, the starter Python code to use the model is available at [Github Repo](https://github.com/Andrian0s/Greek-Transformer-Model-Punctuation-Prediction)
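For a quick start without the repo, here is a minimal inference sketch (not part of the original card), assuming the `id2label` mapping stored in the model config follows the table above; special tokens are printed as-is:
~~~
import torch

text = "τι θα φας για βραδινο"  # example from the table below
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label_id in zip(tokens, predicted_ids):
    print(token, model.config.id2label[label_id])
~~~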
# Examples of the Model
Using the demo script, we tried out a few brief examples and show the results below
Input | Input with Predictions
------------- | -------------
"προσεκτικά στον δρομο θα σε περιμενω" | "προσεκτικα στον δρομο, θα σε περιμενω"
"τι θα φας για βραδινο" | "τι θα φας για βραδινο;"
"κυριε μαυροκέφαλε εσπασε η κεραια του διαδικτυου θα παρω τηλεφωνο την cyta" | "κυριε μαυροκεφαλε, εσπασε η κεραια του διαδικτυου. θα παρω τηλεφωνο την cyta."
"κυριε μαυροκεφαλε εσπασεν η αντεννα του ιντερνετ εννα πιαω τηλεφωνον την cyta" | "κυριε μαυροκεφαλε, εσπασεν η αντεννα του ιντερνετ. εννα πιαω τηλεφωνον την cyta."
The last two examples have identical meanings: the first is written in standard Modern Greek and the second in the Cypriot dialect. It is interesting to see that the model performs similarly, even though some words and suffixes are out of vocabulary.
# Further Performance Improvements
We would be happy to hear from people who have finetuned this model on more diverse datasets, as we expect this to increase robustness.
Within our research, improvements to the consistency of punctuation prediction have been shown to be possible with techniques such as sliding windows (during inference) for larger documents, weighted loss, and ensembling of different models. Make sure to cite our work if you extend our models with the aforementioned techniques.
# Author
This model is further work based on the winning submission to Shared Task 2 (Sentence End and Punctuation Prediction in NLG Text) at SwissText 2021.
The winning submission is entitled "UZH OnPoint at Swisstext-2021: Sentence End and Punctuation Prediction in NLG Text Through Ensembling of Different Transformers" and appears in the Proceedings of the 6th SwissText, held online. It is publicly available at http://ceur-ws.org/Vol-2957/sepp_paper2.pdf
If you use the model, please cite the following:
~~~
@inproceedings{ST2021-OnPoint,
  title={UZH OnPoint at Swisstext-2021: Sentence End and Punctuation Prediction in NLG Text Through Ensembling of Different Transformers},
  author={Michail, Andrianos and Wehrli, Silvan and Bucková, Terézia},
  booktitle={Proceedings of the 1st Shared Task on Sentence End and Punctuation Prediction in NLG Text (SEPPNLG 2021) at SwissText 2021},
  year={2021}
}
~~~
Model Finetuned and released by Andrianos Michail with resources provided by [Department of Computational Linguistics, University of Zurich](https://www.cl.uzh.ch/en.html)
| Github: [@Andrian0s](https://github.com/Andrian0s) | LinkedIn: [amichail2](https://www.linkedin.com/in/amichail2/) | {} | Andrianos/bert-base-greek-punctuation-prediction-finetuned | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers | Named Entity Recognition (Token Classification Head) for the Serbian / Croatian languages.
Abbreviation|Description
-|-
O|Outside of a named entity
B-MIS |Beginning of a miscellaneous entity right after another miscellaneous entity
I-MIS | Miscellaneous entity
B-PER |Beginning of a person's name right after another person's name
B-DERIV-PER| Beginning of a derivative that describes a relation to a person
I-PER |Person's name
B-ORG |Beginning of an organization right after another organization
I-ORG |Organization
B-LOC |Beginning of a location right after another location
I-LOC |Location | {"language": ["hr", "sr", "multilingual"], "license": "apache-2.0", "datasets": ["hr500k"], "widget": [{"text": "Moje ime je Aleksandar i zivim u Beogradu pored Vlade Republike Srbije"}]} | Andrija/M-bert-NER | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"hr",
"sr",
"multilingual",
"dataset:hr500k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | ```
from transformers import RobertaTokenizerFast

tokenizer = RobertaTokenizerFast.from_pretrained('Andrija/RobertaFastBPE', bos_token="<s>", eos_token="</s>")
encoded = tokenizer('Stručnjaci te bolnice, predvođeni dr Alisom Lim')
# {'input_ids': [0, 47541, 34632, 603, 24817, 16, 27540, 6768, 2350, 2803, 3991, 2733, 81, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
tokenizer.decode(encoded['input_ids'])
# <s>Stručnjaci te bolnice, predvođeni dr Alisom Lim</s>
``` | {} | Andrija/RobertaFastBPE | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers | # Transformer language model for Croatian and Serbian
Trained for three epochs (9.6 mil. steps) on 43 GB of Croatian and Serbian text.
Leipzig Corpus, OSCAR, srWac, hrWac, cc100-hr and cc100-sr datasets
Validation examples used for perplexity: 1,620,487 sentences
Perplexity: 6.02
Start loss: 8.6
Final loss: 2.0
Thoughts: the model could be trained further; the training had not stagnated.
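A hedged usage sketch (not part of the original card), querying the masked LM with the widget sentence from the card metadata:
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="Andrija/SRoBERTa-F")
for prediction in fill("Ovo je početak <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))
```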
| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `Andrija/SRoBERTa-F` | 80M | Fifth | Leipzig Corpus, OSCAR, srWac, hrWac, cc100-hr and cc100-sr (43 GB of text) | | {"language": ["hr", "sr", "multilingual"], "license": "apache-2.0", "tags": ["masked-lm"], "datasets": ["oscar", "srwac", "leipzig", "cc100", "hrwac"], "widget": [{"text": "Ovo je po\u010detak <mask>."}]} | Andrija/SRoBERTa-F | null | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"masked-lm",
"hr",
"sr",
"multilingual",
"dataset:oscar",
"dataset:srwac",
"dataset:leipzig",
"dataset:cc100",
"dataset:hrwac",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers | Named Entity Recognition (Token Classification Head) for the Serbian / Croatian languages.
Abbreviation|Description
-|-
O|Outside of a named entity
B-MIS |Beginning of a miscellaneous entity right after another miscellaneous entity
I-MIS | Miscellaneous entity
B-PER |Beginning of a person's name right after another person's name
B-DERIV-PER| Beginning of a derivative that describes a relation to a person
I-PER |Person's name
B-ORG |Beginning of an organization right after another organization
I-ORG |Organization
B-LOC |Beginning of a location right after another location
I-LOC |Location | {"language": ["hr", "sr", "multilingual"], "license": "apache-2.0", "datasets": ["hr500k"], "widget": [{"text": "Moje ime je Aleksandar i zivim u Beogradu pored Vlade Republike Srbije"}]} | Andrija/SRoBERTa-L-NER | null | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"hr",
"sr",
"multilingual",
"dataset:hr500k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers | # Transformer language model for Croatian and Serbian
Trained for two epochs (500k steps) on 6 GB of Croatian and Serbian text.
Leipzig, OSCAR and srWac datasets
| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `Andrija/SRoBERTa-L` | 80M | Third | Leipzig Corpus, OSCAR and srWac (6 GB of text) | | {"language": ["hr", "sr", "multilingual"], "license": "apache-2.0", "tags": ["masked-lm"], "datasets": ["oscar", "srwac", "leipzig"], "widget": [{"text": "Ovo je po\u010detak <mask>."}]} | Andrija/SRoBERTa-L | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"masked-lm",
"hr",
"sr",
"multilingual",
"dataset:oscar",
"dataset:srwac",
"dataset:leipzig",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers | Named Entity Recognition (Token Classification Head) for the Serbian / Croatian languages.
Abbreviation|Description
-|-
O|Outside of a named entity
B-MIS |Beginning of a miscellaneous entity right after another miscellaneous entity
I-MIS | Miscellaneous entity
B-PER |Beginning of a person's name right after another person's name
B-DERIV-PER| Beginning of a derivative that describes a relation to a person
I-PER |Person's name
B-ORG |Beginning of an organization right after another organization
I-ORG |Organization
B-LOC |Beginning of a location right after another location
I-LOC |Location | {"language": ["hr", "sr", "multilingual"], "license": "apache-2.0", "datasets": ["hr500k"], "widget": [{"text": "Moje ime je Aleksandar i zivim u Beogradu pored Vlade Republike Srbije"}]} | Andrija/SRoBERTa-NER | null | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"hr",
"sr",
"multilingual",
"dataset:hr500k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers | {} | Andrija/SRoBERTa-NLP | null | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
token-classification | transformers |
Named Entity Recognition (Token Classification Head) for the Serbian / Croatian languages.
Abbreviation|Description
-|-
O|Outside of a named entity
B-MIS |Beginning of a miscellaneous entity right after another miscellaneous entity
I-MIS | Miscellaneous entity
B-PER |Beginning of a person's name right after another person's name
B-DERIV-PER| Beginning of a derivative that describes a relation to a person
I-PER |Person's name
B-ORG |Beginning of an organization right after another organization
I-ORG |Organization
B-LOC |Beginning of a location right after another location
I-LOC |Location | {"language": ["hr", "sr", "multilingual"], "license": "apache-2.0", "datasets": ["hr500k"], "widget": [{"text": "Moje ime je Aleksandar i zivim u Beogradu pored Vlade Republike Srbije"}]} | Andrija/SRoBERTa-XL-NER | null | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"hr",
"sr",
"multilingual",
"dataset:hr500k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |