Dataset preview columns: modelId (string, length 5 to 139) | author (string, length 2 to 42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-22 00:45:16) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 570 classes) | tags (list, length 1 to 4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-22 00:43:28) | card (string, length 11 to 1.01M)
modelId: infinitejoy/wav2vec2-large-xls-r-300m-romansh-sursilvan
author: infinitejoy | last_modified: 2022-03-24T11:51:18Z | downloads: 5 | likes: 0
library_name: transformers | pipeline_tag: automatic-speech-recognition | createdAt: 2022-03-02T23:29:05Z
tags: ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "rm-sursilv", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"]
---
language:
- rm-sursilv
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
- rm-sursilv
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Romansh Sursilvan
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: rm-sursilv
metrics:
- name: Test WER
type: wer
value: 19.816
- name: Test CER
type: cer
value: 4.153
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-romansh-sursilvan
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - RM-SURSILV dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2163
- Wer: 0.1981
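The `Wer` value above is a word error rate (0.1981, i.e. roughly 19.8% of reference words are substituted, deleted, or inserted). As a minimal sketch of the definition, not part of this repository, WER is a word-level edit distance normalised by reference length:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance, normalised by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(substitution, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)
```

In practice the reported score comes from the standard `wer` evaluation metric; this helper only illustrates the computation.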
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 120.0
- mixed_precision_training: Native AMP
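The `linear` scheduler with warmup configured above ramps the learning rate up to its peak of 7e-05 over the first 2,000 steps and then decays it linearly to zero. A minimal sketch (the helper name and the explicit total-step argument are illustrative, not part of the training code):

```python
def linear_schedule_with_warmup(step: int, warmup_steps: int,
                                total_steps: int, peak_lr: float) -> float:
    """Linearly ramp up to peak_lr over warmup_steps, then decay linearly to 0."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```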
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 1.1004 | 23.81 | 2000 | 0.3710 | 0.4191 |
| 0.7002 | 47.62 | 4000 | 0.2342 | 0.2562 |
| 0.5573 | 71.43 | 6000 | 0.2175 | 0.2177 |
| 0.4799 | 95.24 | 8000 | 0.2109 | 0.1987 |
| 0.4511 | 119.05 | 10000 | 0.2164 | 0.1975 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
modelId: infinitejoy/wav2vec2-large-xls-r-300m-slovenian
author: infinitejoy | last_modified: 2022-03-24T11:49:25Z | downloads: 9 | likes: 0
library_name: transformers | pipeline_tag: automatic-speech-recognition | createdAt: 2022-03-02T23:29:05Z
tags: ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "sl", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"]
---
language:
- sl
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
- sl
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Slovenian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: sl
metrics:
- name: Test WER
type: wer
value: 18.97
- name: Test CER
type: cer
value: 4.534
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: sl
metrics:
- name: Test WER
type: wer
value: 55.048
- name: Test CER
type: cer
value: 22.739
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: sl
metrics:
- name: Test WER
type: wer
value: 54.81
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-slovenian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - SL dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2093
- Wer: 0.1907
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.785 | 12.5 | 1000 | 0.7465 | 0.6812 |
| 0.8989 | 25.0 | 2000 | 0.2495 | 0.2732 |
| 0.7118 | 37.5 | 3000 | 0.2126 | 0.2284 |
| 0.6367 | 50.0 | 4000 | 0.2049 | 0.2049 |
| 0.5763 | 62.5 | 5000 | 0.2116 | 0.2055 |
| 0.5196 | 75.0 | 6000 | 0.2111 | 0.1910 |
| 0.4949 | 87.5 | 7000 | 0.2131 | 0.1931 |
| 0.4797 | 100.0 | 8000 | 0.2093 | 0.1907 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
modelId: joe5campbell/Horovod_Tweet_Sentiment_1k_3eps
author: joe5campbell | last_modified: 2022-03-24T11:48:32Z | downloads: 3 | likes: 0
library_name: transformers | pipeline_tag: text-classification | createdAt: 2022-03-24T11:48:21Z
tags: ["transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"]
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Horovod_Tweet_Sentiment_1k_3eps
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Horovod_Tweet_Sentiment_1k_3eps
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6961535
- Train Accuracy: 0.49375
- Validation Loss: 0.6676211
- Validation Accuracy: 0.64375
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning_rate': 0.0003, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
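The optimizer dictionary above describes a standard Adam update (learning rate 3e-4, betas 0.9/0.999, epsilon 1e-08) with gradient-norm clipping at 1.0. A scalar sketch of a single Adam step with these constants, for illustration only and not the Keras implementation:

```python
import math

def adam_step(param, grad, m, v, t, lr=3e-4, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter; t is the 1-based step count."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad * grad   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (math.sqrt(v_hat) + eps)
    return param, m, v
```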
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.717013 | 0.46562502 | 0.73462963 | 0.515625 | 0 |
| 0.70586157 | 0.5078125 | 0.6937375 | 0.484375 | 1 |
| 0.6961535 | 0.49375 | 0.6676211 | 0.64375 | 2 |
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.6.0
- Tokenizers 0.11.6
modelId: JustAdvanceTechonology/bert-fine-tuned-medical-insurance-ner
author: JustAdvanceTechonology | last_modified: 2022-03-24T11:33:03Z | downloads: 5 | likes: 4
library_name: transformers | pipeline_tag: token-classification | createdAt: 2022-03-24T10:20:14Z
tags: ["transformers", "tf", "bert", "token-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"]
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: JustAdvanceTechonology/bert-fine-tuned-medical-insurance-ner
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# JustAdvanceTechonology/bert-fine-tuned-medical-insurance-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0269
- Validation Loss: 0.0551
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2631, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
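Because `power` is 1.0, the `PolynomialDecay` schedule configured above reduces to linear interpolation from the initial to the final learning rate over `decay_steps`. A sketch with the same constants (illustrative helper, not the Keras class):

```python
def polynomial_decay(step: int, initial_lr: float = 2e-05, decay_steps: int = 2631,
                     end_lr: float = 0.0, power: float = 1.0) -> float:
    """Polynomial learning-rate decay; with power=1.0 this is linear interpolation."""
    step = min(step, decay_steps)
    return (initial_lr - end_lr) * (1.0 - step / decay_steps) ** power + end_lr
```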
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1775 | 0.0646 | 0 |
| 0.0454 | 0.0580 | 1 |
| 0.0269 | 0.0551 | 2 |
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.5.0
- Datasets 1.18.3
- Tokenizers 0.11.6
modelId: joe5campbell/Horovod_Tweet_Sentiment_1k_5eps
author: joe5campbell | last_modified: 2022-03-24T11:01:59Z | downloads: 4 | likes: 0
library_name: transformers | pipeline_tag: text-classification | createdAt: 2022-03-24T11:01:49Z
tags: ["transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"]
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Horovod_Tweet_Sentiment_1k_5eps
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Horovod_Tweet_Sentiment_1k_5eps
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5216092
- Train Accuracy: 0.784375
- Validation Loss: 0.92405033
- Validation Accuracy: 0.4875
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning_rate': 0.0003, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.7129049 | 0.50937504 | 0.7314203 | 0.490625 | 0 |
| 0.73165804 | 0.47343752 | 0.6929074 | 0.484375 | 1 |
| 0.6827939 | 0.55 | 0.6864271 | 0.50625 | 2 |
| 0.66076773 | 0.5578125 | 0.60817575 | 0.69687504 | 3 |
| 0.5216092 | 0.784375 | 0.92405033 | 0.4875 | 4 |
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.6.0
- Tokenizers 0.11.6
modelId: niksmer/RoBERTa-RILE
author: niksmer | last_modified: 2022-03-24T09:19:40Z | downloads: 6 | likes: 0
library_name: transformers | pipeline_tag: text-classification | createdAt: 2022-03-02T23:29:05Z
tags: ["transformers", "pytorch", "roberta", "text-classification", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"]
---
license: mit
metrics:
- accuracy
- precision
- recall
model-index:
- name: RoBERTa-RILE
results: []
widget:
- text: "Russia must end the war."
- text: "Democratic institutions must be supported."
- text: "The state must fight political corruption."
- text: "Our energy economy must be nationalised."
- text: "We must increase social spending."
---
# RoBERTa-RILE
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on data from the [Manifesto Project](https://manifesto-project.wzb.eu/).
## Model description
This model was trained on 115,943 manually annotated sentences to classify text into one of three political categories: "neutral", "left", "right".
## Intended uses & limitations
The model output reproduces the limitations of the dataset in terms of country coverage, time span, domain definitions and potential biases of the annotators - as any supervised machine learning model would. Applying the model to other types of data (other types of texts, countries etc.) will reduce performance.
```python
from transformers import pipeline
import pandas as pd
classifier = pipeline(
task="text-classification",
model="niksmer/RoBERTa-RILE")
# Load text data you want to classify
text = pd.read_csv("example.csv")["text_you_want_to_classify"].to_list()
# Inference
output = classifier(text)
# Print output
pd.DataFrame(output).head()
```
## Training and evaluation data
RoBERTa-RILE was trained on the English-speaking subset of the [Manifesto Project Dataset (MPDS2021a)](https://manifesto-project.wzb.eu/datasets): 115,943 sentences from 163 political manifestos published between 1992 and 2020 in 7 English-speaking countries (Australia, Canada, Ireland, New Zealand, South Africa, United Kingdom, United States).
| Country | Count manifestos | Count sentences | Time span |
|----------------|------------------|-----------------|--------------------|
| Australia | 18 | 14,887 | 2010-2016 |
| Ireland | 23 | 24,966 | 2007-2016 |
| Canada | 14 | 12,344 | 2004-2008 & 2015 |
| New Zealand | 46 | 35,079 | 1993-2017 |
| South Africa | 29 | 13,334 | 1994-2019 |
| USA | 9 | 13,188 | 1992 & 2004-2020 |
| United Kingdom | 34 | 30,936 | 1997-2019 |
Canadian manifestos between 2004 and 2008 are used as test data.
The Manifesto Project manually annotates individual sentences from political party manifestos into over 50 main categories; see the [codebook](https://manifesto-project.wzb.eu/down/papers/handbook_2021_version_5.pdf) for the exact definition of each category. It has also created a validated left-right scale, the rile index, to aggregate manifestos into a standardized, one-dimensional political space from left to right based on saliency theory.
RoBERTa-RILE classifies texts based on the rile index.
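Given per-sentence predictions in this card's label scheme (0 = neutral, 1 = left, 2 = right), a rile-style score for a manifesto can be aggregated as the right share minus the left share. This is a simplified sketch; the official rile index is defined over specific Manifesto Project category groups rather than classifier output:

```python
def rile_index(labels: list[int]) -> float:
    """Percentage-point difference between 'right' and 'left' sentence shares.

    labels: 0 = neutral, 1 = left, 2 = right (this card's label scheme).
    """
    right = sum(1 for label in labels if label == 2)
    left = sum(1 for label in labels if label == 1)
    return 100.0 * (right - left) / len(labels)
```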
### Train data
The training data was slightly imbalanced.
| Label | Description | Count |
|------------|--------------|--------|
| 0 | neutral | 52,277 |
| 1 | left | 37,106 |
| 2 | right | 26,560 |
Overall count: 115,943
### Validation data
The validation set was created by random sampling.
| Label | Description | Count |
|------------|--------------|--------|
| 0 | neutral | 9,198 |
| 1 | left | 6,637 |
| 2 | right | 4,626 |
Overall count: 20,461
### Test data
The test dataset contains ten Canadian manifestos published between 2004 and 2008.
| Label | Description | Count |
|------------|--------------|--------|
| 0 | neutral | 3,881 |
| 1 | left | 2,611 |
| 2 | right | 1,838 |
Overall count: 8,330
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
```
training_args = TrainingArguments(
warmup_ratio=0.05,
weight_decay=0.1,
learning_rate=1e-05,
fp16 = True,
evaluation_strategy="epoch",
num_train_epochs=5,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
save_strategy="no",
logging_dir='logs',
logging_strategy= 'steps',
logging_steps=10,
push_to_hub=True,
hub_strategy="end")
```
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1-micro | F1-macro | F1-weighted | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:--------:|:-----------:|:---------:|:------:|
| 0.7442 | 1.0 | 1812 | 0.6827 | 0.7120 | 0.7120 | 0.7007 | 0.7126 | 0.7120 | 0.7120 |
| 0.6447 | 2.0 | 3624 | 0.6618 | 0.7281 | 0.7281 | 0.7169 | 0.7281 | 0.7281 | 0.7281 |
| 0.5467 | 3.0 | 5436 | 0.6657 | 0.7309 | 0.7309 | 0.7176 | 0.7295 | 0.7309 | 0.7309 |
| 0.5179 | 4.0 | 7248 | 0.6654 | 0.7346 | 0.7346 | 0.7240 | 0.7345 | 0.7346 | 0.7346 |
| 0.4787 | 5.0 | 9060 | 0.6757 | 0.7350 | 0.7350 | 0.7241 | 0.7347 | 0.7350 | 0.7350 |
### Validation evaluation
| Model | Micro F1-Score | Macro F1-Score | Weighted F1-Score |
|----------------|----------------|----------------|-------------------|
| RoBERTa-RILE | 0.74 | 0.72 | 0.73 |
### Test evaluation
| Model | Micro F1-Score | Macro F1-Score | Weighted F1-Score |
|----------------|----------------|----------------|-------------------|
| RoBERTa-RILE | 0.69 | 0.67 | 0.69 |
### Evaluation per category
| Label | Validation F1-Score | Test F1-Score |
|-----------------------------|---------------------|---------------|
| neutral | 0.77 | 0.74 |
| left | 0.73 | 0.65 |
| right | 0.67 | 0.62 |
### Evaluation based on saliency theory
Saliency theory is a theory for analysing political text data. In short, parties tend to write about policies on which they believe they are seen as competent, and voters tend to assign policy competence in line with a party's assumed ideology. You can therefore analyse the share of policies parties write about in their manifestos to estimate party ideology.
The Manifesto Project introduced the rile index for exactly this kind of analysis. For a quick overview, check [this](https://manifesto-project.wzb.eu/down/tutorials/main-dataset.html#measuring-parties-left-right-positions).
In the following plot, the predicted and original rile indices are shown per manifesto in the test dataset. Overall, the Pearson correlation between the predicted and original rile indices is 0.95. As an alternative, you can use [ManiBERT](https://huggingface.co/niksmer/ManiBERT).

### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu102
- Datasets 1.8.0
- Tokenizers 0.10.3
modelId: buvnswrn/daml-t5-pretrain
author: buvnswrn | last_modified: 2022-03-24T09:08:34Z | downloads: 3 | likes: 0
library_name: transformers | pipeline_tag: translation | createdAt: 2022-03-24T07:11:08Z
tags: ["transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "translation", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"]
---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- imdb
model-index:
- name: daml-t5-pretrain-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# daml-t5-pretrain-imdb
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
modelId: niksmer/ManiBERT
author: niksmer | last_modified: 2022-03-24T09:03:13Z | downloads: 4 | likes: 0
library_name: transformers | pipeline_tag: text-classification | createdAt: 2022-03-02T23:29:05Z
tags: ["transformers", "pytorch", "roberta", "text-classification", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"]
---
license: mit
metrics:
- accuracy
- precision
- recall
model-index:
- name: ManiBERT
results: []
widget:
- text: "Russia must end the war."
- text: "Democratic institutions must be supported."
- text: "The state must fight political corruption."
- text: "Our energy economy must be nationalised."
- text: "We must increase social spending."
---
# ManiBERT
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on data from the [Manifesto Project](https://manifesto-project.wzb.eu/).
## Model description
This model was trained on 115,943 manually annotated sentences to classify text into one of 56 political categories (the full list appears in the evaluation table below).
## Intended uses & limitations
The model output reproduces the limitations of the dataset in terms of country coverage, time span, domain definitions and potential biases of the annotators - as any supervised machine learning model would. Applying the model to other types of data (other types of texts, countries etc.) will reduce performance.
```python
from transformers import pipeline
import pandas as pd
classifier = pipeline(
task="text-classification",
model="niksmer/ManiBERT")
# Load text data you want to classify
text = pd.read_csv("example.csv")["text_you_want_to_classify"].to_list()
# Inference
output = classifier(text)
# Print output
pd.DataFrame(output).head()
```
## Train Data
ManiBERT was trained on the English-speaking subset of the [Manifesto Project Dataset (MPDS2021a)](https://manifesto-project.wzb.eu/datasets): 115,943 sentences from 163 political manifestos published between 1992 and 2020 in 7 English-speaking countries (Australia, Canada, Ireland, New Zealand, South Africa, United Kingdom, United States).
| Country | Count manifestos | Count sentences | Time span |
|----------------|------------------|-----------------|--------------------|
| Australia | 18 | 14,887 | 2010-2016 |
| Ireland | 23 | 24,966 | 2007-2016 |
| Canada | 14 | 12,344 | 2004-2008 & 2015 |
| New Zealand | 46 | 35,079 | 1993-2017 |
| South Africa | 29 | 13,334 | 1994-2019 |
| USA | 9 | 13,188 | 1992 & 2004-2020 |
| United Kingdom | 34 | 30,936 | 1997-2019 |
Canadian manifestos between 2004 and 2008 are used as test data.
The resulting datasets are highly imbalanced; see the evaluation table below.
## Evaluation
| Description | Label | Count Train Data | Count Validation Data | Count Test Data | Validation F1-Score | Test F1-Score |
|-------------------------------------------------------------------|-------|------------------|-----------------------|-----------------|---------------------|---------------|
| Foreign Special Relationships: Positive | 0 | 545 | 96 | 60 | 0.43 | 0.45 |
| Foreign Special Relationships: Negative | 1 | 66 | 14 | 22 | 0.22 | 0.09 |
| Anti-Imperialism | 2 | 93 | 16 | 1 | 0.16 | 0.00 |
| Military: Positive | 3 | 1,969 | 356 | 159 | 0.69 | 0.63 |
| Military: Negative | 4 | 489 | 89 | 52 | 0.59 | 0.63 |
| Peace | 5 | 418 | 80 | 49 | 0.57 | 0.64 |
| Internationalism: Positive | 6 | 2,401 | 417 | 404 | 0.60 | 0.54 |
| European Community/Union or Latin America Integration: Positive | 7 | 930 | 156 | 20 | 0.58 | 0.32 |
| Internationalism: Negative | 8 | 209 | 40 | 57 | 0.28 | 0.05 |
| European Community/Union or Latin America Integration: Negative | 9 | 520 | 81 | 0 | 0.39 | - |
| Freedom and Human Rights | 10 | 2,196 | 389 | 76 | 0.50 | 0.34 |
| Democracy | 11 | 3,045 | 534 | 206 | 0.53 | 0.51 |
| Constitutionalism: Positive | 12 | 259 | 48 | 12 | 0.34 | 0.22 |
| Constitutionalism: Negative | 13 | 380 | 72 | 2 | 0.34 | 0.00 |
| Decentralisation: Positive | 14 | 2,791 | 481 | 331 | 0.49 | 0.45 |
| Centralisation: Positive | 15 | 150 | 33 | 71 | 0.11 | 0.00 |
| Governmental and Administrative Efficiency | 16 | 3,905 | 711 | 105 | 0.50 | 0.32 |
| Political Corruption | 17 | 900 | 186 | 234 | 0.59 | 0.55 |
| Political Authority | 18 | 3,488 | 627 | 300 | 0.51 | 0.39 |
| Free Market Economy | 19 | 1,768 | 309 | 53 | 0.40 | 0.16 |
| Incentives: Positive | 20 | 3,100 | 544 | 81 | 0.52 | 0.28 |
| Market Regulation | 21 | 3,562 | 616 | 210 | 0.50 | 0.36 |
| Economic Planning | 22 | 533 | 93 | 67 | 0.31 | 0.12 |
| Corporatism/ Mixed Economy | 23 | 193 | 32 | 23 | 0.28 | 0.33 |
| Protectionism: Positive | 24 | 633 | 103 | 180 | 0.44 | 0.22 |
| Protectionism: Negative | 25 | 723 | 118 | 149 | 0.52 | 0.40 |
| Economic Goals | 26 | 817 | 139 | 148 | 0.05 | 0.00 |
| Keynesian Demand Management | 27 | 160 | 25 | 9 | 0.00 | 0.00 |
| Economic Growth: Positive | 28 | 3,142 | 607 | 374 | 0.53 | 0.30 |
| Technology and Infrastructure: Positive | 29 | 8,643 | 1,529 | 339 | 0.71 | 0.56 |
| Controlled Economy | 30 | 567 | 96 | 94 | 0.47 | 0.16 |
| Nationalisation | 31 | 832 | 157 | 27 | 0.56 | 0.16 |
| Economic Orthodoxy | 32 | 1,721 | 287 | 184 | 0.55 | 0.48 |
| Marxist Analysis: Positive | 33 | 148 | 33 | 0 | 0.20 | - |
| Anti-Growth Economy and Sustainability | 34 | 2,676 | 452 | 250 | 0.43 | 0.33 |
| Environmental Protection | 35 | 6,731 | 1,163 | 934 | 0.70 | 0.67 |
| Culture: Positive | 36 | 2,082 | 358 | 92 | 0.69 | 0.56 |
| Equality: Positive | 37 | 6,630 | 1,126 | 361 | 0.57 | 0.43 |
| Welfare State Expansion | 38 | 13,486 | 2,405 | 990 | 0.72 | 0.61 |
| Welfare State Limitation | 39 | 926 | 151 | 2 | 0.45 | 0.00 |
| Education Expansion | 40 | 7,191 | 1,324 | 274 | 0.78 | 0.63 |
| Education Limitation | 41 | 154 | 27 | 1 | 0.17 | 0.00 |
| National Way of Life: Positive | 42 | 2,105 | 385 | 395 | 0.48 | 0.34 |
| National Way of Life: Negative | 43 | 743 | 147 | 2 | 0.27 | 0.00 |
| Traditional Morality: Positive | 44 | 1,375 | 234 | 19 | 0.55 | 0.14 |
| Traditional Morality: Negative | 45 | 291 | 54 | 38 | 0.30 | 0.23 |
| Law and Order | 46 | 5,582 | 949 | 381 | 0.72 | 0.71 |
| Civic Mindedness: Positive | 47 | 1,348 | 229 | 27 | 0.45 | 0.28 |
| Multiculturalism: Positive | 48 | 2,006 | 355 | 71 | 0.61 | 0.35 |
| Multiculturalism: Negative | 49 | 144 | 31 | 7 | 0.33 | 0.00 |
| Labour Groups: Positive | 50 | 3,856 | 707 | 57 | 0.64 | 0.14 |
| Labour Groups: Negative | 51 | 208 | 35 | 0 | 0.44 | - |
| Agriculture and Farmers | 52 | 2,996 | 490 | 130 | 0.67 | 0.56 |
| Middle Class and Professional Groups | 53 | 271 | 38 | 12 | 0.38 | 0.40 |
| Underprivileged Minority Groups | 54 | 1,417 | 252 | 82 | 0.34 | 0.33 |
| Non-economic Demographic Groups | 55 | 2,429 | 435 | 106 | 0.42 | 0.24 |
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
```
training_args = TrainingArguments(
warmup_ratio=0.05,
weight_decay=0.1,
learning_rate=5e-05,
fp16 = True,
evaluation_strategy="epoch",
num_train_epochs=5,
per_device_train_batch_size=16,
overwrite_output_dir=True,
per_device_eval_batch_size=16,
save_strategy="no",
logging_dir='logs',
logging_strategy= 'steps',
logging_steps=10,
push_to_hub=True,
hub_strategy="end")
```
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1-micro | F1-macro | F1-weighted | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:--------:|:-----------:|:---------:|:------:|
| 1.7638 | 1.0 | 1812 | 1.6471 | 0.5531 | 0.5531 | 0.3354 | 0.5368 | 0.5531 | 0.5531 |
| 1.4501 | 2.0 | 3624 | 1.5167 | 0.5807 | 0.5807 | 0.3921 | 0.5655 | 0.5807 | 0.5807 |
| 1.0638 | 3.0 | 5436 | 1.5017 | 0.5893 | 0.5893 | 0.4240 | 0.5789 | 0.5893 | 0.5893 |
| 0.9263 | 4.0 | 7248 | 1.5173 | 0.5975 | 0.5975 | 0.4499 | 0.5901 | 0.5975 | 0.5975 |
| 0.7859 | 5.0 | 9060 | 1.5574 | 0.5978 | 0.5978 | 0.4564 | 0.5903 | 0.5978 | 0.5978 |
### Overall evaluation
| Type | Micro F1-Score | Macro F1-Score | Weighted F1-Score |
|----------------|----------------|----------------|-------------------|
| Validation | 0.60 | 0.46 | 0.59 |
| Test | 0.48 | 0.30 | 0.47 |
### Evaluation based on saliency theory
Saliency theory is a theory for analysing political text data. In short, parties tend to write about policies on which they believe they are seen as competent, and voters tend to assign policy competence in line with a party's assumed ideology. You can therefore analyse the share of policies parties write about in their manifestos to estimate party ideology.
The Manifesto Project introduced the rile index for exactly this kind of analysis. For a quick overview, check [this](https://manifesto-project.wzb.eu/down/tutorials/main-dataset.html#measuring-parties-left-right-positions).
In the following plot, the predicted and original rile indices are shown per manifesto in the test dataset. Overall, the Pearson correlation between the predicted and original rile indices is 0.95. As an alternative, you can use [RoBERTa-RILE](https://huggingface.co/niksmer/RoBERTa-RILE).

### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu102
- Datasets 1.8.0
- Tokenizers 0.10.3
modelId: tartuNLP/liv4ever-hugging-mt
author: tartuNLP | last_modified: 2022-03-24T07:33:01Z | downloads: 5 | likes: 0
library_name: transformers | pipeline_tag: translation | createdAt: 2022-03-24T01:38:25Z
tags: ["transformers", "pytorch", "fsmt", "text2text-generation", "translation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"]
---
license: apache-2.0
tags:
- translation
widget:
- text: "<2li> Let us generate some Livonian text!"
---
modelId: nguyenvulebinh/iwslt-asr-wav2vec-large-4500h
author: nguyenvulebinh | last_modified: 2022-03-24T07:12:52Z | downloads: 4 | likes: 2
library_name: transformers | pipeline_tag: automatic-speech-recognition | createdAt: 2022-03-23T14:53:55Z
tags: ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "en", "dataset:common_voice", "dataset:librispeech_asr", "dataset:how2", "dataset:must-c-v1", "dataset:must-c-v2", "dataset:europarl", "dataset:tedlium", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us"]
---
language: en
datasets:
- common_voice
- librispeech_asr
- how2
- must-c-v1
- must-c-v2
- europarl
- tedlium
tags:
- audio
- automatic-speech-recognition
license: cc-by-nc-4.0
---
# Fine-Tune Wav2Vec2 large model for English ASR
### Data used for fine-tuning
| Dataset | Duration in hours |
|--------------|-------------------|
| Common Voice | 1667 |
| Europarl | 85 |
| How2 | 356 |
| Librispeech | 936 |
| MuST-C v1 | 407 |
| MuST-C v2 | 482 |
| Tedlium | 482 |
### Evaluation result
| Dataset | Duration in hours | WER w/o LM | WER with LM |
|-------------|-------------------|------------|-------------|
| Librispeech | 5.4 | 2.9 | 1.1 |
| Tedlium | 2.6 | 7.9 | 5.4 |
### Usage
[](https://colab.research.google.com/drive/1FAhtGvjRdHT4W0KeMdMMlL7sm6Hbe7dv?usp=sharing)
```python
from transformers.file_utils import cached_path, hf_bucket_url
from importlib.machinery import SourceFileLoader
from transformers import Wav2Vec2ProcessorWithLM
from IPython.lib.display import Audio
import torchaudio
import torch
# Load model & processor
model_name = "nguyenvulebinh/iwslt-asr-wav2vec-large-4500h"
model = SourceFileLoader("model", cached_path(hf_bucket_url(model_name,filename="model_handling.py"))).load_module().Wav2Vec2ForCTC.from_pretrained(model_name)
processor = Wav2Vec2ProcessorWithLM.from_pretrained(model_name)
# Load an example audio (16k)
audio, sample_rate = torchaudio.load(cached_path(hf_bucket_url(model_name, filename="tst_2010_sample.wav")))
input_data = processor.feature_extractor(audio[0], sampling_rate=16000, return_tensors='pt')
# Infer
output = model(**input_data)
# Output transcript without LM
print(processor.tokenizer.decode(output.logits.argmax(dim=-1)[0].detach().cpu().numpy()))
# and of course there's teams that have a lot more tada structures and among the best are recent graduates of kindergarten
# Output transcript with LM
print(processor.decode(output.logits.cpu().detach().numpy()[0], beam_width=100).text)
# and of course there are teams that have a lot more ta da structures and among the best are recent graduates of kindergarten
```
### Model Parameters License
The ASR model parameters are made available for non-commercial use only, under the terms of the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license. You can find details at: https://creativecommons.org/licenses/by-nc/4.0/legalcode
### Contact
[email protected]
[](https://twitter.com/intent/follow?screen_name=nguyenvulebinh)
|
libalabala/mt5-small-finetuned-amazon-en-es
|
libalabala
| 2022-03-24T07:00:11Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-17T08:45:00Z |
---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1997
- Rouge1: 16.7312
- Rouge2: 8.6607
- Rougel: 16.1846
- Rougelsum: 16.2411
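The Rouge1 score above is unigram-overlap F1 between the generated and reference summaries; a toy pure-Python sketch of the idea (the reported numbers come from the rouge metric library, which also handles tokenization and aggregation details):

```python
from collections import Counter

def rouge1_f(reference: str, candidate: str) -> float:
    """Unigram-overlap ROUGE-1 F1 (a simplification of the rouge metric)."""
    ref, cand = Counter(reference.split()), Counter(candidate.split())
    overlap = sum((ref & cand).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    recall = overlap / sum(ref.values())
    precision = overlap / sum(cand.values())
    return 2 * precision * recall / (precision + recall)

print(round(rouge1_f("the book was a great read", "a great read"), 3))  # 0.667
```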
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 7.0772 | 1.0 | 1209 | 3.3307 | 12.4644 | 4.0353 | 12.0167 | 12.0722 |
| 4.0223 | 2.0 | 2418 | 3.2257 | 15.338 | 7.0168 | 14.7769 | 14.8391 |
| 3.8018 | 3.0 | 3627 | 3.1997 | 16.7312 | 8.6607 | 16.1846 | 16.2411 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
simonnedved/codet5-base
|
simonnedved
| 2022-03-24T06:57:59Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dis2py",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-23T22:11:24Z |
---
license: apache-2.0
tags:
- dis2py
- generated_from_trainer
model-index:
- name: codet5-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codet5-base
This model is a fine-tuned version of [Salesforce/codet5-base](https://huggingface.co/Salesforce/codet5-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Pavithra/codeparrot-ds-sample
|
Pavithra
| 2022-03-24T06:41:47Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-23T05:12:32Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds-sample
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds-sample
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.5219
- eval_runtime: 603.3856
- eval_samples_per_second: 154.402
- eval_steps_per_second: 4.826
- epoch: 0.15
- step: 10000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
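With `gradient_accumulation_steps: 8`, the effective batch size is 32 × 8 = 256, and the learning rate follows 1000 warmup steps and then a cosine decay. A sketch of that schedule (the total step count here is a placeholder, not taken from this run):

```python
import math

def lr_at(step, base_lr=5e-4, warmup_steps=1000, total_steps=66000):
    """Linear warmup followed by cosine decay to zero (total_steps assumed)."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return base_lr * 0.5 * (1 + math.cos(math.pi * progress))

print(lr_at(500))    # 0.00025 (halfway through warmup)
print(lr_at(66000))  # 0.0 (end of schedule)
```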
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
quincyqiang/chinese-roberta-wwm-ext
|
quincyqiang
| 2022-03-24T04:58:07Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-24T04:52:35Z |
---
license: apache-2.0
---
|
Yaxin/xlm-roberta-base-yelp-mlm
|
Yaxin
| 2022-03-24T04:44:37Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"dataset:yelp_review_full",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-24T04:10:58Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- yelp_review_full
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-yelp-mlm
results:
- task:
name: Masked Language Modeling
type: fill-mask
dataset:
name: yelp_review_full yelp_review_full
type: yelp_review_full
args: yelp_review_full
metrics:
- name: Accuracy
type: accuracy
value: 0.7356223359340127
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-yelp-mlm
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the yelp_review_full dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1743
- Accuracy: 0.7356
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0
- Datasets 1.18.3
- Tokenizers 0.11.0
|
FuriouslyAsleep/unhappyZebra100
|
FuriouslyAsleep
| 2022-03-24T04:39:04Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain",
"en",
"dataset:FuriouslyAsleep/autotrain-data-techDataClassifeier",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-24T04:38:22Z |
---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- FuriouslyAsleep/autotrain-data-techDataClassifeier
co2_eq_emissions: 0.6969569001670619
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 664919631
- CO2 Emissions (in grams): 0.6969569001670619
## Validation Metrics
- Loss: 0.022509008646011353
- Accuracy: 1.0
- Precision: 1.0
- Recall: 1.0
- AUC: 1.0
- F1: 1.0
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/FuriouslyAsleep/autotrain-techDataClassifeier-664919631
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("FuriouslyAsleep/autotrain-techDataClassifeier-664919631", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("FuriouslyAsleep/autotrain-techDataClassifeier-664919631", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
rurupang/roberta-base-finetuned-sts
|
rurupang
| 2022-03-24T01:54:26Z | 25 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:klue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-22T14:13:32Z |
---
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- pearsonr
model-index:
- name: roberta-base-finetuned-sts
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: klue
type: klue
args: sts
metrics:
- name: Pearsonr
type: pearsonr
value: 0.956039443806831
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-sts
This model is a fine-tuned version of [klue/roberta-base](https://huggingface.co/klue/roberta-base) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1999
- Pearsonr: 0.9560
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearsonr |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 329 | 0.2462 | 0.9478 |
| 1.2505 | 2.0 | 658 | 0.1671 | 0.9530 |
| 1.2505 | 3.0 | 987 | 0.1890 | 0.9525 |
| 0.133 | 4.0 | 1316 | 0.2360 | 0.9548 |
| 0.0886 | 5.0 | 1645 | 0.2265 | 0.9528 |
| 0.0886 | 6.0 | 1974 | 0.2097 | 0.9518 |
| 0.0687 | 7.0 | 2303 | 0.2281 | 0.9523 |
| 0.0539 | 8.0 | 2632 | 0.2212 | 0.9542 |
| 0.0539 | 9.0 | 2961 | 0.1843 | 0.9532 |
| 0.045 | 10.0 | 3290 | 0.1999 | 0.9560 |
| 0.0378 | 11.0 | 3619 | 0.2357 | 0.9533 |
| 0.0378 | 12.0 | 3948 | 0.2134 | 0.9541 |
| 0.033 | 13.0 | 4277 | 0.2273 | 0.9540 |
| 0.03 | 14.0 | 4606 | 0.2148 | 0.9533 |
| 0.03 | 15.0 | 4935 | 0.2207 | 0.9534 |
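The Pearsonr column above measures the linear correlation between predicted and gold similarity scores; a minimal pure-Python sketch of the metric:

```python
import math

def pearsonr(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Perfectly linearly related scores correlate at 1.0.
print(round(pearsonr([1, 2, 3, 4], [2, 4, 6, 8]), 6))
```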
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
negfir/distilbert-base-uncased-finetuned-squad
|
negfir
| 2022-03-24T01:39:12Z | 40 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2200
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2789 | 1.0 | 5533 | 1.2200 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
huggingtweets/btohtoh
|
huggingtweets
| 2022-03-24T01:35:56Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-24T01:35:48Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1506402743296020484/X79Yfcx5_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">BToh</div>
<div style="text-align: center; font-size: 14px;">@btohtoh</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from BToh.
| Data | BToh |
| --- | --- |
| Tweets downloaded | 3241 |
| Retweets | 347 |
| Short tweets | 480 |
| Tweets kept | 2414 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1xnk5832/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @btohtoh's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2gdcu3k6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2gdcu3k6/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/btohtoh')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
negfir/distilbert-base-uncased-finetuned-cola
|
negfir
| 2022-03-24T00:39:00Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"tensorboard",
"bert",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-15T15:29:20Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: negfir/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# negfir/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [negfir/uncased_L-12_H-128_A-2](https://huggingface.co/negfir/uncased_L-12_H-128_A-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6077
- Validation Loss: 0.6185
- Train Matthews Correlation: 0.0
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2670, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
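With `power: 1.0` and `cycle: False`, the `PolynomialDecay` schedule above reduces to a straight linear decay from 2e-05 to 0 over 2670 steps; a sketch of the resulting learning rate:

```python
def poly_decay(step, initial=2e-05, end=0.0, decay_steps=2670, power=1.0):
    """Polynomial decay with the config above (no cycling); clamps past decay_steps."""
    step = min(step, decay_steps)
    frac = 1 - step / decay_steps
    return (initial - end) * frac ** power + end

print(poly_decay(0))     # 2e-05
print(poly_decay(2670))  # 0.0
```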
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.6116 | 0.6187 | 0.0 | 0 |
| 0.6070 | 0.6190 | 0.0 | 1 |
| 0.6077 | 0.6185 | 0.0 | 2 |
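A Matthews correlation pinned at exactly 0.0 across epochs usually means the model predicts a single class for every example; the metric can be sketched as:

```python
def matthews_corr(y_true, y_pred):
    """Matthews correlation for binary labels; defined as 0.0 when degenerate."""
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    denom = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    return (tp * tn - fp * fn) / denom if denom else 0.0

# A model that always predicts one class scores 0.0, as in the table above.
print(matthews_corr([1, 0, 1, 0], [1, 1, 1, 1]))  # 0.0
```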
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
espnet/russian_commonvoice_blstm
|
espnet
| 2022-03-24T00:02:17Z | 3 | 0 |
espnet
|
[
"espnet",
"audio",
"automatic-speech-recognition",
"ru",
"dataset:commonvoice",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-03-23T23:59:42Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: ru
datasets:
- commonvoice
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/russian_commonvoice_blstm`
This model was trained by dzeinali using commonvoice recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout fa1b865352475b744c37f70440de1cc6b257ba70
pip install -e .
cd egs2/commonvoice/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/russian_commonvoice_blstm
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Wed Mar 23 19:56:59 EDT 2022`
- python version: `3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]`
- espnet version: `espnet 0.10.6a1`
- pytorch version: `pytorch 1.8.1+cu102`
- Git hash: `fa1b865352475b744c37f70440de1cc6b257ba70`
- Commit date: `Wed Feb 16 16:42:36 2022 -0500`
## asr_blstm_specaug_num_time_mask_2_lr_0.1
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_rnn_asr_model_valid.acc.ave/test_ru|7307|71189|79.3|18.4|2.4|2.1|22.8|71.1|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_rnn_asr_model_valid.acc.ave/test_ru|7307|537025|95.0|3.0|2.0|1.1|6.1|71.1|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_rnn_asr_model_valid.acc.ave/test_ru|7307|399162|93.2|4.5|2.3|1.4|8.2|71.1|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_rnn.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_blstm_specaug_num_time_mask_2_lr_0.1
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 15
patience: 3
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - train
- loss
- min
- - valid
- loss
- min
- - train
- acc
- max
- - valid
- acc
- max
keep_nbest_models:
- 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 30
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_ru_bpe150_sp/train/speech_shape
- exp/asr_stats_raw_ru_bpe150_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_ru_bpe150_sp/valid/speech_shape
- exp/asr_stats_raw_ru_bpe150_sp/valid/text_shape.bpe
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_ru_sp/wav.scp
- speech
- sound
- - dump/raw/train_ru_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev_ru/wav.scp
- speech
- sound
- - dump/raw/dev_ru/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adadelta
optim_conf:
lr: 0.1
scheduler: null
scheduler_conf: {}
token_list:
- <blank>
- <unk>
- ▁
- е
- о
- и
- с
- м
- а
- в
- н
- д
- т
- у
- .
- я
- ы
- л
- й
- з
- п
- к
- но
- ','
- ▁в
- ра
- б
- ж
- ю
- г
- го
- ▁по
- ▁с
- ни
- ч
- х
- р
- ко
- ре
- ш
- ли
- ть
- ▁на
- ль
- ва
- ер
- ▁и
- ет
- ст
- ро
- на
- ла
- ле
- ь
- ен
- то
- ло
- да
- ка
- ▁не
- ств
- ти
- ци
- ся
- ▁за
- ▁про
- че
- ем
- ру
- же
- та
- ▁при
- ▁со
- ▁это
- ри
- ф
- ки
- бо
- ц
- ▁С
- ста
- ения
- щ
- сти
- э
- К
- О
- А
- И
- '-'
- Т
- Я
- Б
- Д
- М
- '?'
- –
- Г
- —
- '!'
- У
- ъ
- '"'
- »
- ё
- Ф
- ':'
- Х
- Ю
- F
- ;
- O
- I
- E
- R
- −
- В
- С
- ''''
- П
- C
- L
- A
- ‐
- H
- T
- G
- S
- (
- )
- B
- K
- P
- Z
- M
- Й
- X
- Ц
- Ж
- Ч
- Ш
- «
- З
- Л
- Е
- Р
- Э
- N
- Н
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
model_conf:
ctc_weight: 0.5
use_preprocessor: true
token_type: bpe
bpemodel: data/ru_token_list/bpe_unigram150/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 27
num_freq_mask: 2
apply_time_mask: true
time_mask_width_ratio_range:
- 0.0
- 0.05
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_ru_bpe150_sp/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: vgg_rnn
encoder_conf:
rnn_type: lstm
bidirectional: true
use_projection: true
num_layers: 4
hidden_size: 1024
output_size: 1024
postencoder: null
postencoder_conf: {}
decoder: rnn
decoder_conf:
num_layers: 2
hidden_size: 1024
sampling_probability: 0
att_conf:
atype: location
adim: 1024
aconv_chans: 10
aconv_filts: 100
required:
- output_dir
- token_list
version: 0.10.6a1
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
public-data/dlib_face_landmark_model
|
public-data
| 2022-03-23T22:54:12Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-23T22:52:02Z |
# dlib face landmark model
- http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
|
ydshieh/roberta-base-squad2
|
ydshieh
| 2022-03-23T22:39:25Z | 57 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"question-answering",
"en",
"dataset:squad_v2",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-23T22:29:51Z |
---
language: en
datasets:
- squad_v2
license: cc-by-4.0
---
# roberta-base for QA
NOTE: This is version 2 of the model. See [this github issue](https://github.com/deepset-ai/FARM/issues/552) from the FARM repository for an explanation of why we updated. If you'd like to use version 1, specify `revision="v1.0"` when loading the model in Transformers 3.5. For example:
```python
model_name = "deepset/roberta-base-squad2"
pipeline(model=model_name, tokenizer=model_name, revision="v1.0", task="question-answering")
```
## Overview
**Language model:** roberta-base
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
**Code:** See [example](https://github.com/deepset-ai/FARM/blob/master/examples/question_answering.py) in [FARM](https://github.com/deepset-ai/FARM/blob/master/examples/question_answering.py)
**Infrastructure**: 4x Tesla v100
## Hyperparameters
```
batch_size = 96
n_epochs = 2
base_LM_model = "roberta-base"
max_seq_len = 386
learning_rate = 3e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.2
doc_stride=128
max_query_length=64
```
## Using a distilled model instead
Please note that we have also released a distilled version of this model called [deepset/tinyroberta-squad2](https://huggingface.co/deepset/tinyroberta-squad2). The distilled model has a comparable prediction quality and runs at twice the speed of the base model.
## Performance
Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).
```
"exact": 79.87029394424324,
"f1": 82.91251169582613,
"total": 11873,
"HasAns_exact": 77.93522267206478,
"HasAns_f1": 84.02838248389763,
"HasAns_total": 5928,
"NoAns_exact": 81.79983179142137,
"NoAns_f1": 81.79983179142137,
"NoAns_total": 5945
```
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "deepset/roberta-base-squad2"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Why is model conversion important?',
'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
### In FARM
```python
from farm.modeling.adaptive_model import AdaptiveModel
from farm.modeling.tokenization import Tokenizer
from farm.infer import Inferencer
model_name = "deepset/roberta-base-squad2"
# a) Get predictions
nlp = Inferencer.load(model_name, task_type="question_answering")
QA_input = [{"questions": ["Why is model conversion important?"],
"text": "The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks."}]
res = nlp.inference_from_dicts(dicts=QA_input, rest_api_schema=True)
# b) Load model & tokenizer
model = AdaptiveModel.convert_from_transformers(model_name, device="cpu", task_type="question_answering")
tokenizer = Tokenizer.load(model_name)
```
### In haystack
For doing QA at scale (i.e. many docs instead of single paragraph), you can load the model also in [haystack](https://github.com/deepset-ai/haystack/):
```python
reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2")
# or
reader = TransformersReader(model_name_or_path="deepset/roberta-base-squad2",tokenizer="deepset/roberta-base-squad2")
```
## Authors
Branden Chan: `branden.chan [at] deepset.ai`
Timo Möller: `timo.moeller [at] deepset.ai`
Malte Pietsch: `malte.pietsch [at] deepset.ai`
Tanay Soni: `tanay.soni [at] deepset.ai`
## About us

We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)
Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
radev/xlm-roberta-base-finetuned-panx-de
|
radev
| 2022-03-23T22:27:27Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-16T22:11:53Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8593216480764853
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1345
- F1: 0.8593
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 263 | 0.1807 | 0.8065 |
| 0.2218 | 2.0 | 526 | 0.1365 | 0.8485 |
| 0.2218 | 3.0 | 789 | 0.1345 | 0.8593 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ydshieh/roberta-large-ner-english
|
ydshieh
| 2022-03-23T22:24:57Z | 36 | 2 |
transformers
|
[
"transformers",
"tf",
"roberta",
"token-classification",
"en",
"dataset:conll2003",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-23T22:13:16Z |
---
language: en
datasets:
- conll2003
widget:
- text: "My name is jean-baptiste and I live in montreal"
- text: "My name is clara and I live in berkeley, california."
- text: "My name is wolfgang and I live in berlin"
---
# roberta-large-ner-english: model fine-tuned from roberta-large for NER task
## Introduction
[roberta-large-ner-english] is an English NER model fine-tuned from roberta-large on the conll2003 dataset.
The model was validated on email/chat data and outperformed other models on this type of data specifically.
In particular, it seems to work better on entities that don't start with an upper-case letter.
## Training data
Training data was labeled as follows:
Abbreviation|Description
-|-
O |Outside of a named entity
MISC |Miscellaneous entity
PER |Person’s name
ORG |Organization
LOC |Location
To simplify, the B- and I- prefixes from the original conll2003 tags were removed.
The train and test splits of the original conll2003 were used for training, and the "validation" split for validation. This resulted in a dataset of size:
Train | Validation
-|-
17494 | 3250
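The prefix removal described above amounts to a one-line mapping over the original conll2003 BIO tags; a minimal illustration (not the exact preprocessing script used for training):

```python
def simplify_tag(tag: str) -> str:
    """Drop the B-/I- prefix from a conll2003 BIO tag; 'O' is unchanged."""
    return tag.split("-", 1)[1] if tag.startswith(("B-", "I-")) else tag

print([simplify_tag(t) for t in ["B-PER", "I-PER", "O", "B-ORG"]])
# ['PER', 'PER', 'O', 'ORG']
```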
## How to use roberta-large-ner-english with HuggingFace
##### Load roberta-large-ner-english and its sub-word tokenizer:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("Jean-Baptiste/roberta-large-ner-english")
model = AutoModelForTokenClassification.from_pretrained("Jean-Baptiste/roberta-large-ner-english")
# Process a text sample (from Wikipedia)
from transformers import pipeline
nlp = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="simple")
nlp("Apple was founded in 1976 by Steve Jobs, Steve Wozniak and Ronald Wayne to develop and sell Wozniak's Apple I personal computer")
[{'entity_group': 'ORG',
'score': 0.99381506,
'word': ' Apple',
'start': 0,
'end': 5},
{'entity_group': 'PER',
'score': 0.99970853,
'word': ' Steve Jobs',
'start': 29,
'end': 39},
{'entity_group': 'PER',
'score': 0.99981767,
'word': ' Steve Wozniak',
'start': 41,
'end': 54},
{'entity_group': 'PER',
'score': 0.99956465,
'word': ' Ronald Wayne',
'start': 59,
'end': 71},
{'entity_group': 'PER',
'score': 0.9997918,
'word': ' Wozniak',
'start': 92,
'end': 99},
{'entity_group': 'MISC',
'score': 0.99956393,
'word': ' Apple I',
'start': 102,
'end': 109}]
```
## Model performances
Model performance computed on the conll2003 validation dataset (token-level predictions):
entity|precision|recall|f1
-|-|-|-
PER|0.9914|0.9927|0.9920
ORG|0.9627|0.9661|0.9644
LOC|0.9795|0.9862|0.9828
MISC|0.9292|0.9262|0.9277
Overall|0.9740|0.9766|0.9753
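The overall F1 is the harmonic mean of the overall precision and recall, which is quick to verify from the table:

```python
def f1(precision: float, recall: float) -> float:
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(round(f1(0.9740, 0.9766), 4))  # 0.9753, matching the Overall row
```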
On a private dataset (emails, chat, informal discussion), computed on word-level predictions:
entity|precision|recall|f1
-|-|-|-
PER|0.8823|0.9116|0.8967
ORG|0.7694|0.7292|0.7487
LOC|0.8619|0.7768|0.8171
For comparison, on the same private dataset, spaCy (en_core_web_trf-3.2.0) gave:
entity|precision|recall|f1
-|-|-|-
PER|0.9146|0.8287|0.8695
ORG|0.7655|0.6437|0.6993
LOC|0.8727|0.6180|0.7236
|
bigmorning/my-gpt-model-5
|
bigmorning
| 2022-03-23T22:11:47Z | 5 | 1 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-23T22:04:49Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: my-gpt-model-5
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# my-gpt-model-5
This model is a fine-tuned version of [bigmorning/my-gpt-model-3](https://huggingface.co/bigmorning/my-gpt-model-3) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.9979
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 4.9979 | 0 |
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
huggingtweets/radagasttbrown
|
huggingtweets
| 2022-03-23T21:33:16Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-23T21:13:19Z |
---
language: en
thumbnail: http://www.huggingtweets.com/radagasttbrown/1648071147429/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1362404255798280192/yIKMf5AN_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Radagast 🌋</div>
<div style="text-align: center; font-size: 14px;">@radagasttbrown</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Radagast 🌋.
| Data | Radagast 🌋 |
| --- | --- |
| Tweets downloaded | 3228 |
| Retweets | 457 |
| Short tweets | 230 |
| Tweets kept | 2541 |
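The kept-tweet count follows from the filtering described (retweets and short tweets are dropped before fine-tuning); a quick sanity check under that assumption:

```python
downloaded, retweets, short_tweets = 3228, 457, 230
kept = downloaded - retweets - short_tweets
print(kept)  # 2541, matching the table
```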
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1b1t67ko/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @radagasttbrown's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/boipgvkp) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/boipgvkp/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/radagasttbrown')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
bigmorning/my-gpt-model-4
|
bigmorning
| 2022-03-23T20:00:04Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-23T19:52:49Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: my-gpt-model-4
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# my-gpt-model-4
This model is a fine-tuned version of [bigmorning/my-gpt-model-3](https://huggingface.co/bigmorning/my-gpt-model-3) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 5.0556
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 5.0556 | 0 |
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Zarkit/bert-base-multilingual-uncased-sentiment1
|
Zarkit
| 2022-03-23T19:50:26Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-23T18:58:36Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: Zarkit/bert-base-multilingual-uncased-sentiment1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Zarkit/bert-base-multilingual-uncased-sentiment1
This model is a fine-tuned version of [nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4891
- Validation Loss: 0.5448
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7980, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
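With power=1.0 and end_learning_rate=0.0, the PolynomialDecay schedule above is simply a linear ramp from the initial rate down to zero over decay_steps. A sketch of the values it produces (illustrative, not the Keras implementation itself):

```python
def polynomial_decay(step, initial_lr=2e-05, decay_steps=7980,
                     end_lr=0.0, power=1.0):
    """Keras-style PolynomialDecay value at a given step (cycle=False)."""
    step = min(step, decay_steps)
    frac = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * frac ** power + end_lr

print(polynomial_decay(0), polynomial_decay(3990), polynomial_decay(7980))
# 2e-05 1e-05 0.0
```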
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.6166 | 0.5680 | 0 |
| 0.4891 | 0.5448 | 1 |
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
kj141/distilbert-base-uncased-finetuned-squad
|
kj141
| 2022-03-23T19:48:03Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-08T22:43:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
BigSalmon/MASKGPT2
|
BigSalmon
| 2022-03-23T19:26:53Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-23T19:20:45Z |
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
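The card documents a two-field prompt format: an `original:` line containing a [MASK] slot, followed by `infill:` for the model to continue. A small helper that assembles such a prompt, assuming this is the format the model expects (the helper name is hypothetical):

```python
def build_infill_prompt(text_with_mask: str) -> str:
    """Assemble the original/infill prompt format shown above.
    Hypothetical helper; the model is expected to continue after 'infill:'."""
    assert "[MASK]" in text_with_mask, "prompt needs a [MASK] slot"
    return f"original: {text_with_mask}\ninfill:"

prompt = build_infill_prompt(
    "sports teams are profitable for owners. [MASK], their valuations "
    "experience a dramatic uptick.")
print(prompt.endswith("infill:"))  # True
```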
|
gayanin/bart-med-term-conditional-masking
|
gayanin
| 2022-03-23T19:06:03Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-23T14:24:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-med-term-conditional-masking
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-med-term-conditional-masking
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5115
- Rouge2 Precision: 0.7409
- Rouge2 Recall: 0.5343
- Rouge2 Fmeasure: 0.6025
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.6278 | 1.0 | 15827 | 0.5546 | 0.7255 | 0.5244 | 0.5908 |
| 0.5356 | 2.0 | 31654 | 0.5286 | 0.7333 | 0.5293 | 0.5966 |
| 0.4757 | 3.0 | 47481 | 0.5154 | 0.7376 | 0.532 | 0.5998 |
| 0.4337 | 4.0 | 63308 | 0.5107 | 0.7406 | 0.5342 | 0.6023 |
| 0.4045 | 5.0 | 79135 | 0.5115 | 0.7409 | 0.5343 | 0.6025 |
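The ROUGE-2 precision/recall/F-measure above are bigram-overlap statistics; a minimal sketch on a toy pair (illustrative only — the reported numbers come from the trainer's ROUGE evaluation, averaged per example):

```python
from collections import Counter

def bigrams(tokens):
    return [tuple(tokens[i:i + 2]) for i in range(len(tokens) - 1)]

def rouge2(reference: str, hypothesis: str):
    """Clipped bigram overlap -> (precision, recall, f-measure)."""
    ref = Counter(bigrams(reference.split()))
    hyp = Counter(bigrams(hypothesis.split()))
    overlap = sum((ref & hyp).values())
    p = overlap / max(sum(hyp.values()), 1)
    r = overlap / max(sum(ref.values()), 1)
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

p, r, f = rouge2("the cat sat on the mat", "the cat sat here")
print(round(f, 2))  # 0.5
```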
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
ScandinavianMrT/gpt2_ONION_prefinetune_4.0
|
ScandinavianMrT
| 2022-03-23T18:39:51Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-23T18:34:47Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2_ONION_prefinetune_4.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2_ONION_prefinetune_4.0
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 4.6484
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 153 | 4.7368 |
| No log | 2.0 | 306 | 4.6732 |
| No log | 3.0 | 459 | 4.6527 |
| 4.8529 | 4.0 | 612 | 4.6484 |
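Since the validation loss above is token-level cross-entropy, the corresponding perplexity is its exponential (a standard conversion; the trainer does not report it directly):

```python
import math

val_loss = 4.6484  # final validation loss from the table above
perplexity = math.exp(val_loss)
print(round(perplexity, 1))
```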
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
DrishtiSharma/wav2vec2-xls-r-300m-rm-sursilv-d11
|
DrishtiSharma
| 2022-03-23T18:35:27Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- rm-sursilv
license: apache-2.0
tags:
- automatic-speech-recognition
- hf-asr-leaderboard
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
metrics:
- wer
model-index:
- name: wav2vec2-xls-r-300m-rm-sursilv-d11
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: mozilla-foundation/common_voice_8_0
name: Common Voice 8
args: rm-sursilv
metrics:
- type: wer
value: 0.24094169578811844
name: Test WER
- name: Test CER
type: cer
value: 0.049832791672554284
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: rm-sursilv
metrics:
- name: Test WER
type: wer
value: NA
- name: Test CER
type: cer
value: NA
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-rm-sursilv-d11
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - RM-SURSILV dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2511
- Wer: 0.2415
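The Wer figure is the word error rate: word-level edit distance divided by reference length (CER is the same computation over characters). A minimal sketch, not the repository's eval.py:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via Levenshtein distance over word sequences."""
    ref, hyp = reference.split(), hypothesis.split()
    dp = list(range(len(hyp) + 1))  # edit distances for the previous row
    for i in range(1, len(ref) + 1):
        prev_diag, dp[0] = dp[0], i
        for j in range(1, len(hyp) + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                               # deletion
                        dp[j - 1] + 1,                           # insertion
                        prev_diag + (ref[i - 1] != hyp[j - 1]))  # substitution
            prev_diag = cur
    return dp[-1] / len(ref)

print(round(wer("a b c", "a x c"), 3))  # 0.333
```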
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-300m-rm-sursilv-d11 --dataset mozilla-foundation/common_voice_8_0 --config rm-sursilv --split test --log_outputs
```
2. To evaluate on speech-recognition-community-v2/dev_data
The Romansh-Sursilv language isn't available in speech-recognition-community-v2/dev_data.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 125.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 2.3958 | 17.44 | 1500 | 0.6808 | 0.6521 |
| 0.9663 | 34.88 | 3000 | 0.3023 | 0.3718 |
| 0.7963 | 52.33 | 4500 | 0.2588 | 0.3046 |
| 0.6893 | 69.77 | 6000 | 0.2436 | 0.2718 |
| 0.6148 | 87.21 | 7500 | 0.2521 | 0.2572 |
| 0.5556 | 104.65 | 9000 | 0.2490 | 0.2442 |
| 0.5258 | 122.09 | 10500 | 0.2515 | 0.2442 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
DrishtiSharma/wav2vec2-large-xls-r-300m-sl-with-LM-v2
|
DrishtiSharma
| 2022-03-23T18:35:22Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"sl",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- sl
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- model_for_talk
- mozilla-foundation/common_voice_8_0
- robust-speech-event
- sl
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xls-r-300m-sl-with-LM-v2
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: sl
metrics:
- name: Test WER
type: wer
value: 0.21695212999560826
- name: Test CER
type: cer
value: 0.052850080572474256
- name: Test WER (+LM)
type: wer
value: 0.14551310203484116
- name: Test CER (+LM)
type: cer
value: 0.03927566711277415
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: sl
metrics:
- name: Dev WER
type: wer
value: 0.560722380639029
- name: Dev CER
type: cer
value: 0.2279626093074681
- name: Dev WER (+LM)
type: wer
value: 0.46486802661402354
- name: Dev CER (+LM)
type: cer
value: 0.21105136194592422
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: sl
metrics:
- name: Test WER
type: wer
value: 46.69
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-sl-with-LM-v2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SL dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2855
- Wer: 0.2401
### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-sl-with-LM-v2 --dataset mozilla-foundation/common_voice_8_0 --config sl --split test --log_outputs
```
2. To evaluate on speech-recognition-community-v2/dev_data
```bash
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-sl-with-LM-v2 --dataset speech-recognition-community-v2/dev_data --config sl --split validation --chunk_length_s 10 --stride_length_s 1
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.9294 | 6.1 | 500 | 2.9712 | 1.0 |
| 2.8305 | 12.2 | 1000 | 1.7073 | 0.9479 |
| 1.4795 | 18.29 | 1500 | 0.5756 | 0.6397 |
| 1.3433 | 24.39 | 2000 | 0.4968 | 0.5424 |
| 1.1766 | 30.49 | 2500 | 0.4185 | 0.4743 |
| 1.0017 | 36.59 | 3000 | 0.3303 | 0.3578 |
| 0.9358 | 42.68 | 3500 | 0.3003 | 0.3051 |
| 0.8358 | 48.78 | 4000 | 0.3045 | 0.2884 |
| 0.7647 | 54.88 | 4500 | 0.2866 | 0.2677 |
| 0.7482 | 60.98 | 5000 | 0.2829 | 0.2585 |
| 0.6943 | 67.07 | 5500 | 0.2782 | 0.2478 |
| 0.6586 | 73.17 | 6000 | 0.2911 | 0.2537 |
| 0.6425 | 79.27 | 6500 | 0.2817 | 0.2462 |
| 0.6067 | 85.37 | 7000 | 0.2910 | 0.2436 |
| 0.5974 | 91.46 | 7500 | 0.2875 | 0.2430 |
| 0.5812 | 97.56 | 8000 | 0.2852 | 0.2396 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
DrishtiSharma/wav2vec2-large-xls-r-300m-maltese
|
DrishtiSharma
| 2022-03-23T18:35:17Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"mt",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- mt
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- model_for_talk
- mozilla-foundation/common_voice_8_0
- mt
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xls-r-300m-maltese
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: mt
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-maltese
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2994
- Wer: 0.2781
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1800
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.0174 | 9.01 | 1000 | 3.0552 | 1.0 |
| 1.0446 | 18.02 | 2000 | 0.6708 | 0.7577 |
| 0.7995 | 27.03 | 3000 | 0.4202 | 0.4770 |
| 0.6978 | 36.04 | 4000 | 0.3054 | 0.3494 |
| 0.6189 | 45.05 | 5000 | 0.2878 | 0.3154 |
| 0.5667 | 54.05 | 6000 | 0.3114 | 0.3286 |
| 0.5173 | 63.06 | 7000 | 0.3085 | 0.3021 |
| 0.4682 | 72.07 | 8000 | 0.3058 | 0.2969 |
| 0.451 | 81.08 | 9000 | 0.3146 | 0.2907 |
| 0.4213 | 90.09 | 10000 | 0.3030 | 0.2881 |
| 0.4005 | 99.1 | 11000 | 0.3001 | 0.2789 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
### Evaluation Script
```bash
python eval.py \
    --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-maltese \
    --dataset mozilla-foundation/common_voice_8_0 --config mt --split test --log_outputs
```
|
AndrewMcDowell/wav2vec2-xls-r-300m-german-de
|
AndrewMcDowell
| 2022-03-23T18:35:11Z | 36 | 2 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"de",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- de
license: apache-2.0
tags:
- automatic-speech-recognition
- de
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - German
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: de
metrics:
- name: Test WER
type: wer
value: 20.16
- name: Test CER
type: cer
value: 5.06
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: de
metrics:
- name: Test WER
type: wer
value: 39.79
- name: Test CER
type: cer
value: 15.02
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: de
metrics:
- name: Test WER
type: wer
value: 47.95
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment.
eval results:
WER: 0.20161578657865786
CER: 0.05062357805269733
-->
# XLS-R-300M - German
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - DE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1768
- Wer: 0.2016
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 3.4
- mixed_precision_training: Native AMP
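Two quantities follow from the hyperparameters above: the effective batch size is train_batch_size × gradient_accumulation_steps, and the linear scheduler ramps the LR up over the warmup steps, then decays it linearly toward zero. A sketch of both (the total-step count of 40000 is inferred from the training log, so treat it as an assumption):

```python
def linear_schedule_lr(step, peak_lr=7.5e-05, warmup=2000, total=40000):
    """Linear warmup to peak_lr, then linear decay to 0 at `total` steps."""
    if step < warmup:
        return peak_lr * step / warmup
    return peak_lr * max(0.0, (total - step) / (total - warmup))

effective_batch = 8 * 4  # train_batch_size * gradient_accumulation_steps
print(effective_batch, linear_schedule_lr(2000) == 7.5e-05)  # 32 True
```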
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 5.7531 | 0.04 | 500 | 5.4564 | 1.0 |
| 2.9882 | 0.08 | 1000 | 3.0041 | 1.0 |
| 2.1953 | 0.13 | 1500 | 1.1723 | 0.7121 |
| 1.2406 | 0.17 | 2000 | 0.3656 | 0.3623 |
| 1.1294 | 0.21 | 2500 | 0.2843 | 0.2926 |
| 1.0731 | 0.25 | 3000 | 0.2554 | 0.2664 |
| 1.051 | 0.3 | 3500 | 0.2387 | 0.2535 |
| 1.0479 | 0.34 | 4000 | 0.2345 | 0.2512 |
| 1.0026 | 0.38 | 4500 | 0.2270 | 0.2452 |
| 0.9921 | 0.42 | 5000 | 0.2212 | 0.2353 |
| 0.9839 | 0.47 | 5500 | 0.2141 | 0.2330 |
| 0.9907 | 0.51 | 6000 | 0.2122 | 0.2334 |
| 0.9788 | 0.55 | 6500 | 0.2114 | 0.2270 |
| 0.9687 | 0.59 | 7000 | 0.2066 | 0.2323 |
| 0.9777 | 0.64 | 7500 | 0.2033 | 0.2237 |
| 0.9476 | 0.68 | 8000 | 0.2020 | 0.2194 |
| 0.9625 | 0.72 | 8500 | 0.1977 | 0.2191 |
| 0.9497 | 0.76 | 9000 | 0.1976 | 0.2175 |
| 0.9781 | 0.81 | 9500 | 0.1956 | 0.2159 |
| 0.9552 | 0.85 | 10000 | 0.1958 | 0.2191 |
| 0.9345 | 0.89 | 10500 | 0.1964 | 0.2158 |
| 0.9528 | 0.93 | 11000 | 0.1926 | 0.2154 |
| 0.9502 | 0.98 | 11500 | 0.1953 | 0.2149 |
| 0.9358 | 1.02 | 12000 | 0.1927 | 0.2167 |
| 0.941 | 1.06 | 12500 | 0.1901 | 0.2115 |
| 0.9287 | 1.1 | 13000 | 0.1936 | 0.2090 |
| 0.9491 | 1.15 | 13500 | 0.1900 | 0.2104 |
| 0.9478 | 1.19 | 14000 | 0.1931 | 0.2120 |
| 0.946 | 1.23 | 14500 | 0.1914 | 0.2134 |
| 0.9499 | 1.27 | 15000 | 0.1931 | 0.2173 |
| 0.9346 | 1.32 | 15500 | 0.1913 | 0.2105 |
| 0.9509 | 1.36 | 16000 | 0.1902 | 0.2137 |
| 0.9294 | 1.4 | 16500 | 0.1895 | 0.2086 |
| 0.9418 | 1.44 | 17000 | 0.1913 | 0.2183 |
| 0.9302 | 1.49 | 17500 | 0.1884 | 0.2114 |
| 0.9418 | 1.53 | 18000 | 0.1894 | 0.2108 |
| 0.9363 | 1.57 | 18500 | 0.1886 | 0.2132 |
| 0.9338 | 1.61 | 19000 | 0.1856 | 0.2078 |
| 0.9185 | 1.66 | 19500 | 0.1852 | 0.2056 |
| 0.9216 | 1.7 | 20000 | 0.1874 | 0.2095 |
| 0.9176 | 1.74 | 20500 | 0.1873 | 0.2078 |
| 0.9288 | 1.78 | 21000 | 0.1865 | 0.2097 |
| 0.9278 | 1.83 | 21500 | 0.1869 | 0.2100 |
| 0.9295 | 1.87 | 22000 | 0.1878 | 0.2095 |
| 0.9221 | 1.91 | 22500 | 0.1852 | 0.2121 |
| 0.924 | 1.95 | 23000 | 0.1855 | 0.2042 |
| 0.9104 | 2.0 | 23500 | 0.1858 | 0.2105 |
| 0.9284 | 2.04 | 24000 | 0.1850 | 0.2080 |
| 0.9162 | 2.08 | 24500 | 0.1839 | 0.2045 |
| 0.9111 | 2.12 | 25000 | 0.1838 | 0.2080 |
| 0.91 | 2.17 | 25500 | 0.1889 | 0.2106 |
| 0.9152 | 2.21 | 26000 | 0.1856 | 0.2026 |
| 0.9209 | 2.25 | 26500 | 0.1891 | 0.2133 |
| 0.9094 | 2.29 | 27000 | 0.1857 | 0.2089 |
| 0.9065 | 2.34 | 27500 | 0.1840 | 0.2052 |
| 0.9156 | 2.38 | 28000 | 0.1833 | 0.2062 |
| 0.8986 | 2.42 | 28500 | 0.1789 | 0.2001 |
| 0.9045 | 2.46 | 29000 | 0.1769 | 0.2022 |
| 0.9039 | 2.51 | 29500 | 0.1819 | 0.2073 |
| 0.9145 | 2.55 | 30000 | 0.1828 | 0.2063 |
| 0.9081 | 2.59 | 30500 | 0.1811 | 0.2049 |
| 0.9252 | 2.63 | 31000 | 0.1833 | 0.2086 |
| 0.8957 | 2.68 | 31500 | 0.1795 | 0.2083 |
| 0.891 | 2.72 | 32000 | 0.1809 | 0.2058 |
| 0.9023 | 2.76 | 32500 | 0.1812 | 0.2061 |
| 0.8918 | 2.8 | 33000 | 0.1775 | 0.1997 |
| 0.8852 | 2.85 | 33500 | 0.1790 | 0.1997 |
| 0.8928 | 2.89 | 34000 | 0.1767 | 0.2013 |
| 0.9079 | 2.93 | 34500 | 0.1735 | 0.1986 |
| 0.9032 | 2.97 | 35000 | 0.1793 | 0.2024 |
| 0.9018 | 3.02 | 35500 | 0.1778 | 0.2027 |
| 0.8846 | 3.06 | 36000 | 0.1776 | 0.2046 |
| 0.8848 | 3.1 | 36500 | 0.1812 | 0.2064 |
| 0.9062 | 3.14 | 37000 | 0.1800 | 0.2018 |
| 0.9011 | 3.19 | 37500 | 0.1783 | 0.2049 |
| 0.8996 | 3.23 | 38000 | 0.1810 | 0.2036 |
| 0.893 | 3.27 | 38500 | 0.1805 | 0.2056 |
| 0.897 | 3.31 | 39000 | 0.1773 | 0.2035 |
| 0.8992 | 3.36 | 39500 | 0.1804 | 0.2054 |
| 0.8987 | 3.4 | 40000 | 0.1768 | 0.2016 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test`
```bash
python ./eval.py --model_id infinitejoy/wav2vec2-large-xls-r-300m-romansh-sursilvan --dataset mozilla-foundation/common_voice_7_0 --config rm-sursilv --split test --log_outputs
```
2. To evaluate on `speech-recognition-community-v2/dev_data` with split `validation`
```bash
python ./eval.py --model_id infinitejoy/wav2vec2-large-xls-r-300m-romansh-sursilvan --dataset speech-recognition-community-v2/dev_data --config rm-sursilv --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
model: sammy786/wav2vec2-xlsr-bashkir | author: sammy786 | library: transformers | pipeline: automatic-speech-recognition | downloads: 9 | likes: 0 | created: 2022-03-02 | last modified: 2022-03-23
---
language:
- ba
license: apache-2.0
tags:
- automatic-speech-recognition
- ba
- generated_from_trainer
- hf-asr-leaderboard
- model_for_talk
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: sammy786/wav2vec2-xlsr-bashkir
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: ba
metrics:
- name: Test WER
type: wer
value: 11.32
- name: Test CER
type: cer
value: 2.34
---
# sammy786/wav2vec2-xlsr-bashkir
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - ba dataset.
It achieves the following results on the evaluation set (10 percent of the train split merged with the other and dev splits):
- Loss:
- Wer:
## Model description
"facebook/wav2vec2-xls-r-1b" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data:
Common Voice Bashkir train.tsv, dev.tsv and other.tsv
## Training procedure
To create the train dataset, all available splits were appended and a 90-10 train-evaluation split was applied.
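The append-and-split step can be sketched in plain Python (a minimal illustration, not the card's actual preprocessing; `rows` stands in for the concatenated Common Voice train, dev and other splits):

```python
import random

def train_eval_split(rows, eval_fraction=0.1, seed=13):
    """Shuffle the combined rows and carve off an evaluation fraction (90-10 by default)."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)  # deterministic shuffle, seed as in training
    n_eval = int(len(rows) * eval_fraction)
    return rows[n_eval:], rows[:n_eval]  # (train, eval)

# Example: 1000 combined utterances -> 900 train / 100 eval
train, eval_set = train_eval_split(range(1000))
```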
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000045637994662983496
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|:----:|:-------------:|:---------------:|:--------:|
| 200 | 5.387100 | 1.982867 | 1.000000 |
| 400 | 1.269800 | 0.369958 | 0.545755 |
| 600 | 0.903600 | 0.287705 | 0.465594 |
| 800 | 0.787300 | 0.235142 | 0.417091 |
| 1000 | 0.816300 | 0.206325 | 0.390534 |
| 1200 | 0.700500 | 0.197106 | 0.383987 |
| 1400 | 0.707100 | 0.179855 | 0.381368 |
| 1600 | 0.657800 | 0.181605 | 0.370593 |
| 1800 | 0.647800 | 0.168626 | 0.358767 |
| 2000 | 0.650700 | 0.164833 | 0.351483 |
| 2200 | 0.490900 | 0.168133 | 0.363309 |
| 2400 | 0.431000 | 0.161201 | 0.344350 |
| 2600 | 0.372100 | 0.160254 | 0.338280 |
| 2800 | 0.367500 | 0.150885 | 0.329687 |
| 3000 | 0.351300 | 0.154112 | 0.331392 |
| 3200 | 0.314800 | 0.147147 | 0.326700 |
| 3400 | 0.316800 | 0.142681 | 0.325090 |
| 3600 | 0.313000 | 0.138736 | 0.319553 |
| 3800 | 0.291800 | 0.138166 | 0.315570 |
| 4000 | 0.311300 | 0.135977 | 0.322894 |
| 4200 | 0.304900 | 0.128820 | 0.308627 |
| 4400 | 0.301600 | 0.129475 | 0.307440 |
| 4600 | 0.281800 | 0.131863 | 0.305967 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id sammy786/wav2vec2-xlsr-bashkir --dataset mozilla-foundation/common_voice_8_0 --config ba --split test
```
model: masapasa/xls-r-300m-it-cv8-ds13 | author: masapasa | library: transformers | pipeline: automatic-speech-recognition | downloads: 7 | likes: 1 | created: 2022-03-02 | last modified: 2022-03-23
---
language:
- it
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: ''
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8.0
type: mozilla-foundation/common_voice_8_0
args: it
metrics:
- name: Test WER
type: wer
value: 100.0
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: it
metrics:
- name: Test WER
type: wer
value: 100.0
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: it
metrics:
- name: Test WER
type: wer
value: 100.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SV-SE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3549
- Wer: 0.3827
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4129 | 5.49 | 500 | 3.3224 | 1.0 |
| 2.9323 | 10.98 | 1000 | 2.9128 | 1.0000 |
| 1.6839 | 16.48 | 1500 | 0.7740 | 0.6854 |
| 1.485 | 21.97 | 2000 | 0.5830 | 0.5976 |
| 1.362 | 27.47 | 2500 | 0.4866 | 0.4905 |
| 1.2752 | 32.96 | 3000 | 0.4240 | 0.4967 |
| 1.1957 | 38.46 | 3500 | 0.3899 | 0.4258 |
| 1.1646 | 43.95 | 4000 | 0.3597 | 0.4014 |
| 1.1265 | 49.45 | 4500 | 0.3559 | 0.3829 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
model: manifoldix/xlsr-sg-lm | author: manifoldix | library: transformers | pipeline: automatic-speech-recognition | downloads: 9 | likes: 2 | created: 2022-03-02 | last modified: 2022-03-23
---
language: gsw
tags:
- hf-asr-leaderboard
- robust-speech-event
widget:
- example_title: swiss parliament sample 1
src: https://huggingface.co/manifoldix/xlsr-sg-lm/resolve/main/07e73bcaa2ab192aea9524d72db45f34f274d1b3d5672434c462d32d44d792be.mp3
- example_title: swiss parliament sample 2
src: https://huggingface.co/manifoldix/xlsr-sg-lm/resolve/main/14a2f855363920f111c7b30e8632c19e5f340ab5031e1ed2621db39baf452ae0.mp3
model-index:
- name: XLS-R-1b Wav2Vec2 Swiss German
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
metrics:
- name: Test WER on Swiss parliament
type: wer
value: 34.6%
- name: Test WER on Swiss dialect test set
type: wer
value: 40%
---
## XLSR-1b Swiss German
Fine-tuned on the Swiss parliament dataset from FHNW v1 (70h).
Tested on the Swiss parliament test set with a WER of 34.6%.
Tested on the "Swiss German Dialects" test set with a WER of 40%.
Both test sets can be accessed here: [fhnw_datasets](https://www.cs.technik.fhnw.ch/i4ds-datasets)
The Swiss German dialect private test set has been uploaded to Hugging Face: [huggingface_swiss_dialects](https://huggingface.co/datasets/manifoldix/swg_parliament_fhnw)
model: infinitejoy/wav2vec2-large-xls-r-300m-galician | author: infinitejoy | library: transformers | pipeline: automatic-speech-recognition | downloads: 32 | likes: 0 | created: 2022-03-02 | last modified: 2022-03-23
---
language:
- gl
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- gl
- hf-asr-leaderboard
- model_for_talk
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Galician
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7.0
type: mozilla-foundation/common_voice_7_0
args: gl
metrics:
- name: Test WER
type: wer
value: 101.54
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: gl
metrics:
- name: Test WER
type: wer
value: 105.69
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: gl
metrics:
- name: Test WER
type: wer
value: 101.95
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-galician
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - GL dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1525
- Wer: 0.1542
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20.0
- mixed_precision_training: Native AMP
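The `linear` scheduler with 500 warmup steps ramps the learning rate up to its peak and then decays it linearly to zero. A minimal sketch of that schedule (the `total_steps` default of 2300 is only an estimate from the results table, roughly 115 steps per epoch over 20 epochs):

```python
def linear_schedule_lr(step, base_lr=7e-05, warmup_steps=500, total_steps=2300):
    """Learning rate at a given optimizer step: linear warmup, then linear decay to 0."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps  # ramp up from 0 to base_lr
    # decay linearly from base_lr at end of warmup down to 0 at total_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_schedule_lr(500))   # peak LR, reached exactly at end of warmup: 7e-05
print(linear_schedule_lr(2300))  # 0.0
```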
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.0067 | 4.35 | 500 | 2.9632 | 1.0 |
| 1.4939 | 8.7 | 1000 | 0.5005 | 0.4157 |
| 0.9982 | 13.04 | 1500 | 0.1967 | 0.1857 |
| 0.8726 | 17.39 | 2000 | 0.1587 | 0.1564 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
model: infinitejoy/wav2vec2-large-xls-r-300m-finnish | author: infinitejoy | library: transformers | pipeline: automatic-speech-recognition | downloads: 11 | likes: 0 | created: 2022-03-02 | last modified: 2022-03-23
---
language:
- fi
license: apache-2.0
tags:
- automatic-speech-recognition
- fi
- generated_from_trainer
- hf-asr-leaderboard
- model_for_talk
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Finnish
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: fi
metrics:
- name: Test WER
type: wer
value: 29.97
- name: Test CER
type: cer
value: NA
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-finnish
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - FI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2307
- Wer: 0.2984
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 70.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.9032 | 4.39 | 500 | 2.8768 | 1.0 |
| 1.5724 | 8.77 | 1000 | 0.5638 | 0.6438 |
| 1.1818 | 13.16 | 1500 | 0.3338 | 0.4759 |
| 1.0798 | 17.54 | 2000 | 0.2876 | 0.4086 |
| 1.0296 | 21.93 | 2500 | 0.2694 | 0.4248 |
| 1.0014 | 26.32 | 3000 | 0.2626 | 0.3733 |
| 0.9616 | 30.7 | 3500 | 0.2391 | 0.3294 |
| 0.9303 | 35.09 | 4000 | 0.2352 | 0.3218 |
| 0.9248 | 39.47 | 4500 | 0.2351 | 0.3207 |
| 0.8837 | 43.86 | 5000 | 0.2341 | 0.3103 |
| 0.8887 | 48.25 | 5500 | 0.2311 | 0.3115 |
| 0.8529 | 52.63 | 6000 | 0.2230 | 0.3001 |
| 0.8404 | 57.02 | 6500 | 0.2279 | 0.3054 |
| 0.8242 | 61.4 | 7000 | 0.2298 | 0.3006 |
| 0.8288 | 65.79 | 7500 | 0.2333 | 0.2997 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
model: emre/wav2vec2-xls-r-300m-gl-CV8 | author: emre | library: transformers | pipeline: automatic-speech-recognition | downloads: 5 | likes: 0 | created: 2022-03-02 | last modified: 2022-03-23
---
license: apache-2.0
language: gl
tags:
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-gl-CV8
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice gl
type: common_voice
args: gl
metrics:
- name: Test WER
type: wer
value: 0.208
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8.0
type: mozilla-foundation/common_voice_8_0
args: gl
metrics:
- name: Test WER
type: wer
value: 22.94
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: gl
metrics:
- name: Test WER
type: wer
value: 47.82
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: gl
metrics:
- name: Test WER
type: wer
value: 50.8
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-gl-CV8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2151
- Wer: 0.2080
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 15
- mixed_precision_training: Native AMP
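As a quick sanity check, the reported total batch size follows from the per-device batch size and gradient accumulation, and the step/epoch numbers in the results table let us back out an approximate training-set size (an estimate only):

```python
train_batch_size = 16
gradient_accumulation_steps = 2

# Effective batch size = per-device batch size x gradient accumulation steps
total_train_batch_size = train_batch_size * gradient_accumulation_steps  # 32

# From the results table, 500 optimizer steps corresponded to ~4.9 epochs,
# so roughly 102 steps per epoch, i.e. ~3265 training examples:
steps_per_epoch = 500 / 4.9
approx_train_examples = steps_per_epoch * total_train_batch_size
```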
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.9427 | 4.9 | 500 | 2.8801 | 1.0 |
| 2.1594 | 9.8 | 1000 | 0.4092 | 0.4001 |
| 0.7332 | 14.71 | 1500 | 0.2151 | 0.2080 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
model: vutankiet2901/wav2vec2-xls-r-1b-ja | author: vutankiet2901 | library: transformers | pipeline: automatic-speech-recognition | downloads: 5 | likes: 1 | created: 2022-03-02 | last modified: 2022-03-23
---
license: apache-2.0
language:
- ja
tags:
- automatic-speech-recognition
- common-voice
- hf-asr-leaderboard
- ja
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-xls-r-1b
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7.0
type: mozilla-foundation/common_voice_7_0
args: ja
metrics:
- name: Test WER (with LM)
type: wer
value: 11.77
- name: Test CER (with LM)
type: cer
value: 5.22
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8.0
type: mozilla-foundation/common_voice_8_0
args: ja
metrics:
- name: Test WER (with LM)
type: wer
value: 12.23
- name: Test CER (with LM)
type: cer
value: 5.33
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: ja
metrics:
- name: Test WER (with LM)
type: wer
value: 29.35
- name: Test CER (with LM)
type: cer
value: 16.43
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: ja
metrics:
- name: Test CER
type: cer
value: 19.48
---
## Model description
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - JA dataset.
### Benchmark WER result:
| | [COMMON VOICE 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | [COMMON VOICE 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0) |
|---|---|---|
| without LM | 16.97 | 17.95 |
| with 4-gram LM | 11.77 | 12.23 |
### Benchmark CER result:
| | [COMMON VOICE 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | [COMMON VOICE 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0) |
|---|---|---|
| without LM | 6.82 | 7.05 |
| with 4-gram LM | 5.22 | 5.33 |
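The WER and CER figures above are edit distances at the word and character level, normalized by reference length. A minimal self-contained implementation for illustration (the card's `eval.py` additionally normalizes Japanese text, e.g. with MeCab/pykakasi, before scoring):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences, via one-row dynamic programming."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (r != h)))  # substitution (free if equal)
        prev = curr
    return prev[-1]

def wer(reference, hypothesis):
    """Word error rate: word-level edit distance / number of reference words."""
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference, hypothesis):
    """Character error rate: character-level edit distance / reference length."""
    return edit_distance(reference, hypothesis) / len(reference)

print(wer("the cat sat", "the cat sat"))  # 0.0
print(wer("the cat sat", "the cat"))      # 1/3, one deleted word
```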
## Evaluation
Use the `eval.py` script to run the evaluation:
```bash
pip install mecab-python3 unidic-lite pykakasi
python eval.py --model_id vutankiet2901/wav2vec2-xls-r-1b-ja --dataset mozilla-foundation/common_voice_8_0 --config ja --split test --log_outputs
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 3.484 | 9.49 | 1500 | 1.1849 | 0.7543 | 0.4099 |
| 1.3582 | 18.98 | 3000 | 0.4320 | 0.3489 | 0.1591 |
| 1.1716 | 28.48 | 4500 | 0.3835 | 0.3175 | 0.1454 |
| 1.0951 | 37.97 | 6000 | 0.3732 | 0.3033 | 0.1405 |
| 1.04 | 47.47 | 7500 | 0.3485 | 0.2898 | 0.1360 |
| 0.9768 | 56.96 | 9000 | 0.3386 | 0.2787 | 0.1309 |
| 0.9129 | 66.45 | 10500 | 0.3363 | 0.2711 | 0.1272 |
| 0.8614 | 75.94 | 12000 | 0.3386 | 0.2676 | 0.1260 |
| 0.8092 | 85.44 | 13500 | 0.3356 | 0.2610 | 0.1240 |
| 0.7658 | 94.93 | 15000 | 0.3316 | 0.2564 | 0.1218 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
model: shahukareem/xls-r-300m-dv | author: shahukareem | library: transformers | pipeline: automatic-speech-recognition | downloads: 57 | likes: 0 | created: 2022-03-02 | last modified: 2022-03-23
---
language:
- dv
license: apache-2.0
tags:
- automatic-speech-recognition
- dv
- generated_from_trainer
- hf-asr-leaderboard
- model_for_talk
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: XLS-R-300M - Dhivehi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: dv
metrics:
- name: Test WER
type: wer
value: 21.31
- name: Test CER
type: cer
value: 3.82
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-300m-dv
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2855
- Wer: 0.2665
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.3386 | 0.66 | 400 | 1.1411 | 0.9432 |
| 0.6543 | 1.33 | 800 | 0.5099 | 0.6749 |
| 0.4646 | 1.99 | 1200 | 0.4133 | 0.5968 |
| 0.3748 | 2.65 | 1600 | 0.3534 | 0.5515 |
| 0.3323 | 3.32 | 2000 | 0.3635 | 0.5527 |
| 0.3269 | 3.98 | 2400 | 0.3587 | 0.5423 |
| 0.2984 | 4.64 | 2800 | 0.3340 | 0.5073 |
| 0.2841 | 5.31 | 3200 | 0.3279 | 0.5004 |
| 0.2664 | 5.97 | 3600 | 0.3114 | 0.4845 |
| 0.2397 | 6.63 | 4000 | 0.3174 | 0.4920 |
| 0.2332 | 7.3 | 4400 | 0.3110 | 0.4911 |
| 0.2304 | 7.96 | 4800 | 0.3123 | 0.4785 |
| 0.2134 | 8.62 | 5200 | 0.2984 | 0.4557 |
| 0.2066 | 9.29 | 5600 | 0.3013 | 0.4723 |
| 0.1951 | 9.95 | 6000 | 0.2934 | 0.4487 |
| 0.1806 | 10.61 | 6400 | 0.2802 | 0.4547 |
| 0.1727 | 11.28 | 6800 | 0.2842 | 0.4333 |
| 0.1666 | 11.94 | 7200 | 0.2873 | 0.4272 |
| 0.1562 | 12.6 | 7600 | 0.3042 | 0.4373 |
| 0.1483 | 13.27 | 8000 | 0.3122 | 0.4313 |
| 0.1465 | 13.93 | 8400 | 0.2760 | 0.4226 |
| 0.1335 | 14.59 | 8800 | 0.3112 | 0.4243 |
| 0.1293 | 15.26 | 9200 | 0.3002 | 0.4133 |
| 0.1264 | 15.92 | 9600 | 0.2985 | 0.4145 |
| 0.1179 | 16.58 | 10000 | 0.2925 | 0.4012 |
| 0.1171 | 17.25 | 10400 | 0.3127 | 0.4012 |
| 0.1141 | 17.91 | 10800 | 0.2980 | 0.3908 |
| 0.108 | 18.57 | 11200 | 0.3108 | 0.3951 |
| 0.1045 | 19.24 | 11600 | 0.3269 | 0.3908 |
| 0.1047 | 19.9 | 12000 | 0.2998 | 0.3868 |
| 0.0937 | 20.56 | 12400 | 0.2918 | 0.3875 |
| 0.0949 | 21.23 | 12800 | 0.2906 | 0.3657 |
| 0.0879 | 21.89 | 13200 | 0.2974 | 0.3731 |
| 0.0854 | 22.55 | 13600 | 0.2943 | 0.3711 |
| 0.0851 | 23.22 | 14000 | 0.2919 | 0.3580 |
| 0.0789 | 23.88 | 14400 | 0.2983 | 0.3560 |
| 0.0796 | 24.54 | 14800 | 0.3131 | 0.3544 |
| 0.0761 | 25.21 | 15200 | 0.2996 | 0.3616 |
| 0.0755 | 25.87 | 15600 | 0.2972 | 0.3506 |
| 0.0726 | 26.53 | 16000 | 0.2902 | 0.3474 |
| 0.0707 | 27.2 | 16400 | 0.3083 | 0.3480 |
| 0.0669 | 27.86 | 16800 | 0.3035 | 0.3330 |
| 0.0637 | 28.52 | 17200 | 0.2963 | 0.3370 |
| 0.0596 | 29.19 | 17600 | 0.2830 | 0.3326 |
| 0.0583 | 29.85 | 18000 | 0.2969 | 0.3287 |
| 0.0566 | 30.51 | 18400 | 0.3002 | 0.3480 |
| 0.0574 | 31.18 | 18800 | 0.2916 | 0.3296 |
| 0.0536 | 31.84 | 19200 | 0.2933 | 0.3225 |
| 0.0548 | 32.5 | 19600 | 0.2900 | 0.3179 |
| 0.0506 | 33.17 | 20000 | 0.3073 | 0.3225 |
| 0.0511 | 33.83 | 20400 | 0.2925 | 0.3275 |
| 0.0483 | 34.49 | 20800 | 0.2919 | 0.3245 |
| 0.0456 | 35.16 | 21200 | 0.2859 | 0.3105 |
| 0.0445 | 35.82 | 21600 | 0.2864 | 0.3080 |
| 0.0437 | 36.48 | 22000 | 0.2989 | 0.3084 |
| 0.04 | 37.15 | 22400 | 0.2887 | 0.3060 |
| 0.0406 | 37.81 | 22800 | 0.2870 | 0.3013 |
| 0.0397 | 38.47 | 23200 | 0.2793 | 0.3020 |
| 0.0383 | 39.14 | 23600 | 0.2955 | 0.2943 |
| 0.0345 | 39.8 | 24000 | 0.2813 | 0.2905 |
| 0.0331 | 40.46 | 24400 | 0.2845 | 0.2845 |
| 0.0338 | 41.13 | 24800 | 0.2832 | 0.2925 |
| 0.0333 | 41.79 | 25200 | 0.2889 | 0.2849 |
| 0.0325 | 42.45 | 25600 | 0.2808 | 0.2847 |
| 0.0314 | 43.12 | 26000 | 0.2867 | 0.2801 |
| 0.0288 | 43.78 | 26400 | 0.2865 | 0.2834 |
| 0.0291 | 44.44 | 26800 | 0.2863 | 0.2806 |
| 0.0269 | 45.11 | 27200 | 0.2941 | 0.2736 |
| 0.0275 | 45.77 | 27600 | 0.2897 | 0.2736 |
| 0.0271 | 46.43 | 28000 | 0.2857 | 0.2695 |
| 0.0251 | 47.1 | 28400 | 0.2881 | 0.2702 |
| 0.0243 | 47.76 | 28800 | 0.2901 | 0.2684 |
| 0.0244 | 48.42 | 29200 | 0.2849 | 0.2679 |
| 0.0232 | 49.09 | 29600 | 0.2849 | 0.2677 |
| 0.0224 | 49.75 | 30000 | 0.2855 | 0.2665 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
model: reach-vb/wav2vec2-large-xls-r-1B-common_voice7-lv-ft | author: reach-vb | library: transformers | pipeline: automatic-speech-recognition | downloads: 4 | likes: 1 | created: 2022-03-02 | last modified: 2022-03-23
---
license: apache-2.0
language:
- lv
tags:
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-1B-common_voice7-lv-ft
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: lv
metrics:
- name: Test WER
type: wer
value: 11.179
- name: Test CER
type: cer
value: 2.78
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: lv
metrics:
- name: Test WER
type: wer
value: 44.33
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: lv
metrics:
- name: Test WER
type: wer
value: 50.89
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-1B-common_voice7-lv-ft
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1582
- Wer: 0.1137
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 900
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.6292 | 5.26 | 500 | 1.5562 | 0.9263 |
| 0.1303 | 10.53 | 1000 | 0.8107 | 0.7666 |
| 0.0974 | 15.79 | 1500 | 0.5290 | 0.4979 |
| 0.0724 | 21.05 | 2000 | 0.2941 | 0.2247 |
| 0.0591 | 26.32 | 2500 | 0.2838 | 0.2125 |
| 0.0494 | 31.58 | 3000 | 0.2589 | 0.2102 |
| 0.0417 | 36.84 | 3500 | 0.1987 | 0.1760 |
| 0.0375 | 42.11 | 4000 | 0.1934 | 0.1690 |
| 0.031 | 47.37 | 4500 | 0.1630 | 0.1460 |
| 0.027 | 52.63 | 5000 | 0.1957 | 0.1447 |
| 0.0256 | 57.89 | 5500 | 0.1747 | 0.1368 |
| 0.0206 | 63.16 | 6000 | 0.1602 | 0.1299 |
| 0.0178 | 68.42 | 6500 | 0.1809 | 0.1273 |
| 0.0154 | 73.68 | 7000 | 0.1686 | 0.1216 |
| 0.0137 | 78.95 | 7500 | 0.1585 | 0.1241 |
| 0.0128 | 84.21 | 8000 | 0.1783 | 0.1278 |
| 0.011 | 89.47 | 8500 | 0.1653 | 0.1228 |
| 0.0096 | 94.74 | 9000 | 0.1620 | 0.1161 |
| 0.0091 | 100.0 | 9500 | 0.1582 | 0.1137 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3

**Model:** lgris/wav2vec2_base_10k_8khz_pt_cv7_2 · **Author:** lgris · **Pipeline:** automatic-speech-recognition (transformers) · **Last modified:** 2022-03-23 · **Created:** 2022-03-02 · **Downloads:** 7 · **Likes:** 2

---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_7_0
- pt
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: wav2vec2_base_10k_8khz_pt_cv7_2
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: pt
metrics:
- name: Test WER
type: wer
value: 36.9
- name: Test CER
type: cer
value: 14.82
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: sv
metrics:
- name: Test WER
type: wer
value: 40.53
- name: Test CER
type: cer
value: 16.95
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: pt
metrics:
- name: Test WER
type: wer
value: 37.15
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: pt
metrics:
- name: Test WER
type: wer
value: 38.95
---
# wav2vec2_base_10k_8khz_pt_cv7_2
This model is a fine-tuned version of [lgris/seasr_2022_base_10k_8khz_pt](https://huggingface.co/lgris/seasr_2022_base_10k_8khz_pt) on the Common Voice 7 Portuguese (pt) dataset.
It achieves the following results on the evaluation set:
- Loss: 76.3426
- Wer: 0.1979
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 10000
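With `train_batch_size: 8` and `gradient_accumulation_steps: 2`, gradients from two micro-batches are averaged before each optimizer step, matching a single batch of 16. A toy NumPy check of that equivalence (linear model with mean-squared error; all names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(16, 4))   # one "effective" batch of 16 examples
y = rng.normal(size=16)
w = np.zeros(4)

def grad(w, Xb, yb):
    # Gradient of mean squared error for a linear model.
    return 2 * Xb.T @ (Xb @ w - yb) / len(yb)

# Full-batch gradient (total_train_batch_size = 16)
g_full = grad(w, X, y)

# Two accumulated micro-batches of 8 (train_batch_size = 8, accumulation = 2)
g_acc = (grad(w, X[:8], y[:8]) + grad(w, X[8:], y[8:])) / 2
```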
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 189.1362 | 0.65 | 500 | 80.6347 | 0.2139 |
| 174.2587 | 1.3 | 1000 | 80.2062 | 0.2116 |
| 164.676 | 1.95 | 1500 | 78.2161 | 0.2073 |
| 176.5856 | 2.6 | 2000 | 78.8920 | 0.2074 |
| 164.3583 | 3.25 | 2500 | 77.2865 | 0.2066 |
| 161.414 | 3.9 | 3000 | 77.8888 | 0.2048 |
| 158.283 | 4.55 | 3500 | 77.3472 | 0.2033 |
| 159.2265 | 5.19 | 4000 | 79.0953 | 0.2036 |
| 156.3967 | 5.84 | 4500 | 76.6855 | 0.2029 |
| 154.2743 | 6.49 | 5000 | 77.7785 | 0.2015 |
| 156.6497 | 7.14 | 5500 | 77.1220 | 0.2033 |
| 157.3038 | 7.79 | 6000 | 76.2926 | 0.2027 |
| 162.8151 | 8.44 | 6500 | 76.7602 | 0.2013 |
| 151.8613 | 9.09 | 7000 | 77.4777 | 0.2011 |
| 153.0225 | 9.74 | 7500 | 76.5206 | 0.2001 |
| 157.52 | 10.39 | 8000 | 76.1061 | 0.2006 |
| 145.0592 | 11.04 | 8500 | 76.7855 | 0.1992 |
| 150.0066 | 11.69 | 9000 | 76.0058 | 0.1988 |
| 146.8128 | 12.34 | 9500 | 76.2853 | 0.1987 |
| 146.9148 | 12.99 | 10000 | 76.3426 | 0.1979 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0

**Model:** lgris/wav2vec2-xls-r-pt-cv7-from-bp400h · **Author:** lgris · **Pipeline:** automatic-speech-recognition (transformers) · **Last modified:** 2022-03-23 · **Created:** 2022-03-02 · **Downloads:** 5 · **Likes:** 0

---
language:
- pt
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_7_0
- pt
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
license: apache-2.0
model-index:
- name: wav2vec2-xls-r-pt-cv7-from-bp400h
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: pt
metrics:
- name: Test WER
type: wer
value: 12.13
- name: Test CER
type: cer
value: 3.68
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: sv
metrics:
- name: Test WER
type: wer
value: 28.23
- name: Test CER
type: cer
value: 12.58
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: pt
metrics:
- name: Test WER
type: wer
value: 26.58
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: pt
metrics:
- name: Test WER
type: wer
value: 26.86
---
# wav2vec2-xls-r-pt-cv7-from-bp400h
This model is a fine-tuned version of [lgris/bp_400h_xlsr2_300M](https://huggingface.co/lgris/bp_400h_xlsr2_300M) on the Common Voice 7 Portuguese (pt) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1535
- Wer: 0.1254
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4991 | 0.13 | 100 | 0.1774 | 0.1464 |
| 0.4655 | 0.26 | 200 | 0.1884 | 0.1568 |
| 0.4689 | 0.39 | 300 | 0.2282 | 0.1672 |
| 0.4662 | 0.52 | 400 | 0.1997 | 0.1584 |
| 0.4592 | 0.65 | 500 | 0.1989 | 0.1663 |
| 0.4533 | 0.78 | 600 | 0.2004 | 0.1698 |
| 0.4391 | 0.91 | 700 | 0.1888 | 0.1642 |
| 0.4655 | 1.04 | 800 | 0.1921 | 0.1624 |
| 0.4138 | 1.17 | 900 | 0.1950 | 0.1602 |
| 0.374 | 1.3 | 1000 | 0.2077 | 0.1658 |
| 0.4064 | 1.43 | 1100 | 0.1945 | 0.1596 |
| 0.3922 | 1.56 | 1200 | 0.2069 | 0.1665 |
| 0.4226 | 1.69 | 1300 | 0.1962 | 0.1573 |
| 0.3974 | 1.82 | 1400 | 0.1919 | 0.1553 |
| 0.3631 | 1.95 | 1500 | 0.1854 | 0.1573 |
| 0.3797 | 2.08 | 1600 | 0.1902 | 0.1550 |
| 0.3287 | 2.21 | 1700 | 0.1926 | 0.1598 |
| 0.3568 | 2.34 | 1800 | 0.1888 | 0.1534 |
| 0.3415 | 2.47 | 1900 | 0.1834 | 0.1502 |
| 0.3545 | 2.6 | 2000 | 0.1906 | 0.1560 |
| 0.3344 | 2.73 | 2100 | 0.1804 | 0.1524 |
| 0.3308 | 2.86 | 2200 | 0.1741 | 0.1485 |
| 0.344 | 2.99 | 2300 | 0.1787 | 0.1455 |
| 0.309 | 3.12 | 2400 | 0.1773 | 0.1448 |
| 0.312 | 3.25 | 2500 | 0.1738 | 0.1440 |
| 0.3066 | 3.38 | 2600 | 0.1727 | 0.1417 |
| 0.2999 | 3.51 | 2700 | 0.1692 | 0.1436 |
| 0.2985 | 3.64 | 2800 | 0.1732 | 0.1430 |
| 0.3058 | 3.77 | 2900 | 0.1754 | 0.1402 |
| 0.2943 | 3.9 | 3000 | 0.1691 | 0.1379 |
| 0.2813 | 4.03 | 3100 | 0.1754 | 0.1376 |
| 0.2733 | 4.16 | 3200 | 0.1639 | 0.1363 |
| 0.2592 | 4.29 | 3300 | 0.1675 | 0.1349 |
| 0.2697 | 4.42 | 3400 | 0.1618 | 0.1360 |
| 0.2538 | 4.55 | 3500 | 0.1658 | 0.1348 |
| 0.2746 | 4.67 | 3600 | 0.1674 | 0.1325 |
| 0.2655 | 4.8 | 3700 | 0.1655 | 0.1319 |
| 0.2745 | 4.93 | 3800 | 0.1665 | 0.1316 |
| 0.2617 | 5.06 | 3900 | 0.1600 | 0.1311 |
| 0.2674 | 5.19 | 4000 | 0.1623 | 0.1311 |
| 0.237 | 5.32 | 4100 | 0.1591 | 0.1315 |
| 0.2669 | 5.45 | 4200 | 0.1584 | 0.1295 |
| 0.2476 | 5.58 | 4300 | 0.1572 | 0.1285 |
| 0.2445 | 5.71 | 4400 | 0.1580 | 0.1271 |
| 0.2207 | 5.84 | 4500 | 0.1567 | 0.1269 |
| 0.2289 | 5.97 | 4600 | 0.1536 | 0.1260 |
| 0.2438 | 6.1 | 4700 | 0.1530 | 0.1260 |
| 0.227 | 6.23 | 4800 | 0.1544 | 0.1249 |
| 0.2256 | 6.36 | 4900 | 0.1543 | 0.1254 |
| 0.2184 | 6.49 | 5000 | 0.1535 | 0.1254 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3

**Model:** infinitejoy/wav2vec2-large-xls-r-300m-romanian · **Author:** infinitejoy · **Pipeline:** automatic-speech-recognition (transformers) · **Last modified:** 2022-03-23 · **Created:** 2022-03-02 · **Downloads:** 471 · **Likes:** 0

---
language:
- ro
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- model_for_talk
- mozilla-foundation/common_voice_7_0
- ro
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Romanian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: ro
metrics:
- name: Test WER
type: wer
value: 14.194
- name: Test CER
type: cer
value: 3.288
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: ro
metrics:
- name: Test WER
type: wer
value: 40.869
- name: Test CER
type: cer
value: 12.049
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: ro
metrics:
- name: Test WER
type: wer
value: 47.2
---
# wav2vec2-large-xls-r-300m-romanian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - RO dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1167
- Wer: 0.1421
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.1973 | 8.89 | 2000 | 0.4481 | 0.4849 |
| 0.6005 | 17.78 | 4000 | 0.1420 | 0.1777 |
| 0.5248 | 26.67 | 6000 | 0.1303 | 0.1651 |
| 0.4871 | 35.56 | 8000 | 0.1207 | 0.1523 |
| 0.4428 | 44.44 | 10000 | 0.1143 | 0.1425 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0

**Model:** infinitejoy/wav2vec2-large-xls-r-300m-basaa · **Author:** infinitejoy · **Pipeline:** automatic-speech-recognition (transformers) · **Last modified:** 2022-03-23 · **Created:** 2022-03-02 · **Downloads:** 10 · **Likes:** 1

---
language:
- bas
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Basaa
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: bas
metrics:
- name: Test WER
type: wer
value: 104.08
- name: Test CER
type: cer
value: 228.48
---
# wav2vec2-large-xls-r-300m-basaa
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - BAS dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5975
- Wer: 0.4981
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 200.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 2.9287 | 15.62 | 500 | 2.8774 | 1.0 |
| 1.1182 | 31.25 | 1000 | 0.6248 | 0.7131 |
| 0.8329 | 46.88 | 1500 | 0.5573 | 0.5792 |
| 0.7109 | 62.5 | 2000 | 0.5420 | 0.5683 |
| 0.6295 | 78.12 | 2500 | 0.5166 | 0.5395 |
| 0.5715 | 93.75 | 3000 | 0.5487 | 0.5629 |
| 0.5016 | 109.38 | 3500 | 0.5370 | 0.5471 |
| 0.4661 | 125.0 | 4000 | 0.5621 | 0.5395 |
| 0.423 | 140.62 | 4500 | 0.5658 | 0.5248 |
| 0.3793 | 156.25 | 5000 | 0.5921 | 0.4981 |
| 0.3651 | 171.88 | 5500 | 0.5987 | 0.4888 |
| 0.3351 | 187.5 | 6000 | 0.6017 | 0.4948 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0

**Model:** Akashpb13/xlsr_hungarian_new · **Author:** Akashpb13 · **Pipeline:** automatic-speech-recognition (transformers) · **Last modified:** 2022-03-23 · **Created:** 2022-03-02 · **Downloads:** 41 · **Likes:** 2

---
language:
- hu
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- hu
- model_for_talk
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: Akashpb13/xlsr_hungarian_new
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: hu
metrics:
- name: Test WER
type: wer
value: 0.2851621517163838
- name: Test CER
type: cer
value: 0.06112982522287432
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: hu
metrics:
- name: Test WER
type: wer
value: 0.2851621517163838
- name: Test CER
type: cer
value: 0.06112982522287432
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: hu
metrics:
- name: Test WER
type: wer
value: 47.15
---
# Akashpb13/xlsr_hungarian_new
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - hu dataset.
It achieves the following results on the evaluation set (a 10 percent split of the training data, which merges the invalidated, reported, other, and dev datasets):
- Loss: 0.197464
- Wer: 0.330094
## Model description
"facebook/wav2vec2-xls-r-300m" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data -
Common Voice Hungarian train.tsv, dev.tsv, invalidated.tsv, reported.tsv, and other.tsv
Only utterances with more upvotes than downvotes were kept, and duplicates were removed after concatenating all the datasets provided in Common Voice 7.0.
## Training procedure
For creating the train dataset, all available splits were concatenated and a 90-10 train-evaluation split was applied.
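The split code itself isn't published; below is a minimal sketch of how such a seeded 90-10 split might look. The use of seed 13 mirrors the training seed reported in this card and is an assumption on my part:

```python
import random

def split_90_10(rows, seed=13):
    """Deterministic 90-10 split after concatenating all source rows."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    cut = int(0.9 * len(rows))
    return rows[:cut], rows[cut:]

train, dev = split_90_10(range(1000))
# len(train) == 900, len(dev) == 100
```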
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000095637994662983496
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 16
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP
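The `cosine_with_restarts` schedule above warms up linearly and then follows hard cosine restarts. A sketch of the shape, modeled on the Transformers scheduler of the same name (parameter names are illustrative):

```python
import math

def cosine_with_restarts_lr(step, peak_lr, warmup, total, num_cycles=1):
    """Linear warmup, then `num_cycles` hard cosine restarts down to zero."""
    if step < warmup:
        return peak_lr * step / warmup
    progress = (step - warmup) / max(1, total - warmup)
    if progress >= 1.0:
        return 0.0
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * ((num_cycles * progress) % 1.0)))
```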
### Training results
| Step | Training Loss | Validation Loss | Wer |
|------|---------------|-----------------|----------|
| 500 | 4.785300 | 0.952295 | 0.796236 |
| 1000 | 0.535800 | 0.217474 | 0.381613 |
| 1500 | 0.258400 | 0.205524 | 0.345056 |
| 2000 | 0.202800 | 0.198680 | 0.336264 |
| 2500 | 0.182700 | 0.197464 | 0.330094 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id Akashpb13/xlsr_hungarian_new --dataset mozilla-foundation/common_voice_8_0 --config hu --split test
```

**Model:** abidlabs/speech-text · **Author:** abidlabs · **Pipeline:** automatic-speech-recognition (transformers) · **Last modified:** 2022-03-23 · **Created:** 2022-03-07 · **Downloads:** 7 · **Likes:** 0

---
language: en
datasets:
- common_voice
- mozilla-foundation/common_voice_6_0
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- en
- hf-asr-leaderboard
- mozilla-foundation/common_voice_6_0
- robust-speech-event
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 English by Jonatas Grosman
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice en
type: common_voice
args: en
metrics:
- name: Test WER
type: wer
value: 19.06
- name: Test CER
type: cer
value: 7.69
- name: Test WER (+LM)
type: wer
value: 14.81
- name: Test CER (+LM)
type: cer
value: 6.84
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: en
metrics:
- name: Dev WER
type: wer
value: 27.72
- name: Dev CER
type: cer
value: 11.65
- name: Dev WER (+LM)
type: wer
value: 20.85
- name: Dev CER (+LM)
type: cer
value: 11.01
---
# Wav2Vec2-Large-XLSR-53-English
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on English using the [Common Voice](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
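Conversion to 16 kHz is normally handled by `librosa` or `torchaudio` with proper filtering; purely to illustrate what a sample-rate conversion does, here is a naive linear-interpolation sketch (not a substitute for a real resampler):

```python
import numpy as np

def resample_linear(audio, orig_sr, target_sr=16_000):
    """Naive linear-interpolation resampler; real resamplers also low-pass filter."""
    n_out = int(round(len(audio) * target_sr / orig_sr))
    old_t = np.linspace(0.0, 1.0, num=len(audio), endpoint=False)
    new_t = np.linspace(0.0, 1.0, num=n_out, endpoint=False)
    return np.interp(new_t, old_t, audio)

clip_44k = np.zeros(44_100)                 # one second at 44.1 kHz
clip_16k = resample_linear(clip_44k, 44_100)  # one second at 16 kHz
```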
This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
## Usage
The model can be used directly (without a language model) as follows...
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-english")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

LANG_ID = "en"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-english"
SAMPLES = 10

test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
    batch["speech"] = speech_array
    batch["sentence"] = batch["sentence"].upper()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)

for i, predicted_sentence in enumerate(predicted_sentences):
    print("-" * 100)
    print("Reference:", test_dataset[i]["sentence"])
    print("Prediction:", predicted_sentence)
```
| Reference | Prediction |
| ------------- | ------------- |
| "SHE'LL BE ALL RIGHT." | SHE'LL BE ALL RIGHT |
| SIX | SIX |
| "ALL'S WELL THAT ENDS WELL." | ALL AS WELL THAT ENDS WELL |
| DO YOU MEAN IT? | DO YOU MEAN IT |
| THE NEW PATCH IS LESS INVASIVE THAN THE OLD ONE, BUT STILL CAUSES REGRESSIONS. | THE NEW PATCH IS LESS INVASIVE THAN THE OLD ONE BUT STILL CAUSES REGRESSION |
| HOW IS MOZILLA GOING TO HANDLE AMBIGUITIES LIKE QUEUE AND CUE? | HOW IS MOSLILLAR GOING TO HANDLE ANDBEWOOTH HIS LIKE Q AND Q |
| "I GUESS YOU MUST THINK I'M KINDA BATTY." | RUSTIAN WASTIN PAN ONTE BATTLY |
| NO ONE NEAR THE REMOTE MACHINE YOU COULD RING? | NO ONE NEAR THE REMOTE MACHINE YOU COULD RING |
| SAUCE FOR THE GOOSE IS SAUCE FOR THE GANDER. | SAUCE FOR THE GUICE IS SAUCE FOR THE GONDER |
| GROVES STARTED WRITING SONGS WHEN SHE WAS FOUR YEARS OLD. | GRAFS STARTED WRITING SONGS WHEN SHE WAS FOUR YEARS OLD |
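The predictions above come out of the script's `torch.argmax` + `batch_decode` step, i.e. greedy CTC decoding. Its collapse rule — merge consecutive duplicates, then drop blanks — can be sketched as follows (the token ids here are made up):

```python
def ctc_greedy_collapse(ids, blank_id=0):
    """Collapse repeated ids, then drop CTC blanks — what batch_decode does after argmax."""
    out, prev = [], None
    for i in ids:
        if i != prev and i != blank_id:  # keep only non-blank changes
            out.append(i)
        prev = i
    return out

# ctc_greedy_collapse([5, 5, 0, 5, 7, 7]) == [5, 5, 7]
```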
## Evaluation
1. To evaluate on `mozilla-foundation/common_voice_6_0` with split `test`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-english --dataset mozilla-foundation/common_voice_6_0 --config en --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-english --dataset speech-recognition-community-v2/dev_data --config en --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
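Both evaluation commands report WER and CER. As a reference point, word error rate is the word-level edit distance divided by the reference length; a minimal sketch (the `eval.py` script presumably relies on a library such as `jiwer` — an assumption):

```python
def edit_distance(a, b):
    """Levenshtein distance between two sequences via a rolling DP row."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (x != y)))
        prev = cur
    return prev[-1]

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance over reference word count."""
    ref = reference.split()
    return edit_distance(ref, hypothesis.split()) / len(ref)
```

Running the same function over characters instead of words gives CER.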
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{grosman2021wav2vec2-large-xlsr-53-english,
title={XLSR Wav2Vec2 English by Jonatas Grosman},
author={Grosman, Jonatas},
publisher={Hugging Face},
journal={Hugging Face Hub},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english}},
year={2021}
}
```

**Model:** infinitejoy/wav2vec2-large-xls-r-300m-kurdish · **Author:** infinitejoy · **Pipeline:** automatic-speech-recognition (transformers) · **Last modified:** 2022-03-23 · **Created:** 2022-03-02 · **Downloads:** 98 · **Likes:** 4

---
language:
- kmr
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- kmr
- model_for_talk
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Kurmanji Kurdish
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: kmr
metrics:
- name: Test WER
type: wer
value: 102.308
- name: Test CER
type: cer
value: 538.748
---
# wav2vec2-large-xls-r-300m-kurdish
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - KMR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2548
- Wer: 0.2688
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.3161 | 12.27 | 2000 | 0.4199 | 0.4797 |
| 1.0643 | 24.54 | 4000 | 0.2982 | 0.3721 |
| 0.9718 | 36.81 | 6000 | 0.2762 | 0.3333 |
| 0.8772 | 49.08 | 8000 | 0.2586 | 0.3051 |
| 0.8236 | 61.35 | 10000 | 0.2575 | 0.2865 |
| 0.7745 | 73.62 | 12000 | 0.2603 | 0.2816 |
| 0.7297 | 85.89 | 14000 | 0.2539 | 0.2727 |
| 0.7079 | 98.16 | 16000 | 0.2554 | 0.2681 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0

**Model:** shivam/wav2vec2-xls-r-hindi · **Author:** shivam · **Pipeline:** automatic-speech-recognition (transformers) · **Last modified:** 2022-03-23 · **Created:** 2022-03-02 · **Downloads:** 5 · **Likes:** 1

---
language:
- hi
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- hi
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
metrics:
- wer
- cer
model-index:
- name: shivam/wav2vec2-xls-r-hindi
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Common Voice Corpus 7.0
type: mozilla-foundation/common_voice_7_0
args: hi
metrics:
- name: Test WER
type: wer
value: 52.3
- name: Test CER
type: cer
value: 26.09
---
# shivam/wav2vec2-xls-r-hindi
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2282
- Wer: 0.6838
## Evaluation results on the Common Voice 7 "test" split (via ./eval.py)
### With LM
- WER: 52.30
- CER: 26.09
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.3155 | 3.4 | 500 | 4.5582 | 1.0 |
| 3.3369 | 6.8 | 1000 | 3.4269 | 1.0 |
| 2.1785 | 10.2 | 1500 | 1.7191 | 0.8831 |
| 1.579 | 13.6 | 2000 | 1.3604 | 0.7647 |
| 1.3773 | 17.01 | 2500 | 1.2737 | 0.7519 |
| 1.3165 | 20.41 | 3000 | 1.2457 | 0.7401 |
| 1.2274 | 23.81 | 3500 | 1.3617 | 0.7301 |
| 1.1787 | 27.21 | 4000 | 1.2068 | 0.7010 |
| 1.1467 | 30.61 | 4500 | 1.2416 | 0.6946 |
| 1.0801 | 34.01 | 5000 | 1.2312 | 0.6990 |
| 1.0709 | 37.41 | 5500 | 1.2984 | 0.7138 |
| 1.0307 | 40.81 | 6000 | 1.2049 | 0.6871 |
| 1.0003 | 44.22 | 6500 | 1.1956 | 0.6841 |
| 1.004 | 47.62 | 7000 | 1.2101 | 0.6793 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu113
- Datasets 1.18.1.dev0
- Tokenizers 0.11.0

**Model:** sammy786/wav2vec2-xlsr-romansh_vallader · **Author:** sammy786 · **Pipeline:** automatic-speech-recognition (transformers) · **Last modified:** 2022-03-23 · **Created:** 2022-03-02 · **Downloads:** 4 · **Likes:** 0

---
language:
- rm-vallader
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- model_for_talk
- mozilla-foundation/common_voice_8_0
- rm-vallader
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: sammy786/wav2vec2-xlsr-romansh_vallader
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: rm-vallader
metrics:
- name: Test WER
type: wer
value: 28.54
- name: Test CER
type: cer
value: 6.57
---
# sammy786/wav2vec2-xlsr-romansh_vallader
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - rm-vallader dataset.
It achieves the following results on the evaluation set (a 10 percent split of the training data merged with the other and dev datasets):
- Loss: 30.31
- Wer: 26.32
## Model description
"facebook/wav2vec2-xls-r-1b" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data -
Common Voice Romansh Vallader train.tsv, dev.tsv and other.tsv
## Training procedure
For creating the train dataset, all available splits were concatenated and a 90-10 train-evaluation split was applied.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000045637994662983496
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|------|---------------|-----------------|----------|
| 200 | 5.895100 | 3.136624 | 0.999713 |
| 400 | 1.545700 | 0.445069 | 0.471584 |
| 600 | 0.693900 | 0.340700 | 0.363088 |
| 800 | 0.510600 | 0.295432 | 0.289610 |
| 1000 | 0.318800 | 0.286795 | 0.281860 |
| 1200 | 0.194000 | 0.307468 | 0.274110 |
| 1400 | 0.151800 | 0.304849 | 0.264351 |
| 1600 | 0.148300 | 0.303112 | 0.263203 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id sammy786/wav2vec2-xlsr-romansh_vallader --dataset mozilla-foundation/common_voice_8_0 --config rm-vallader --split test
```
|
sammy786/wav2vec2-xlsr-breton
|
sammy786
| 2022-03-23T18:33:06Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"br",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- br
license: apache-2.0
tags:
- automatic-speech-recognition
- br
- generated_from_trainer
- hf-asr-leaderboard
- model_for_talk
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: sammy786/wav2vec2-xlsr-breton
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: br
metrics:
- name: Test WER
type: wer
value: 48.2
- name: Test CER
type: cer
value: 15.02
---
# sammy786/wav2vec2-xlsr-breton
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - br dataset.
## Model description
"facebook/wav2vec2-xls-r-1b" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data -
Common Voice Breton train.tsv, dev.tsv and other.tsv
## Training procedure
For creating the train dataset, all possible datasets were appended and a 90-10 split was used.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000045637994662983496
- train_batch_size: 8
- eval_batch_size: 32
- seed: 13
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id sammy786/wav2vec2-xlsr-breton --dataset mozilla-foundation/common_voice_8_0 --config br --split test
```
|
samitizerxu/wav2vec2-xls-r-300m-fr
|
samitizerxu
| 2022-03-23T18:33:04Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"fr",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- fr
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- common_voice
model-index:
- name: wav2vec2-cls-r-300m-fr
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: fr
metrics:
- name: Test WER
type: wer
value: 56.62
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: fr
metrics:
- name: Test WER
type: wer
value: 58.22
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-cls-r-300m-fr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - FR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6521
- Wer: 0.4330
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.6773 | 0.8 | 500 | 1.3907 | 0.9864 |
| 0.9526 | 1.6 | 1000 | 0.7760 | 0.6448 |
| 0.6418 | 2.4 | 1500 | 0.7605 | 0.6194 |
| 0.5028 | 3.2 | 2000 | 0.6516 | 0.5322 |
| 0.4133 | 4.0 | 2500 | 0.6303 | 0.5097 |
| 0.3285 | 4.8 | 3000 | 0.6422 | 0.5062 |
| 0.2764 | 5.6 | 3500 | 0.5936 | 0.4748 |
| 0.2361 | 6.4 | 4000 | 0.6486 | 0.4683 |
| 0.2049 | 7.2 | 4500 | 0.6321 | 0.4532 |
| 0.176 | 8.0 | 5000 | 0.6230 | 0.4482 |
| 0.1393 | 8.8 | 5500 | 0.6595 | 0.4403 |
| 0.1141 | 9.6 | 6000 | 0.6552 | 0.4348 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
infinitejoy/wav2vec2-large-xls-r-300m-breton
|
infinitejoy
| 2022-03-23T18:33:01Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"br",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- br
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Breton
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: br
metrics:
- name: Test WER
type: wer
value: 107.955
- name: Test CER
type: cer
value: 379.33
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-breton
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - BR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6102
- Wer: 0.4455
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.9205 | 3.33 | 500 | 2.8659 | 1.0 |
| 1.6403 | 6.67 | 1000 | 0.9440 | 0.7593 |
| 1.3483 | 10.0 | 1500 | 0.7580 | 0.6215 |
| 1.2255 | 13.33 | 2000 | 0.6851 | 0.5722 |
| 1.1139 | 16.67 | 2500 | 0.6409 | 0.5220 |
| 1.0688 | 20.0 | 3000 | 0.6245 | 0.5055 |
| 0.99 | 23.33 | 3500 | 0.6142 | 0.4874 |
| 0.9345 | 26.67 | 4000 | 0.5946 | 0.4829 |
| 0.9058 | 30.0 | 4500 | 0.6229 | 0.4704 |
| 0.8683 | 33.33 | 5000 | 0.6153 | 0.4666 |
| 0.8367 | 36.67 | 5500 | 0.5952 | 0.4542 |
| 0.8162 | 40.0 | 6000 | 0.6030 | 0.4541 |
| 0.8042 | 43.33 | 6500 | 0.5972 | 0.4485 |
| 0.7836 | 46.67 | 7000 | 0.6070 | 0.4497 |
| 0.7556 | 50.0 | 7500 | 0.6102 | 0.4455 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
infinitejoy/wav2vec2-large-xls-r-300m-basaa-cv8
|
infinitejoy
| 2022-03-23T18:32:58Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"bas",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- bas
license: apache-2.0
tags:
- automatic-speech-recognition
- bas
- generated_from_trainer
- hf-asr-leaderboard
- model_for_talk
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: XLS-R-300M - Basaa
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: bas
metrics:
- name: Test WER
type: wer
value: 38.057
- name: Test CER
type: cer
value: 11.233
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-basaa-cv8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - BAS dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4648
- Wer: 0.5472
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.9421 | 12.82 | 500 | 2.8894 | 1.0 |
| 1.1872 | 25.64 | 1000 | 0.6688 | 0.7460 |
| 0.8894 | 38.46 | 1500 | 0.4868 | 0.6516 |
| 0.769 | 51.28 | 2000 | 0.4960 | 0.6507 |
| 0.6936 | 64.1 | 2500 | 0.4781 | 0.5384 |
| 0.624 | 76.92 | 3000 | 0.4643 | 0.5430 |
| 0.5966 | 89.74 | 3500 | 0.4530 | 0.5591 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
infinitejoy/wav2vec2-large-xls-r-300m-assamese-cv8
|
infinitejoy
| 2022-03-23T18:32:56Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"as",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- as
license: apache-2.0
tags:
- as
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- model_for_talk
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: XLS-R-300M - Assamese
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: as
metrics:
- name: Test WER
type: wer
value: 65.966
- name: Test CER
type: cer
value: 22.188
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-assamese-cv8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - AS dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9814
- Wer: 0.7402
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 20.0 | 400 | 3.1447 | 1.0 |
| No log | 40.0 | 800 | 1.0074 | 0.8556 |
| 3.1278 | 60.0 | 1200 | 0.9507 | 0.7711 |
| 3.1278 | 80.0 | 1600 | 0.9730 | 0.7630 |
| 0.8247 | 100.0 | 2000 | 0.9814 | 0.7402 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
emre/wav2vec2-xls-r-300m-Tr-med-CommonVoice8
|
emre
| 2022-03-23T18:32:53Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"tr",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
language: tr
tags:
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-Tr-med-CommonVoice8
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice tr
type: common_voice
args: tr
metrics:
- name: Test WER
type: wer
value: 49.14
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-Tr-med-CommonVoice8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2556
- Wer: 0.4914
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 20
- mixed_precision_training: Native AMP
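The total train batch size of 32 above comes from accumulating gradients over 2 micro-batches of 16 before each optimizer step. A framework-agnostic sketch (illustrative numbers only, not actual gradients):

```python
def accumulate(grads_per_microbatch, steps=2):
    """Average per-micro-batch mean gradients before one optimizer step,
    so 2 micro-batches of 16 behave like one batch of 32."""
    total = [0.0] * len(grads_per_microbatch[0])
    for g in grads_per_microbatch[:steps]:
        total = [t + gi / steps for t, gi in zip(total, g)]
    return total

micro1 = [2.0, -4.0]  # mean gradient over the first 16 samples
micro2 = [6.0,  0.0]  # mean gradient over the next 16 samples
print(accumulate([micro1, micro2]))  # [4.0, -2.0] == mean over all 32
```

This is what `gradient_accumulation_steps: 2` does in the Trainer: losses are scaled per micro-batch and the optimizer steps once per accumulated batch.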
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.4876 | 6.66 | 5000 | 0.3252 | 0.5784 |
| 0.6919 | 13.32 | 10000 | 0.2720 | 0.5172 |
| 0.5919 | 19.97 | 15000 | 0.2556 | 0.4914 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
|
comodoro/wav2vec2-xls-r-300m-cs
|
comodoro
| 2022-03-23T18:32:48Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"xlsr-fine-tuning-week",
"cs",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- cs
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
- xlsr-fine-tuning-week
datasets:
- common_voice
model-index:
- name: Czech comodoro Wav2Vec2 XLSR 300M CV6.1
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 6.1
type: common_voice
args: cs
metrics:
- name: Test WER
type: wer
value: 22.2
- name: Test CER
type: cer
value: 5.1
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: cs
metrics:
- name: Test WER
type: wer
value: 66.78
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: cs
metrics:
- name: Test WER
type: wer
value: 57.52
---
# Wav2Vec2-Large-XLSR-53-Czech
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Czech using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
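Common Voice ships 48 kHz audio, and 48 kHz is exactly 3× the 16 kHz the model expects, which is why the snippets below resample. As a toy illustration of the rate change (no anti-aliasing filter, so prefer `torchaudio.transforms.Resample` for real audio):

```python
def naive_decimate(samples, factor=3):
    """Keep every `factor`-th sample: 48 kHz -> 16 kHz when factor is 3.
    Real resamplers low-pass filter first to avoid aliasing."""
    return samples[::factor]

print(len(naive_decimate(list(range(48_000)))))  # 16000
```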
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "cs", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("comodoro/wav2vec2-xls-r-300m-cs")
model = Wav2Vec2ForCTC.from_pretrained("comodoro/wav2vec2-xls-r-300m-cs")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the Czech test data of Common Voice 6.1
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "cs", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("comodoro/wav2vec2-xls-r-300m-cs")
model = Wav2Vec2ForCTC.from_pretrained("comodoro/wav2vec2-xls-r-300m-cs")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\/\"\“\„\%\”\�\–\'\`\«\»\—\’\…]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the preprocessed dataset and collect predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 22.20 %
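The WER reported above is the word-level Levenshtein distance divided by the reference word count; the `wer` metric loaded in the evaluation script computes essentially this. A minimal pure-Python version for reference:

```python
def wer(reference, hypothesis):
    """Word error rate: word-level edit distance / reference length."""
    r, h = reference.split(), hypothesis.split()
    # d[i][j] = edits needed to turn r[:i] into h[:j]
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(r)][len(h)] / len(r)

print(round(100 * wer("dobrý den světe", "dobrý ten světe"), 2))  # 33.33
```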
## Training
The Common Voice `train` and `validation` datasets were used for training.
# TODO The script used for training can be found [here](...)
|
anuragshas/wav2vec2-large-xls-r-300m-as
|
anuragshas
| 2022-03-23T18:32:45Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"robust-speech-event",
"as",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- as
license: apache-2.0
tags:
- automatic-speech-recognition
- hf-asr-leaderboard
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-300m-as
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: mozilla-foundation/common_voice_7_0
name: Common Voice 7
args: as
metrics:
- type: wer
value: 56.995
name: Test WER
- name: Test CER
type: cer
value: 20.39
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-as
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9068
- Wer: 0.6679
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.12
- num_epochs: 240
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 5.7027 | 21.05 | 400 | 3.4157 | 1.0 |
| 1.1638 | 42.1 | 800 | 1.3498 | 0.7461 |
| 0.2266 | 63.15 | 1200 | 1.6147 | 0.7273 |
| 0.1473 | 84.21 | 1600 | 1.6649 | 0.7108 |
| 0.1043 | 105.26 | 2000 | 1.7691 | 0.7090 |
| 0.0779 | 126.31 | 2400 | 1.8300 | 0.7009 |
| 0.0613 | 147.36 | 2800 | 1.8681 | 0.6916 |
| 0.0471 | 168.41 | 3200 | 1.8567 | 0.6875 |
| 0.0343 | 189.46 | 3600 | 1.9054 | 0.6840 |
| 0.0265 | 210.51 | 4000 | 1.9020 | 0.6786 |
| 0.0219 | 231.56 | 4400 | 1.9068 | 0.6679 |
### Framework versions
- Transformers 4.16.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test`
```bash
python eval.py --model_id anuragshas/wav2vec2-large-xls-r-300m-as --dataset mozilla-foundation/common_voice_7_0 --config as --split test
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "anuragshas/wav2vec2-large-xls-r-300m-as"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_7_0", "as", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
# => "জাহাজত তো তিশকুৰলৈ যাব কিন্তু জহাজিটো আহিপনে"
```
### Eval results on Common Voice 7 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 67 | 56.995 |
|
sammy786/wav2vec2-xlsr-tatar
|
sammy786
| 2022-03-23T18:32:40Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"tt",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- tt
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- model_for_talk
- mozilla-foundation/common_voice_8_0
- robust-speech-event
- tt
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: sammy786/wav2vec2-xlsr-tatar
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: tt
metrics:
- name: Test WER
type: wer
value: 16.87
- name: Test CER
type: cer
value: 3.64
---
# sammy786/wav2vec2-xlsr-tatar
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - tt dataset.
It achieves the following results on the evaluation set (a held-out 10 percent of the train, dev and other splits merged together):
- Loss: 7.66
- Wer: 7.08
## Model description
"facebook/wav2vec2-xls-r-1b" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data -
Common Voice Tatar train.tsv, dev.tsv and other.tsv
## Training procedure
For creating the train dataset, all possible datasets were appended and a 90-10 split was used.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000045637994662983496
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|-------|---------------|-----------------|----------|
| 200 | 4.849400 | 1.874908 | 0.995232 |
| 400 | 1.105700 | 0.257292 | 0.367658 |
| 600 | 0.723000 | 0.181150 | 0.250513 |
| 800 | 0.660600 | 0.167009 | 0.226078 |
| 1000 | 0.568000 | 0.135090 | 0.177339 |
| 1200 | 0.721200 | 0.117469 | 0.166413 |
| 1400 | 0.416300 | 0.115142 | 0.153765 |
| 1600 | 0.346000 | 0.105782 | 0.153963 |
| 1800 | 0.279700 | 0.102452 | 0.146149 |
| 2000 | 0.273800 | 0.095818 | 0.128468 |
| 2200 | 0.252900 | 0.102302 | 0.133766 |
| 2400 | 0.255100 | 0.096592 | 0.121316 |
| 2600 | 0.229600 | 0.091263 | 0.124561 |
| 2800 | 0.213900 | 0.097748 | 0.125687 |
| 3000 | 0.210700 | 0.091244 | 0.125422 |
| 3200 | 0.202600 | 0.084076 | 0.106284 |
| 3400 | 0.200900 | 0.093809 | 0.113238 |
| 3600 | 0.192700 | 0.082918 | 0.108139 |
| 3800 | 0.182000 | 0.084487 | 0.103371 |
| 4000 | 0.167700 | 0.091847 | 0.104960 |
| 4200 | 0.183700 | 0.085223 | 0.103040 |
| 4400 | 0.174400 | 0.083862 | 0.100589 |
| 4600 | 0.163100 | 0.086493 | 0.099728 |
| 4800 | 0.162000 | 0.081734 | 0.097543 |
| 5000 | 0.153600 | 0.077223 | 0.092974 |
| 5200 | 0.153700 | 0.086217 | 0.090789 |
| 5400 | 0.140200 | 0.093256 | 0.100457 |
| 5600 | 0.142900 | 0.086903 | 0.097742 |
| 5800 | 0.131400 | 0.083068 | 0.095225 |
| 6000 | 0.126000 | 0.086642 | 0.091252 |
| 6200 | 0.135300 | 0.083387 | 0.091186 |
| 6400 | 0.126100 | 0.076479 | 0.086352 |
| 6600 | 0.127100 | 0.077868 | 0.086153 |
| 6800 | 0.118000 | 0.083878 | 0.087676 |
| 7000 | 0.117600 | 0.085779 | 0.091054 |
| 7200 | 0.113600 | 0.084197 | 0.084233 |
| 7400 | 0.112000 | 0.078688 | 0.081319 |
| 7600 | 0.110200 | 0.082534 | 0.086087 |
| 7800 | 0.106400 | 0.077245 | 0.080988 |
| 8000 | 0.102300 | 0.077497 | 0.079332 |
| 8200 | 0.109500 | 0.079083 | 0.088339 |
| 8400 | 0.095900 | 0.079721 | 0.077809 |
| 8600 | 0.094700 | 0.079078 | 0.079730 |
| 8800 | 0.097400 | 0.078785 | 0.079200 |
| 9000 | 0.093200 | 0.077445 | 0.077015 |
| 9200 | 0.088700 | 0.078207 | 0.076617 |
| 9400 | 0.087200 | 0.078982 | 0.076485 |
| 9600 | 0.089900 | 0.081209 | 0.076021 |
| 9800 | 0.081900 | 0.078158 | 0.075757 |
| 10000 | 0.080200 | 0.078074 | 0.074498 |
| 10200 | 0.085000 | 0.078830 | 0.073373 |
| 10400 | 0.080400 | 0.078144 | 0.073373 |
| 10600 | 0.078200 | 0.077163 | 0.073902 |
| 10800 | 0.080900 | 0.076394 | 0.072446 |
| 11000 | 0.080700 | 0.075955 | 0.071585 |
| 11200 | 0.076800 | 0.077031 | 0.072313 |
| 11400 | 0.076300 | 0.077401 | 0.072777 |
| 11600 | 0.076700 | 0.076613 | 0.071916 |
| 11800 | 0.076000 | 0.076672 | 0.071916 |
| 12000 | 0.077200 | 0.076490 | 0.070989 |
| 12200 | 0.076200 | 0.076688 | 0.070856 |
| 12400 | 0.074400 | 0.076780 | 0.071055 |
| 12600 | 0.076300 | 0.076768 | 0.071320 |
| 12800 | 0.077600 | 0.076727 | 0.071055 |
| 13000 | 0.077700 | 0.076714 | 0.071254 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id sammy786/wav2vec2-xlsr-tatar --dataset mozilla-foundation/common_voice_8_0 --config tt --split test
```
|
huggingtweets/mattiasinspace
|
huggingtweets
| 2022-03-23T18:30:31Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-23T18:30:21Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1434246328788398081/M7Httz0A_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Mattias in Deep</div>
<div style="text-align: center; font-size: 14px;">@mattiasinspace</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Mattias in Deep.
| Data | Mattias in Deep |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 26 |
| Short tweets | 196 |
| Tweets kept | 3027 |
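The filtering behind these counts can be sketched as follows; the exact rules and the 10-character threshold are assumptions for illustration, and the real logic lives in the huggingtweets repo.

```python
def keep_tweet(text, short_len=10):
    """Drop retweets and very short tweets; the threshold here is illustrative."""
    return not text.startswith("RT @") and len(text) > short_len

tweets = ["RT @someone: interesting thread", "ok", "Working on a new generative model today"]
kept = [t for t in tweets if keep_tweet(t)]
assert kept == ["Working on a new generative model today"]
```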
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2r9u5eoz/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mattiasinspace's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1ua0ungm) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1ua0ungm/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mattiasinspace')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
sammy786/wav2vec2-xlsr-mongolian
|
sammy786
| 2022-03-23T18:30:27Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"mn",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- mn
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mn
- model_for_talk
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: sammy786/wav2vec2-xlsr-mongolian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: mn
metrics:
- name: Test WER
type: wer
value: 32.63
- name: Test CER
type: cer
value: 9.26
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: mn
metrics:
- name: Test WER
type: wer
value: 91.26
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: mn
metrics:
- name: Test WER
type: wer
value: 91.37
---
# sammy786/wav2vec2-xlsr-mongolian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - mn dataset.
It achieves the following results on the evaluation set (10 percent of the training data, merged with the other and dev splits):
- Loss: 31.52
- Wer: 34.1522
## Model description
"facebook/wav2vec2-xls-r-1b" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data:
Common Voice Mongolian train.tsv, dev.tsv and other.tsv
## Training procedure
To create the training dataset, all available splits were concatenated and a 90/10 train/validation split was applied.
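A deterministic split like the one described above can be sketched as below; the helper name is illustrative and the seed matches the training seed listed further down, though the author's actual splitting code is not shown in this card.

```python
import random

def train_eval_split(examples, eval_fraction=0.1, seed=13):
    """Shuffle deterministically, then hold out a fraction for evaluation."""
    rng = random.Random(seed)
    shuffled = examples[:]  # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    n_eval = int(len(shuffled) * eval_fraction)
    return shuffled[n_eval:], shuffled[:n_eval]  # (train, eval)

# Example: a 90/10 split of 100 utterance IDs.
train, evaluation = train_eval_split(list(range(100)))
assert len(train) == 90 and len(evaluation) == 10
```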
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000045637994662983496
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|:----:|:-------------:|:---------------:|:--------:|
| 200 | 4.906200 | 3.012986 | 1.000000 |
| 400 | 1.734600 | 0.704821 | 0.750497 |
| 600 | 1.132100 | 0.496223 | 0.531241 |
| 800 | 0.929300 | 0.468937 | 0.469043 |
| 1000 | 0.772300 | 0.425313 | 0.448168 |
| 1200 | 0.623900 | 0.394633 | 0.414229 |
| 1400 | 0.512400 | 0.369225 | 0.397614 |
| 1600 | 0.439900 | 0.346033 | 0.391650 |
| 1800 | 0.391300 | 0.358454 | 0.379296 |
| 2000 | 0.377000 | 0.346822 | 0.359415 |
| 2200 | 0.347500 | 0.325205 | 0.348481 |
| 2400 | 0.343600 | 0.315233 | 0.344078 |
| 2600 | 0.328000 | 0.308826 | 0.341522 |
| 2800 | 0.358200 | 0.331786 | 0.343084 |
| 3000 | 0.417200 | 0.370051 | 0.356433 |
| 3200 | 0.685300 | 0.595438 | 0.407413 |
| 3400 | 0.764100 | 0.643449 | 0.359983 |
| 3600 | 0.717100 | 0.505033 | 0.371911 |
| 3800 | 0.620900 | 0.464138 | 0.369071 |
| 4000 | 0.590700 | 0.445417 | 0.363249 |
| 4200 | 0.561000 | 0.440727 | 0.360267 |
| 4400 | 0.550600 | 0.447122 | 0.360267 |
| 4600 | 0.562100 | 0.457020 | 0.359841 |
| 4800 | 0.578800 | 0.470477 | 0.360551 |
| 5000 | 0.580400 | 0.481413 | 0.362539 |
| 5200 | 0.605500 | 0.485240 | 0.362823 |
| 5400 | 0.582900 | 0.486654 | 0.362965 |
| 5600 | 0.593900 | 0.486715 | 0.363107 |
| 5800 | 0.590900 | 0.486716 | 0.363107 |
| 6000 | 0.587200 | 0.486716 | 0.363107 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id sammy786/wav2vec2-xlsr-mongolian --dataset mozilla-foundation/common_voice_8_0 --config mn --split test
```
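For reference, the WER and CER reported above are normalized edit distances at the word and character level. A minimal sketch of both metrics (the evaluation script itself typically computes them with the `jiwer` package):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (r != h)))  # substitution
        prev = cur
    return prev[-1]

def wer(reference, hypothesis):
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference, hypothesis):
    return edit_distance(list(reference), list(hypothesis)) / len(reference)

assert wer("hello world", "hello word") == 0.5  # one substituted word out of two
```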
|
infinitejoy/wav2vec2-large-xls-r-300m-bashkir
|
infinitejoy
| 2022-03-23T18:30:18Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"ba",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- ba
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Bashkir
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: ba
metrics:
- name: Test WER
type: wer
value: 24.2
- name: Test CER
type: cer
value: 5.08
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-bashkir
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - BA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1892
- Wer: 0.2421
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 10.0
- mixed_precision_training: Native AMP
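The `linear` scheduler with 2000 warmup steps ramps the learning rate from zero to its peak, then decays it linearly back to zero over the remaining steps (40000 in total, per the training table). A sketch of the shape; the real implementation is `transformers.get_linear_schedule_with_warmup`:

```python
def linear_lr(step, base_lr=3e-4, warmup_steps=2000, total_steps=40000):
    """Linear warmup to base_lr, then linear decay to zero."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

assert linear_lr(2000) == 3e-4   # peak is reached exactly at the end of warmup
assert linear_lr(40000) == 0.0   # fully decayed at the last step
```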
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.4792 | 0.5 | 2000 | 0.4598 | 0.5404 |
| 1.449 | 1.0 | 4000 | 0.4650 | 0.5610 |
| 1.3742 | 1.49 | 6000 | 0.4001 | 0.4977 |
| 1.3375 | 1.99 | 8000 | 0.3916 | 0.4894 |
| 1.2961 | 2.49 | 10000 | 0.3641 | 0.4569 |
| 1.2714 | 2.99 | 12000 | 0.3491 | 0.4488 |
| 1.2399 | 3.48 | 14000 | 0.3151 | 0.3986 |
| 1.2067 | 3.98 | 16000 | 0.3081 | 0.3923 |
| 1.1842 | 4.48 | 18000 | 0.2875 | 0.3703 |
| 1.1644 | 4.98 | 20000 | 0.2840 | 0.3670 |
| 1.161 | 5.48 | 22000 | 0.2790 | 0.3597 |
| 1.1303 | 5.97 | 24000 | 0.2552 | 0.3272 |
| 1.0874 | 6.47 | 26000 | 0.2405 | 0.3142 |
| 1.0613 | 6.97 | 28000 | 0.2352 | 0.3055 |
| 1.0498 | 7.47 | 30000 | 0.2249 | 0.2910 |
| 1.021 | 7.96 | 32000 | 0.2118 | 0.2752 |
| 1.0002 | 8.46 | 34000 | 0.2046 | 0.2662 |
| 0.9762 | 8.96 | 36000 | 0.1969 | 0.2530 |
| 0.9568 | 9.46 | 38000 | 0.1917 | 0.2449 |
| 0.953 | 9.96 | 40000 | 0.1893 | 0.2425 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
arampacha/wav2vec2-xls-r-1b-uk-cv
|
arampacha
| 2022-03-23T18:30:15Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"uk",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- uk
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-xls-r-1b-hy-cv
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: mozilla-foundation/common_voice_8_0
name: Common Voice uk
args: uk
metrics:
- type: wer
value: 12.246920571994902
name: WER LM
- type: cer
value: 2.513653497966816
name: CER LM
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: uk
metrics:
- name: Test WER
type: wer
value: 46.56
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: uk
metrics:
- name: Test WER
type: wer
value: 35.98
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-1b-uk-cv
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - UK dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1747
- Wer: 0.2107
- Cer: 0.0408
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 8000
- mixed_precision_training: Native AMP
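With `warmup_ratio: 0.1` over 8000 training steps, the first 800 steps warm up linearly and the remainder follow a cosine decay. A sketch of the shape; the actual scheduler is `transformers.get_cosine_schedule_with_warmup`:

```python
import math

def cosine_lr(step, base_lr=8e-5, total_steps=8000, warmup_ratio=0.1):
    """Linear warmup for the first warmup_ratio of steps, then cosine decay."""
    warmup_steps = int(total_steps * warmup_ratio)  # 800 steps here
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

assert cosine_lr(800) == 8e-5            # peak at the end of warmup
assert abs(cosine_lr(8000)) < 1e-12      # decayed to (nearly) zero at the last step
```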
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 1.3719 | 4.35 | 500 | 0.3389 | 0.4236 | 0.0833 |
| 1.1361 | 8.7 | 1000 | 0.2309 | 0.3162 | 0.0630 |
| 1.0517 | 13.04 | 1500 | 0.2166 | 0.3056 | 0.0597 |
| 1.0118 | 17.39 | 2000 | 0.2141 | 0.2784 | 0.0557 |
| 0.9922 | 21.74 | 2500 | 0.2231 | 0.2941 | 0.0594 |
| 0.9929 | 26.09 | 3000 | 0.2171 | 0.2892 | 0.0587 |
| 0.9485 | 30.43 | 3500 | 0.2236 | 0.2956 | 0.0599 |
| 0.9573 | 34.78 | 4000 | 0.2314 | 0.3043 | 0.0616 |
| 0.9195 | 39.13 | 4500 | 0.2169 | 0.2812 | 0.0580 |
| 0.8915 | 43.48 | 5000 | 0.2109 | 0.2780 | 0.0560 |
| 0.8449 | 47.83 | 5500 | 0.2050 | 0.2534 | 0.0514 |
| 0.8028 | 52.17 | 6000 | 0.2032 | 0.2456 | 0.0492 |
| 0.7881 | 56.52 | 6500 | 0.1890 | 0.2380 | 0.0469 |
| 0.7423 | 60.87 | 7000 | 0.1816 | 0.2245 | 0.0442 |
| 0.7248 | 65.22 | 7500 | 0.1789 | 0.2165 | 0.0422 |
| 0.6993 | 69.57 | 8000 | 0.1747 | 0.2107 | 0.0408 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
DrishtiSharma/wav2vec2-large-xls-r-300m-bg-d2
|
DrishtiSharma
| 2022-03-23T18:30:10Z | 9 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"bg",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- bg
license: apache-2.0
tags:
- automatic-speech-recognition
- bg
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xls-r-300m-bg-d2
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: bg
metrics:
- name: Test WER
type: wer
value: 0.28775471338792613
- name: Test CER
type: cer
value: 0.06861971204625049
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: bg
metrics:
- name: Test WER
type: wer
value: 0.49783147459727384
- name: Test CER
type: cer
value: 0.1591062599627158
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: bg
metrics:
- name: Test WER
type: wer
value: 51.25
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-bg-d2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - BG dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3421
- Wer: 0.2860
### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-bg-d2 --dataset mozilla-foundation/common_voice_8_0 --config bg --split test --log_outputs
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-bg-d2 --dataset speech-recognition-community-v2/dev_data --config bg --split validation --chunk_length_s 10 --stride_length_s 1
```
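The dev-data command passes `--chunk_length_s 10 --stride_length_s 1` because those recordings are long: the ASR pipeline splits the audio into overlapping windows and stitches the transcripts back together. A simplified sketch of where the window boundaries fall (the real stitching, done inside `transformers`, also trims the strided edges and handles the tail window more carefully):

```python
def chunk_bounds(duration_s, chunk_length_s=10.0, stride_length_s=1.0):
    """Start/end times of overlapping windows; neighbours share stride_length_s on each side."""
    step = chunk_length_s - 2 * stride_length_s  # 8 s of new audio per window
    bounds, start = [], 0.0
    while start < duration_s:
        bounds.append((start, min(start + chunk_length_s, duration_s)))
        start += step
    return bounds

# A 25 s recording becomes four overlapping windows.
assert chunk_bounds(25.0) == [(0.0, 10.0), (8.0, 18.0), (16.0, 25.0), (24.0, 25.0)]
```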
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00025
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 700
- num_epochs: 35
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.8791 | 1.74 | 200 | 3.1902 | 1.0 |
| 3.0441 | 3.48 | 400 | 2.8098 | 0.9864 |
| 1.1499 | 5.22 | 600 | 0.4668 | 0.5014 |
| 0.4968 | 6.96 | 800 | 0.4162 | 0.4472 |
| 0.3553 | 8.7 | 1000 | 0.3580 | 0.3777 |
| 0.3027 | 10.43 | 1200 | 0.3422 | 0.3506 |
| 0.2562 | 12.17 | 1400 | 0.3556 | 0.3639 |
| 0.2272 | 13.91 | 1600 | 0.3621 | 0.3583 |
| 0.2125 | 15.65 | 1800 | 0.3436 | 0.3358 |
| 0.1904 | 17.39 | 2000 | 0.3650 | 0.3545 |
| 0.1695 | 19.13 | 2200 | 0.3366 | 0.3241 |
| 0.1532 | 20.87 | 2400 | 0.3550 | 0.3311 |
| 0.1453 | 22.61 | 2600 | 0.3582 | 0.3131 |
| 0.1359 | 24.35 | 2800 | 0.3524 | 0.3084 |
| 0.1233 | 26.09 | 3000 | 0.3503 | 0.2973 |
| 0.1114 | 27.83 | 3200 | 0.3434 | 0.2946 |
| 0.1051 | 29.57 | 3400 | 0.3474 | 0.2956 |
| 0.0965 | 31.3 | 3600 | 0.3426 | 0.2907 |
| 0.0923 | 33.04 | 3800 | 0.3478 | 0.2894 |
| 0.0894 | 34.78 | 4000 | 0.3421 | 0.2860 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
jsnfly/wav2vec2-large-xlsr-53-german-gpt2
|
jsnfly
| 2022-03-23T18:29:57Z | 21 | 2 |
transformers
|
[
"transformers",
"pytorch",
"speech-encoder-decoder",
"automatic-speech-recognition",
"de",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- de
license: apache-2.0
tags:
- automatic-speech-recognition
- de
- hf-asr-leaderboard
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: Wav2Vec2-Large-XLSR-53-German-GPT2
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: de
metrics:
- name: Test WER
type: wer
value: 10.02
- name: Test CER
type: cer
value: 4.7
---
# Wav2Vec2-Large-XLSR-53-German-GPT2
This is an encoder-decoder model for automatic speech recognition trained on the
MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - DE dataset. The encoder was initialized from
[jonatasgrosman/wav2vec2-large-xlsr-53-german](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-german) and
the decoder from [dbmdz/german-gpt2](https://huggingface.co/dbmdz/german-gpt2).
It was trained using a two-step process:
* fine-tuning only the cross-attention weights and the decoder, using pre-computed outputs of the Wav2Vec2 model
  * relatively fast training
  * also works on a small GPU (e.g. 8 GB)
  * but may take a lot of disk space
  * should already yield decent results
* fine-tuning the model end-to-end
  * much slower
  * needs a bigger GPU
There is also one trick that seemed to improve performance significantly: adding position embeddings to the
encoder outputs, initialized with the pre-trained position embeddings of the GPT-2 model (see `eval.py`).
The training notebooks are still early drafts. Results can probably also be improved considerably by using, for
example, a learning-rate schedule.
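The position-embedding trick amounts to the following; the numbers are toy values and the variable names are illustrative, see `eval.py` in the repo for the actual implementation.

```python
# Toy dimensions; real hidden sizes are far larger (e.g. 1024 for the encoder).
seq_len = 3
encoder_out = [[1, 2], [3, 4], [5, 6]]                    # stand-in Wav2Vec2 encoder outputs
gpt2_wpe = [[10, 20], [30, 40], [50, 60], [70, 80]]       # stand-in pretrained GPT-2 position table

# The trick: add the first seq_len pretrained position embeddings to the
# encoder outputs before the decoder's cross-attention sees them.
enriched = [[e + p for e, p in zip(row, pos)]
            for row, pos in zip(encoder_out, gpt2_wpe[:seq_len])]

assert enriched == [[11, 22], [33, 44], [55, 66]]
```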
|
RuudVelo/wav2vec2-large-xls-r-300m-nl
|
RuudVelo
| 2022-03-23T18:29:49Z | 13 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"nl",
"robust-speech-event",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- nl
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
- hf-asr-leaderboard
- model_for_talk
- nl
- robust-speech-event
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-nl
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice
type: common_voice
args: nl
metrics:
- name: Test WER
type: wer
value: 17.17
- name: Test CER
type: cer
value: 5.13
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: nl
metrics:
- name: Test WER
type: wer
value: 35.76
- name: Test CER
type: cer
value: 13.99
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: nl
metrics:
- name: Test WER
type: wer
value: 37.19
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-nl
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the test set:
- Loss: 0.3923
- Wer: 0.1748
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.5787 | 0.89 | 400 | 0.6354 | 0.5643 |
| 0.3036 | 1.78 | 800 | 0.3690 | 0.3552 |
| 0.188 | 2.67 | 1200 | 0.3239 | 0.2958 |
| 0.1434 | 3.56 | 1600 | 0.3093 | 0.2515 |
| 0.1245 | 4.44 | 2000 | 0.3024 | 0.2433 |
| 0.1095 | 5.33 | 2400 | 0.3249 | 0.2643 |
| 0.0979 | 6.22 | 2800 | 0.3191 | 0.2281 |
| 0.0915 | 7.11 | 3200 | 0.3152 | 0.2216 |
| 0.0829 | 8.0 | 3600 | 0.3419 | 0.2218 |
| 0.0777 | 8.89 | 4000 | 0.3432 | 0.2132 |
| 0.073 | 9.78 | 4400 | 0.3223 | 0.2131 |
| 0.0688 | 10.67 | 4800 | 0.3094 | 0.2152 |
| 0.0647 | 11.56 | 5200 | 0.3411 | 0.2152 |
| 0.0639 | 12.44 | 5600 | 0.3762 | 0.2135 |
| 0.0599 | 13.33 | 6000 | 0.3790 | 0.2137 |
| 0.0572 | 14.22 | 6400 | 0.3693 | 0.2118 |
| 0.0563 | 15.11 | 6800 | 0.3495 | 0.2139 |
| 0.0521 | 16.0 | 7200 | 0.3800 | 0.2023 |
| 0.0508 | 16.89 | 7600 | 0.3678 | 0.2033 |
| 0.0513 | 17.78 | 8000 | 0.3845 | 0.1987 |
| 0.0476 | 18.67 | 8400 | 0.3511 | 0.2037 |
| 0.045 | 19.56 | 8800 | 0.3794 | 0.1994 |
| 0.044 | 20.44 | 9200 | 0.3525 | 0.2050 |
| 0.043 | 21.33 | 9600 | 0.4082 | 0.2007 |
| 0.0409 | 22.22 | 10000 | 0.3866 | 0.2004 |
| 0.0393 | 23.11 | 10400 | 0.3899 | 0.2008 |
| 0.0382 | 24.0 | 10800 | 0.3626 | 0.1951 |
| 0.039 | 24.89 | 11200 | 0.3936 | 0.1953 |
| 0.0361 | 25.78 | 11600 | 0.4262 | 0.1928 |
| 0.0362 | 26.67 | 12000 | 0.3796 | 0.1934 |
| 0.033 | 27.56 | 12400 | 0.3616 | 0.1934 |
| 0.0321 | 28.44 | 12800 | 0.3742 | 0.1933 |
| 0.0325 | 29.33 | 13200 | 0.3582 | 0.1869 |
| 0.0309 | 30.22 | 13600 | 0.3717 | 0.1874 |
| 0.029 | 31.11 | 14000 | 0.3814 | 0.1894 |
| 0.0296 | 32.0 | 14400 | 0.3698 | 0.1877 |
| 0.0281 | 32.89 | 14800 | 0.3976 | 0.1899 |
| 0.0275 | 33.78 | 15200 | 0.3854 | 0.1858 |
| 0.0264 | 34.67 | 15600 | 0.4021 | 0.1889 |
| 0.0261 | 35.56 | 16000 | 0.3850 | 0.1830 |
| 0.0242 | 36.44 | 16400 | 0.4091 | 0.1878 |
| 0.0245 | 37.33 | 16800 | 0.4012 | 0.1846 |
| 0.0243 | 38.22 | 17200 | 0.3996 | 0.1833 |
| 0.0223 | 39.11 | 17600 | 0.3962 | 0.1815 |
| 0.0223 | 40.0 | 18000 | 0.3898 | 0.1832 |
| 0.0219 | 40.89 | 18400 | 0.4019 | 0.1822 |
| 0.0211 | 41.78 | 18800 | 0.4035 | 0.1809 |
| 0.021 | 42.67 | 19200 | 0.3915 | 0.1826 |
| 0.0208 | 43.56 | 19600 | 0.3934 | 0.1784 |
| 0.0188 | 44.44 | 20000 | 0.3912 | 0.1787 |
| 0.0195 | 45.33 | 20400 | 0.3989 | 0.1766 |
| 0.0186 | 46.22 | 20800 | 0.3887 | 0.1773 |
| 0.0188 | 47.11 | 21200 | 0.3982 | 0.1758 |
| 0.0175 | 48.0 | 21600 | 0.3933 | 0.1755 |
| 0.0172 | 48.89 | 22000 | 0.3921 | 0.1749 |
| 0.0187 | 49.78 | 22400 | 0.3923 | 0.1748 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
ubamba98/wav2vec2-xls-r-1b-ro
|
ubamba98
| 2022-03-23T18:29:42Z | 20 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"ro",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- ro
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: wav2vec2-xls-r-1b-ro
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7.0
type: mozilla-foundation/common_voice_7_0
args: ro
metrics:
- name: Test WER
type: wer
value: 99.99
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: ro
metrics:
- name: Test WER
type: wer
value: 99.98
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: ro
metrics:
- name: Test WER
type: wer
value: 99.99
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-1b-ro
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - RO dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1113
- Wer: 0.4770
- Cer: 0.0306
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 0.7844 | 1.67 | 1500 | 0.3412 | 0.8600 | 0.0940 |
| 0.7272 | 3.34 | 3000 | 0.1926 | 0.6409 | 0.0527 |
| 0.6924 | 5.02 | 4500 | 0.1413 | 0.5722 | 0.0401 |
| 0.6327 | 6.69 | 6000 | 0.1252 | 0.5366 | 0.0371 |
| 0.6363 | 8.36 | 7500 | 0.1235 | 0.5741 | 0.0389 |
| 0.6238 | 10.03 | 9000 | 0.1180 | 0.5542 | 0.0362 |
| 0.6018 | 11.71 | 10500 | 0.1192 | 0.5694 | 0.0369 |
| 0.583 | 13.38 | 12000 | 0.1216 | 0.5772 | 0.0385 |
| 0.5643 | 15.05 | 13500 | 0.1195 | 0.5419 | 0.0371 |
| 0.5399 | 16.72 | 15000 | 0.1240 | 0.5224 | 0.0370 |
| 0.5529 | 18.39 | 16500 | 0.1174 | 0.5555 | 0.0367 |
| 0.5246 | 20.07 | 18000 | 0.1097 | 0.5047 | 0.0339 |
| 0.4936 | 21.74 | 19500 | 0.1225 | 0.5189 | 0.0382 |
| 0.4629 | 23.41 | 21000 | 0.1142 | 0.5047 | 0.0344 |
| 0.4463 | 25.08 | 22500 | 0.1168 | 0.4887 | 0.0339 |
| 0.4671 | 26.76 | 24000 | 0.1119 | 0.5073 | 0.0338 |
| 0.4359 | 28.43 | 25500 | 0.1206 | 0.5479 | 0.0363 |
| 0.4225 | 30.1 | 27000 | 0.1122 | 0.5170 | 0.0345 |
| 0.4038 | 31.77 | 28500 | 0.1159 | 0.5032 | 0.0343 |
| 0.4271 | 33.44 | 30000 | 0.1116 | 0.5126 | 0.0339 |
| 0.3867 | 35.12 | 31500 | 0.1101 | 0.4937 | 0.0327 |
| 0.3674 | 36.79 | 33000 | 0.1142 | 0.4940 | 0.0330 |
| 0.3607 | 38.46 | 34500 | 0.1106 | 0.5145 | 0.0327 |
| 0.3651 | 40.13 | 36000 | 0.1172 | 0.4921 | 0.0317 |
| 0.3268 | 41.81 | 37500 | 0.1093 | 0.4830 | 0.0310 |
| 0.3345 | 43.48 | 39000 | 0.1131 | 0.4760 | 0.0314 |
| 0.3236 | 45.15 | 40500 | 0.1132 | 0.4864 | 0.0317 |
| 0.312 | 46.82 | 42000 | 0.1124 | 0.4861 | 0.0315 |
| 0.3106 | 48.49 | 43500 | 0.1116 | 0.4745 | 0.0306 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
cahya/xls-r-ab-test
|
cahya
| 2022-03-23T18:29:37Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"ab",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_7_0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- ab
tags:
- ab
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-ab-test
This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 135.4675
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 100
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.10.3
|
shivam/xls-r-300m-marathi
|
shivam
| 2022-03-23T18:29:32Z | 18 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"mr",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- mr
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- mr
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: ''
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Common Voice Corpus 8.0
type: mozilla-foundation/common_voice_8_0
args: mr
metrics:
- name: Test WER
type: wer
value: 38.27
- name: Test CER
type: cer
value: 8.91
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-300m-marathi
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MR dataset.
It achieves the following results on the mozilla-foundation/common_voice_8_0 mr test set:
- Without LM
+ WER: 48.53
+ CER: 10.63
- With LM
+ WER: 38.27
+ CER: 8.91
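The gap between the two rows comes from the decoding step: without an LM, the CTC output is decoded greedily by collapsing repeated tokens and dropping blanks, while the LM variant rescores hypotheses with beam search. A minimal greedy collapse (token ids and the blank id are illustrative):

```python
def ctc_greedy_collapse(token_ids, blank_id=0):
    """Collapse repeated tokens, then remove blanks (standard greedy CTC decoding)."""
    out, prev = [], None
    for t in token_ids:
        if t != prev and t != blank_id:
            out.append(t)
        prev = t
    return out

# "h h e <blank> l l <blank> l o" decodes to "h e l l o" (ids are illustrative)
assert ctc_greedy_collapse([8, 8, 5, 0, 12, 12, 0, 12, 15]) == [8, 5, 12, 12, 15]
```

The blank token between the two runs of 12 is what lets CTC emit a doubled letter.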
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 400.0
- mixed_precision_training: Native AMP
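On a single device, the total train batch size of 32 reported above follows directly from the per-device batch size and the gradient-accumulation steps; a quick sketch (variable names are illustrative, not Trainer internals):

```python
# Effective batch size under gradient accumulation (single device assumed):
# each optimizer step sees train_batch_size * gradient_accumulation_steps samples.
train_batch_size = 8
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # -> 32
```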
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 4.2706 | 22.73 | 500 | 4.0174 | 1.0 |
| 3.2492 | 45.45 | 1000 | 3.2309 | 0.9908 |
| 1.9709 | 68.18 | 1500 | 1.0651 | 0.8440 |
| 1.4088 | 90.91 | 2000 | 0.5765 | 0.6550 |
| 1.1326 | 113.64 | 2500 | 0.4842 | 0.5760 |
| 0.9709 | 136.36 | 3000 | 0.4785 | 0.6013 |
| 0.8433 | 159.09 | 3500 | 0.5048 | 0.5419 |
| 0.7404 | 181.82 | 4000 | 0.5052 | 0.5339 |
| 0.6589 | 204.55 | 4500 | 0.5237 | 0.5897 |
| 0.5831 | 227.27 | 5000 | 0.5166 | 0.5447 |
| 0.5375 | 250.0 | 5500 | 0.5292 | 0.5487 |
| 0.4784 | 272.73 | 6000 | 0.5480 | 0.5596 |
| 0.4421 | 295.45 | 6500 | 0.5682 | 0.5467 |
| 0.4047 | 318.18 | 7000 | 0.5681 | 0.5447 |
| 0.3779 | 340.91 | 7500 | 0.5783 | 0.5347 |
| 0.3525 | 363.64 | 8000 | 0.5856 | 0.5367 |
| 0.3393 | 386.36 | 8500 | 0.5960 | 0.5359 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu113
- Datasets 1.18.1.dev0
- Tokenizers 0.11.0
|
anuragshas/wav2vec2-xls-r-300m-sl-cv8-with-lm
|
anuragshas
| 2022-03-23T18:29:27Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"sl",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- sl
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: XLS-R-300M - Slovenian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: sl
metrics:
- name: Test WER
type: wer
value: 12.736
- name: Test CER
type: cer
value: 3.605
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: sl
metrics:
- name: Test WER
type: wer
value: 45.587
- name: Test CER
type: cer
value: 20.886
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: sl
metrics:
- name: Test WER
type: wer
value: 45.42
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLS-R-300M - Slovenian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SL dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2578
- Wer: 0.2273
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 60.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.1829 | 4.88 | 400 | 3.1228 | 1.0 |
| 2.8675 | 9.76 | 800 | 2.8616 | 0.9993 |
| 1.583 | 14.63 | 1200 | 0.6392 | 0.6239 |
| 1.1959 | 19.51 | 1600 | 0.3602 | 0.3651 |
| 1.0276 | 24.39 | 2000 | 0.3021 | 0.2981 |
| 0.9671 | 29.27 | 2400 | 0.2872 | 0.2739 |
| 0.873 | 34.15 | 2800 | 0.2593 | 0.2459 |
| 0.8513 | 39.02 | 3200 | 0.2617 | 0.2473 |
| 0.8132 | 43.9 | 3600 | 0.2548 | 0.2426 |
| 0.7935 | 48.78 | 4000 | 0.2637 | 0.2353 |
| 0.7565 | 53.66 | 4400 | 0.2629 | 0.2322 |
| 0.7359 | 58.54 | 4800 | 0.2579 | 0.2253 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id anuragshas/wav2vec2-xls-r-300m-sl-cv8-with-lm --dataset mozilla-foundation/common_voice_8_0 --config sl --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id anuragshas/wav2vec2-xls-r-300m-sl-cv8-with-lm --dataset speech-recognition-community-v2/dev_data --config sl --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "anuragshas/wav2vec2-xls-r-300m-sl-cv8-with-lm"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "sl", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
# => "zmago je divje od letel s helikopterjem visoko vzrak"
```
### Eval results on Common Voice 8 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 19.938 | 12.736 |
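For reference, the language model brings a relative WER reduction of roughly a third; a quick back-of-the-envelope check on the numbers above:

```python
# Relative WER reduction from adding the LM on the Common Voice 8 "test" split.
wer_no_lm = 19.938
wer_with_lm = 12.736
relative_reduction = (wer_no_lm - wer_with_lm) / wer_no_lm * 100
print(f"{relative_reduction:.1f}%")  # -> 36.1%
```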
|
anantoj/wav2vec2-xls-r-1b-korean
|
anantoj
| 2022-03-23T18:29:13Z | 37 | 2 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"ko",
"dataset:kresnik/zeroth_korean",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: ko
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- kresnik/zeroth_korean
model-index:
- name: Wav2Vec2 XLS-R 1B Korean
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: ko
metrics:
- name: Test WER
type: wer
value: 82.07
- name: Test CER
type: cer
value: 42.12
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: ko
metrics:
- name: Test WER
type: wer
value: 82.09
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Wav2Vec2 XLS-R 1B Korean
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the KRESNIK/ZEROTH_KOREAN - CLEAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0639
- Wer: 0.0449
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.603 | 0.72 | 500 | 4.6572 | 0.9985 |
| 2.6314 | 1.44 | 1000 | 2.0424 | 0.9256 |
| 2.2708 | 2.16 | 1500 | 0.9889 | 0.6989 |
| 2.1769 | 2.88 | 2000 | 0.8366 | 0.6312 |
| 2.1142 | 3.6 | 2500 | 0.7555 | 0.5998 |
| 2.0084 | 4.32 | 3000 | 0.7144 | 0.6003 |
| 1.9272 | 5.04 | 3500 | 0.6311 | 0.5461 |
| 1.8687 | 5.75 | 4000 | 0.6252 | 0.5430 |
| 1.8186 | 6.47 | 4500 | 0.5491 | 0.4988 |
| 1.7364 | 7.19 | 5000 | 0.5463 | 0.4959 |
| 1.6809 | 7.91 | 5500 | 0.4724 | 0.4484 |
| 1.641 | 8.63 | 6000 | 0.4679 | 0.4461 |
| 1.572 | 9.35 | 6500 | 0.4387 | 0.4236 |
| 1.5256 | 10.07 | 7000 | 0.3970 | 0.4003 |
| 1.5044 | 10.79 | 7500 | 0.3690 | 0.3893 |
| 1.4563 | 11.51 | 8000 | 0.3752 | 0.3875 |
| 1.394 | 12.23 | 8500 | 0.3386 | 0.3567 |
| 1.3641 | 12.95 | 9000 | 0.3290 | 0.3467 |
| 1.2878 | 13.67 | 9500 | 0.2893 | 0.3135 |
| 1.2602 | 14.39 | 10000 | 0.2723 | 0.3029 |
| 1.2302 | 15.11 | 10500 | 0.2603 | 0.2989 |
| 1.1865 | 15.83 | 11000 | 0.2440 | 0.2794 |
| 1.1491 | 16.55 | 11500 | 0.2500 | 0.2788 |
| 1.093 | 17.27 | 12000 | 0.2279 | 0.2629 |
| 1.0367 | 17.98 | 12500 | 0.2076 | 0.2443 |
| 0.9954 | 18.7 | 13000 | 0.1844 | 0.2259 |
| 0.99 | 19.42 | 13500 | 0.1794 | 0.2179 |
| 0.9385 | 20.14 | 14000 | 0.1765 | 0.2122 |
| 0.8952 | 20.86 | 14500 | 0.1706 | 0.1974 |
| 0.8841 | 21.58 | 15000 | 0.1791 | 0.1969 |
| 0.847 | 22.3 | 15500 | 0.1780 | 0.2060 |
| 0.8669 | 23.02 | 16000 | 0.1608 | 0.1862 |
| 0.8066 | 23.74 | 16500 | 0.1447 | 0.1626 |
| 0.7908 | 24.46 | 17000 | 0.1457 | 0.1655 |
| 0.7459 | 25.18 | 17500 | 0.1350 | 0.1445 |
| 0.7218 | 25.9 | 18000 | 0.1276 | 0.1421 |
| 0.703 | 26.62 | 18500 | 0.1177 | 0.1302 |
| 0.685 | 27.34 | 19000 | 0.1147 | 0.1305 |
| 0.6811 | 28.06 | 19500 | 0.1128 | 0.1244 |
| 0.6444 | 28.78 | 20000 | 0.1120 | 0.1213 |
| 0.6323 | 29.5 | 20500 | 0.1137 | 0.1166 |
| 0.5998 | 30.22 | 21000 | 0.1051 | 0.1107 |
| 0.5706 | 30.93 | 21500 | 0.1035 | 0.1037 |
| 0.5555 | 31.65 | 22000 | 0.1031 | 0.0927 |
| 0.5389 | 32.37 | 22500 | 0.0997 | 0.0900 |
| 0.5201 | 33.09 | 23000 | 0.0920 | 0.0912 |
| 0.5146 | 33.81 | 23500 | 0.0929 | 0.0947 |
| 0.515 | 34.53 | 24000 | 0.1000 | 0.0953 |
| 0.4743 | 35.25 | 24500 | 0.0922 | 0.0892 |
| 0.4707 | 35.97 | 25000 | 0.0852 | 0.0808 |
| 0.4456 | 36.69 | 25500 | 0.0855 | 0.0779 |
| 0.443 | 37.41 | 26000 | 0.0843 | 0.0738 |
| 0.4388 | 38.13 | 26500 | 0.0816 | 0.0699 |
| 0.4162 | 38.85 | 27000 | 0.0752 | 0.0645 |
| 0.3979 | 39.57 | 27500 | 0.0761 | 0.0621 |
| 0.3889 | 40.29 | 28000 | 0.0771 | 0.0625 |
| 0.3923 | 41.01 | 28500 | 0.0755 | 0.0598 |
| 0.3693 | 41.73 | 29000 | 0.0730 | 0.0578 |
| 0.3642 | 42.45 | 29500 | 0.0739 | 0.0598 |
| 0.3532 | 43.17 | 30000 | 0.0712 | 0.0553 |
| 0.3513 | 43.88 | 30500 | 0.0762 | 0.0516 |
| 0.3349 | 44.6 | 31000 | 0.0731 | 0.0504 |
| 0.3305 | 45.32 | 31500 | 0.0725 | 0.0507 |
| 0.3285 | 46.04 | 32000 | 0.0709 | 0.0489 |
| 0.3179 | 46.76 | 32500 | 0.0667 | 0.0467 |
| 0.3158 | 47.48 | 33000 | 0.0653 | 0.0494 |
| 0.3033 | 48.2 | 33500 | 0.0638 | 0.0456 |
| 0.3023 | 48.92 | 34000 | 0.0644 | 0.0464 |
| 0.2975 | 49.64 | 34500 | 0.0643 | 0.0455 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3.dev0
- Tokenizers 0.11.0
|
samitizerxu/wav2vec2-xls-r-300m-eo
|
samitizerxu
| 2022-03-23T18:29:06Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"eo",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- eo
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- eo
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-eo
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: eo
metrics:
- name: Test WER
type: wer
value: 34.72
- name: Test CER
type: cer
value: 7.54
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-eo
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - EO dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2584
- Wer: 0.3114
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.1701 | 0.8 | 500 | 2.8105 | 1.0 |
| 1.9143 | 1.6 | 1000 | 0.5977 | 0.7002 |
| 1.1259 | 2.4 | 1500 | 0.5063 | 0.6157 |
| 0.9732 | 3.2 | 2000 | 0.4264 | 0.5673 |
| 0.8983 | 4.0 | 2500 | 0.4249 | 0.4902 |
| 0.8507 | 4.8 | 3000 | 0.3811 | 0.4536 |
| 0.8064 | 5.6 | 3500 | 0.3643 | 0.4467 |
| 0.7866 | 6.4 | 4000 | 0.3600 | 0.4453 |
| 0.7773 | 7.2 | 4500 | 0.3724 | 0.4470 |
| 0.747 | 8.0 | 5000 | 0.3501 | 0.4189 |
| 0.7279 | 8.8 | 5500 | 0.3500 | 0.4261 |
| 0.7153 | 9.6 | 6000 | 0.3328 | 0.3966 |
| 0.7 | 10.4 | 6500 | 0.3314 | 0.3869 |
| 0.6784 | 11.2 | 7000 | 0.3396 | 0.4051 |
| 0.6582 | 12.0 | 7500 | 0.3236 | 0.3899 |
| 0.6478 | 12.8 | 8000 | 0.3263 | 0.3832 |
| 0.6277 | 13.6 | 8500 | 0.3139 | 0.3769 |
| 0.6053 | 14.4 | 9000 | 0.2955 | 0.3536 |
| 0.5777 | 15.2 | 9500 | 0.2793 | 0.3413 |
| 0.5631 | 16.0 | 10000 | 0.2789 | 0.3353 |
| 0.5446 | 16.8 | 10500 | 0.2709 | 0.3264 |
| 0.528 | 17.6 | 11000 | 0.2693 | 0.3234 |
| 0.5169 | 18.4 | 11500 | 0.2656 | 0.3193 |
| 0.5041 | 19.2 | 12000 | 0.2575 | 0.3102 |
| 0.4971 | 20.0 | 12500 | 0.2584 | 0.3114 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test`
```bash
python eval.py --model_id samitizerxu/wav2vec2-xls-r-300m-eo --dataset mozilla-foundation/common_voice_7_0 --config eo --split test
```
|
Harveenchadha/hindi_large_wav2vec2
|
Harveenchadha
| 2022-03-23T18:28:53Z | 44 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"hi",
"model_for_talk",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"dataset:Harveenchadha/indic-voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
language:
- hi
tags:
- automatic-speech-recognition
- hf-asr-leaderboard
- hi
- model_for_talk
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- Harveenchadha/indic-voice
model-index:
- name: Hindi Large
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice
type: common_voice
args: hi
metrics:
- name: Test WER
type: wer
value: 23.08
- name: Test CER
type: cer
value: 8.11
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice-7.0
type: mozilla-foundation/common_voice_7_0
args: hi
metrics:
- name: Test WER
type: wer
value: 23.36
- name: Test CER
type: cer
value: 8.94
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice-8.0
type: mozilla-foundation/common_voice_8_0
args: hi
metrics:
- name: Test WER
type: wer
value: 24.85
- name: Test CER
type: cer
value: 9.99
---
|
mpoyraz/wav2vec2-xls-r-300m-cv7-turkish
|
mpoyraz
| 2022-03-23T18:28:32Z | 567,583 | 10 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"tr",
"dataset:mozilla-foundation/common_voice_7_0",
"license:cc-by-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: cc-by-4.0
language: tr
tags:
- automatic-speech-recognition
- hf-asr-leaderboard
- mozilla-foundation/common_voice_7_0
- robust-speech-event
- tr
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: mpoyraz/wav2vec2-xls-r-300m-cv7-turkish
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: tr
metrics:
- name: Test WER
type: wer
value: 8.62
- name: Test CER
type: cer
value: 2.26
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: tr
metrics:
- name: Test WER
type: wer
value: 30.87
- name: Test CER
type: cer
value: 10.69
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: tr
metrics:
- name: Test WER
type: wer
value: 32.09
---
# wav2vec2-xls-r-300m-cv7-turkish
## Model description
This ASR model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for the Turkish language.
## Training and evaluation data
The following datasets were used for finetuning:
- [Common Voice 7.0 TR](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0): the entire `validated` split, excluding the `test` split, was used for training.
- [MediaSpeech](https://www.openslr.org/108/)
## Training procedure
To support both of the datasets above, custom pre-processing and loading steps were performed using the [wav2vec2-turkish](https://github.com/mpoyraz/wav2vec2-turkish) repo.
### Training hyperparameters
The following hyperparameters were used for fine-tuning:
- learning_rate 2e-4
- num_train_epochs 10
- warmup_steps 500
- freeze_feature_extractor
- mask_time_prob 0.1
- mask_feature_prob 0.05
- feat_proj_dropout 0.05
- attention_dropout 0.05
- final_dropout 0.05
- activation_dropout 0.05
- per_device_train_batch_size 8
- per_device_eval_batch_size 8
- gradient_accumulation_steps 8
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
## Language Model
An n-gram language model was trained on Turkish Wikipedia articles using KenLM; the [ngram-lm-wiki](https://github.com/mpoyraz/ngram-lm-wiki) repo was used to generate the ARPA LM and convert it into binary format.
## Evaluation Commands
Please install the [unicode_tr](https://pypi.org/project/unicode_tr/) package before running evaluation; it is used for Turkish text processing.
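A Turkish-aware casing library is needed because Python's default `str.lower()` mishandles the dotted/dotless *i* distinction. A minimal illustration of the problem (the `turkish_lower` helper is a sketch, not the `unicode_tr` implementation):

```python
def turkish_lower(text: str) -> str:
    # In Turkish, 'I' lower-cases to dotless 'ı' and 'İ' to dotted 'i'.
    # Plain str.lower() maps 'I' -> 'i' and 'İ' -> 'i' + a combining dot,
    # so the two problem letters are handled before lower-casing the rest.
    return text.replace("İ", "i").replace("I", "ı").lower()

print(turkish_lower("ISPARTA İstanbul"))  # -> ısparta istanbul
```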
1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test`
```bash
python eval.py --model_id mpoyraz/wav2vec2-xls-r-300m-cv7-turkish --dataset mozilla-foundation/common_voice_7_0 --config tr --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id mpoyraz/wav2vec2-xls-r-300m-cv7-turkish --dataset speech-recognition-community-v2/dev_data --config tr --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
## Evaluation results:
| Dataset | WER | CER |
|---|---|---|
|Common Voice 7 TR test split| 8.62 | 2.26 |
|Speech Recognition Community dev data| 30.87 | 10.69 |
|
manifoldix/xlsr-fa-lm
|
manifoldix
| 2022-03-23T18:28:30Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"robust-speech-event",
"fa",
"dataset:common_voice",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: fa
datasets:
- common_voice
tags:
- hf-asr-leaderboard
- robust-speech-event
widget:
- example_title: Common Voice sample 2978
src: https://huggingface.co/manifoldix/xlsr-fa-lm/resolve/main/sample2978.flac
- example_title: Common Voice sample 5168
src: https://huggingface.co/manifoldix/xlsr-fa-lm/resolve/main/sample5168.flac
model-index:
- name: XLS-R-300m Wav2Vec2 Persian
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice fa
type: common_voice
args: fa
metrics:
- name: Test WER without LM
type: wer
value: 26%
- name: Test WER with LM
type: wer
value: 23%
---
## XLSR-300m Persian
Fine-tuned on Common Voice FA.
|
infinitejoy/wav2vec2-large-xls-r-300m-arabic
|
infinitejoy
| 2022-03-23T18:28:27Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ar",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- ar
license: apache-2.0
tags:
- ar
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- model_for_talk
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Arabic
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: ar
metrics:
- name: Test WER
type: wer
value: NA
- name: Test CER
type: cer
value: NA
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: ar
metrics:
- name: Test WER
type: wer
value: NA
- name: Test CER
type: cer
value: NA
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLS-R-300M - Arabic
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AR dataset.
It achieves the following results on the evaluation set:
- Loss: NA
- Wer: NA
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test`
```bash
python eval.py \
--model_id infinitejoy/wav2vec2-large-xls-r-300m-arabic \
--dataset mozilla-foundation/common_voice_7_0 --config ar --split test --log_outputs
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py \
--model_id infinitejoy/wav2vec2-large-xls-r-300m-arabic --dataset speech-recognition-community-v2/dev_data \
--config ar --split validation --chunk_length_s 10 --stride_length_s 1
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "infinitejoy/wav2vec2-large-xls-r-300m-arabic"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_7_0", "ar", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
```
### Eval results on Common Voice 7 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| NA | NA |
|
edugp/wav2vec2-xls-r-300m-36-tokens-with-lm-es
|
edugp
| 2022-03-23T18:28:19Z | 17 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
language:
- es
tags:
- es
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-36-tokens-with-lm-es
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice es
type: common_voice
args: es
metrics:
- name: Test WER
type: wer
value: 0.08677014042867702
- name: Test CER
type: cer
value: 0.02810974186831335
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: es
metrics:
- name: Test WER
type: wer
value: 31.68
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: es
metrics:
- name: Test WER
type: wer
value: 34.45
---
# Wav2Vec2-xls-r-300m-36-tokens-with-lm-es
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Wer: 0.0868
- Cer: 0.0281
This model consists of a Wav2Vec2 model with an additional KenLM 5-gram language model for CTC decoding.
The model was trained after removing all characters other than the lower-case unaccented letters `a-z`, the Spanish accented vowels `á`, `é`, `í`, `ó`, `ú`, and the vowel with diaeresis `ü`.
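This normalization can be sketched as follows; a minimal illustration under the assumption that the kept alphabet is exactly the characters listed above plus a space separator (the `normalize` helper is hypothetical, not the actual training code):

```python
import re

# Assumed alphabet: lower-case a-z, the five accented vowels, ü, and space.
ALLOWED = set("abcdefghijklmnopqrstuvwxyzáéíóúü ")

def normalize(text: str) -> str:
    # Lower-case first, then drop every character outside the alphabet
    # and collapse any resulting runs of spaces.
    text = "".join(ch for ch in text.lower() if ch in ALLOWED)
    return re.sub(r" +", " ", text).strip()

print(normalize("¿Qué tal, Málaga?"))  # -> qué tal málaga
```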
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:------:|:---------------:|:------:|
| 3.6512 | 0.07 | 400 | 0.5734 | 0.4325 |
| 0.4404 | 0.14 | 800 | 0.3329 | 0.3021 |
| 0.3465 | 0.22 | 1200 | 0.3067 | 0.2871 |
| 0.3214 | 0.29 | 1600 | 0.2808 | 0.2694 |
| 0.319 | 0.36 | 2000 | 0.2755 | 0.2677 |
| 0.3015 | 0.43 | 2400 | 0.2667 | 0.2437 |
| 0.3102 | 0.51 | 2800 | 0.2679 | 0.2475 |
| 0.2955 | 0.58 | 3200 | 0.2591 | 0.2421 |
| 0.292 | 0.65 | 3600 | 0.2547 | 0.2404 |
| 0.2961 | 0.72 | 4000 | 0.2824 | 0.2716 |
| 0.2906 | 0.8 | 4400 | 0.2531 | 0.2321 |
| 0.2886 | 0.87 | 4800 | 0.2668 | 0.2573 |
| 0.2934 | 0.94 | 5200 | 0.2608 | 0.2454 |
| 0.2844 | 1.01 | 5600 | 0.2414 | 0.2233 |
| 0.2649 | 1.09 | 6000 | 0.2412 | 0.2198 |
| 0.2587 | 1.16 | 6400 | 0.2432 | 0.2211 |
| 0.2631 | 1.23 | 6800 | 0.2414 | 0.2225 |
| 0.2584 | 1.3 | 7200 | 0.2489 | 0.2290 |
| 0.2588 | 1.37 | 7600 | 0.2341 | 0.2156 |
| 0.2581 | 1.45 | 8000 | 0.2323 | 0.2155 |
| 0.2603 | 1.52 | 8400 | 0.2423 | 0.2231 |
| 0.2527 | 1.59 | 8800 | 0.2381 | 0.2192 |
| 0.2588 | 1.66 | 9200 | 0.2323 | 0.2176 |
| 0.2543 | 1.74 | 9600 | 0.2391 | 0.2151 |
| 0.2528 | 1.81 | 10000 | 0.2295 | 0.2091 |
| 0.2535 | 1.88 | 10400 | 0.2317 | 0.2099 |
| 0.2501 | 1.95 | 10800 | 0.2225 | 0.2105 |
| 0.2441 | 2.03 | 11200 | 0.2356 | 0.2180 |
| 0.2275 | 2.1 | 11600 | 0.2341 | 0.2115 |
| 0.2281 | 2.17 | 12000 | 0.2269 | 0.2117 |
| 0.227 | 2.24 | 12400 | 0.2367 | 0.2125 |
| 0.2471 | 2.32 | 12800 | 0.2307 | 0.2090 |
| 0.229 | 2.39 | 13200 | 0.2231 | 0.2005 |
| 0.2325 | 2.46 | 13600 | 0.2243 | 0.2100 |
| 0.2314 | 2.53 | 14000 | 0.2252 | 0.2098 |
| 0.2309 | 2.6 | 14400 | 0.2269 | 0.2089 |
| 0.2267 | 2.68 | 14800 | 0.2155 | 0.1976 |
| 0.225 | 2.75 | 15200 | 0.2263 | 0.2067 |
| 0.2309 | 2.82 | 15600 | 0.2196 | 0.2041 |
| 0.225 | 2.89 | 16000 | 0.2212 | 0.2052 |
| 0.228 | 2.97 | 16400 | 0.2192 | 0.2028 |
| 0.2136 | 3.04 | 16800 | 0.2169 | 0.2042 |
| 0.2038 | 3.11 | 17200 | 0.2173 | 0.1998 |
| 0.2035 | 3.18 | 17600 | 0.2185 | 0.2002 |
| 0.207 | 3.26 | 18000 | 0.2358 | 0.2120 |
| 0.2102 | 3.33 | 18400 | 0.2213 | 0.2019 |
| 0.211 | 3.4 | 18800 | 0.2176 | 0.1980 |
| 0.2099 | 3.47 | 19200 | 0.2186 | 0.1960 |
| 0.2093 | 3.55 | 19600 | 0.2208 | 0.2016 |
| 0.2046 | 3.62 | 20000 | 0.2138 | 0.1960 |
| 0.2095 | 3.69 | 20400 | 0.2222 | 0.2023 |
| 0.2106 | 3.76 | 20800 | 0.2159 | 0.1964 |
| 0.2066 | 3.83 | 21200 | 0.2083 | 0.1931 |
| 0.2119 | 3.91 | 21600 | 0.2130 | 0.1957 |
| 0.2167 | 3.98 | 22000 | 0.2210 | 0.1987 |
| 0.1973 | 4.05 | 22400 | 0.2112 | 0.1930 |
| 0.1917 | 4.12 | 22800 | 0.2107 | 0.1891 |
| 0.1903 | 4.2 | 23200 | 0.2132 | 0.1911 |
| 0.1903 | 4.27 | 23600 | 0.2077 | 0.1883 |
| 0.1914 | 4.34 | 24000 | 0.2054 | 0.1901 |
| 0.1943 | 4.41 | 24400 | 0.2059 | 0.1885 |
| 0.1943 | 4.49 | 24800 | 0.2095 | 0.1899 |
| 0.1936 | 4.56 | 25200 | 0.2078 | 0.1879 |
| 0.1963 | 4.63 | 25600 | 0.2018 | 0.1884 |
| 0.1934 | 4.7 | 26000 | 0.2034 | 0.1872 |
| 0.2011 | 4.78 | 26400 | 0.2051 | 0.1896 |
| 0.1901 | 4.85 | 26800 | 0.2059 | 0.1858 |
| 0.1934 | 4.92 | 27200 | 0.2028 | 0.1832 |
| 0.191 | 4.99 | 27600 | 0.2046 | 0.1870 |
| 0.1775 | 5.07 | 28000 | 0.2081 | 0.1891 |
| 0.175 | 5.14 | 28400 | 0.2084 | 0.1904 |
| 0.19 | 5.21 | 28800 | 0.2086 | 0.1920 |
| 0.1798 | 5.28 | 29200 | 0.2079 | 0.1935 |
| 0.1765 | 5.35 | 29600 | 0.2145 | 0.1930 |
| 0.181 | 5.43 | 30000 | 0.2062 | 0.1918 |
| 0.1808 | 5.5 | 30400 | 0.2083 | 0.1875 |
| 0.1769 | 5.57 | 30800 | 0.2117 | 0.1895 |
| 0.1788 | 5.64 | 31200 | 0.2055 | 0.1857 |
| 0.181 | 5.72 | 31600 | 0.2057 | 0.1870 |
| 0.1781 | 5.79 | 32000 | 0.2053 | 0.1872 |
| 0.1852 | 5.86 | 32400 | 0.2077 | 0.1904 |
| 0.1832 | 5.93 | 32800 | 0.1979 | 0.1821 |
| 0.1758 | 6.01 | 33200 | 0.1957 | 0.1754 |
| 0.1611 | 6.08 | 33600 | 0.2028 | 0.1773 |
| 0.1606 | 6.15 | 34000 | 0.2018 | 0.1780 |
| 0.1702 | 6.22 | 34400 | 0.1977 | 0.1759 |
| 0.1649 | 6.3 | 34800 | 0.2073 | 0.1845 |
| 0.1641 | 6.37 | 35200 | 0.1947 | 0.1774 |
| 0.1703 | 6.44 | 35600 | 0.2009 | 0.1811 |
| 0.1716 | 6.51 | 36000 | 0.2091 | 0.1817 |
| 0.1732 | 6.58 | 36400 | 0.1942 | 0.1743 |
| 0.1642 | 6.66 | 36800 | 0.1930 | 0.1749 |
| 0.1685 | 6.73 | 37200 | 0.1962 | 0.1716 |
| 0.1647 | 6.8 | 37600 | 0.1977 | 0.1822 |
| 0.1647 | 6.87 | 38000 | 0.1917 | 0.1748 |
| 0.1667 | 6.95 | 38400 | 0.1948 | 0.1774 |
| 0.1647 | 7.02 | 38800 | 0.2018 | 0.1783 |
| 0.15 | 7.09 | 39200 | 0.2010 | 0.1796 |
| 0.1663 | 7.16 | 39600 | 0.1969 | 0.1731 |
| 0.1536 | 7.24 | 40000 | 0.1935 | 0.1726 |
| 0.1544 | 7.31 | 40400 | 0.2030 | 0.1799 |
| 0.1536 | 7.38 | 40800 | 0.1973 | 0.1772 |
| 0.1559 | 7.45 | 41200 | 0.1973 | 0.1763 |
| 0.1547 | 7.53 | 41600 | 0.2052 | 0.1782 |
| 0.1584 | 7.6 | 42000 | 0.1965 | 0.1737 |
| 0.1542 | 7.67 | 42400 | 0.1878 | 0.1725 |
| 0.1525 | 7.74 | 42800 | 0.1946 | 0.1750 |
| 0.1547 | 7.81 | 43200 | 0.1934 | 0.1691 |
| 0.1534 | 7.89 | 43600 | 0.1919 | 0.1711 |
| 0.1574 | 7.96 | 44000 | 0.1935 | 0.1745 |
| 0.1471 | 8.03 | 44400 | 0.1915 | 0.1689 |
| 0.1433 | 8.1 | 44800 | 0.1956 | 0.1719 |
| 0.1433 | 8.18 | 45200 | 0.1980 | 0.1720 |
| 0.1424 | 8.25 | 45600 | 0.1906 | 0.1681 |
| 0.1428 | 8.32 | 46000 | 0.1892 | 0.1649 |
| 0.1424 | 8.39 | 46400 | 0.1916 | 0.1698 |
| 0.1466 | 8.47 | 46800 | 0.1970 | 0.1739 |
| 0.1496 | 8.54 | 47200 | 0.1902 | 0.1662 |
| 0.1408 | 8.61 | 47600 | 0.1858 | 0.1649 |
| 0.1445 | 8.68 | 48000 | 0.1893 | 0.1648 |
| 0.1459 | 8.76 | 48400 | 0.1875 | 0.1686 |
| 0.1433 | 8.83 | 48800 | 0.1920 | 0.1673 |
| 0.1448 | 8.9 | 49200 | 0.1833 | 0.1631 |
| 0.1461 | 8.97 | 49600 | 0.1904 | 0.1693 |
| 0.1451 | 9.04 | 50000 | 0.1969 | 0.1661 |
| 0.1336 | 9.12 | 50400 | 0.1950 | 0.1674 |
| 0.1362 | 9.19 | 50800 | 0.1971 | 0.1685 |
| 0.1316 | 9.26 | 51200 | 0.1928 | 0.1648 |
| 0.132 | 9.33 | 51600 | 0.1908 | 0.1615 |
| 0.1301 | 9.41 | 52000 | 0.1842 | 0.1569 |
| 0.1322 | 9.48 | 52400 | 0.1892 | 0.1616 |
| 0.1391 | 9.55 | 52800 | 0.1956 | 0.1656 |
| 0.132 | 9.62 | 53200 | 0.1876 | 0.1598 |
| 0.1349 | 9.7 | 53600 | 0.1870 | 0.1624 |
| 0.1325 | 9.77 | 54000 | 0.1834 | 0.1586 |
| 0.1389 | 9.84 | 54400 | 0.1892 | 0.1647 |
| 0.1364 | 9.91 | 54800 | 0.1840 | 0.1597 |
| 0.1339 | 9.99 | 55200 | 0.1858 | 0.1626 |
| 0.1269 | 10.06 | 55600 | 0.1875 | 0.1619 |
| 0.1229 | 10.13 | 56000 | 0.1909 | 0.1619 |
| 0.1258 | 10.2 | 56400 | 0.1933 | 0.1631 |
| 0.1256 | 10.27 | 56800 | 0.1930 | 0.1640 |
| 0.1207 | 10.35 | 57200 | 0.1823 | 0.1585 |
| 0.1248 | 10.42 | 57600 | 0.1889 | 0.1596 |
| 0.1264 | 10.49 | 58000 | 0.1845 | 0.1584 |
| 0.1251 | 10.56 | 58400 | 0.1869 | 0.1588 |
| 0.1251 | 10.64 | 58800 | 0.1885 | 0.1613 |
| 0.1276 | 10.71 | 59200 | 0.1855 | 0.1575 |
| 0.1303 | 10.78 | 59600 | 0.1836 | 0.1597 |
| 0.1246 | 10.85 | 60000 | 0.1810 | 0.1573 |
| 0.1283 | 10.93 | 60400 | 0.1830 | 0.1581 |
| 0.1273 | 11.0 | 60800 | 0.1837 | 0.1619 |
| 0.1202 | 11.07 | 61200 | 0.1865 | 0.1588 |
| 0.119 | 11.14 | 61600 | 0.1889 | 0.1580 |
| 0.1179 | 11.22 | 62000 | 0.1884 | 0.1592 |
| 0.1187 | 11.29 | 62400 | 0.1824 | 0.1565 |
| 0.1198 | 11.36 | 62800 | 0.1848 | 0.1552 |
| 0.1154 | 11.43 | 63200 | 0.1866 | 0.1565 |
| 0.1211 | 11.51 | 63600 | 0.1862 | 0.1563 |
| 0.1177 | 11.58 | 64000 | 0.1816 | 0.1527 |
| 0.1156 | 11.65 | 64400 | 0.1834 | 0.1540 |
| 0.1144 | 11.72 | 64800 | 0.1837 | 0.1524 |
| 0.119 | 11.79 | 65200 | 0.1859 | 0.1538 |
| 0.1183 | 11.87 | 65600 | 0.1869 | 0.1558 |
| 0.122 | 11.94 | 66000 | 0.1853 | 0.1535 |
| 0.1197 | 12.01 | 66400 | 0.1871 | 0.1586 |
| 0.1096 | 12.08 | 66800 | 0.1838 | 0.1540 |
| 0.1074 | 12.16 | 67200 | 0.1915 | 0.1592 |
| 0.1084 | 12.23 | 67600 | 0.1845 | 0.1545 |
| 0.1097 | 12.3 | 68000 | 0.1904 | 0.1552 |
| 0.112 | 12.37 | 68400 | 0.1846 | 0.1578 |
| 0.1109 | 12.45 | 68800 | 0.1862 | 0.1549 |
| 0.1114 | 12.52 | 69200 | 0.1889 | 0.1552 |
| 0.1119 | 12.59 | 69600 | 0.1828 | 0.1530 |
| 0.1124 | 12.66 | 70000 | 0.1822 | 0.1540 |
| 0.1127 | 12.74 | 70400 | 0.1865 | 0.1589 |
| 0.1128 | 12.81 | 70800 | 0.1786 | 0.1498 |
| 0.1069 | 12.88 | 71200 | 0.1813 | 0.1522 |
| 0.1069 | 12.95 | 71600 | 0.1895 | 0.1558 |
| 0.1083 | 13.02 | 72000 | 0.1925 | 0.1557 |
| 0.1009 | 13.1 | 72400 | 0.1883 | 0.1522 |
| 0.1007 | 13.17 | 72800 | 0.1829 | 0.1480 |
| 0.1014 | 13.24 | 73200 | 0.1861 | 0.1510 |
| 0.0974 | 13.31 | 73600 | 0.1836 | 0.1486 |
| 0.1006 | 13.39 | 74000 | 0.1821 | 0.1462 |
| 0.0973 | 13.46 | 74400 | 0.1857 | 0.1484 |
| 0.1011 | 13.53 | 74800 | 0.1822 | 0.1471 |
| 0.1031 | 13.6 | 75200 | 0.1823 | 0.1489 |
| 0.1034 | 13.68 | 75600 | 0.1809 | 0.1452 |
| 0.0998 | 13.75 | 76000 | 0.1817 | 0.1490 |
| 0.1071 | 13.82 | 76400 | 0.1808 | 0.1501 |
| 0.1083 | 13.89 | 76800 | 0.1796 | 0.1475 |
| 0.1053 | 13.97 | 77200 | 0.1785 | 0.1470 |
| 0.0978 | 14.04 | 77600 | 0.1886 | 0.1495 |
| 0.094 | 14.11 | 78000 | 0.1854 | 0.1489 |
| 0.0915 | 14.18 | 78400 | 0.1854 | 0.1498 |
| 0.0947 | 14.25 | 78800 | 0.1888 | 0.1500 |
| 0.0939 | 14.33 | 79200 | 0.1885 | 0.1494 |
| 0.0973 | 14.4 | 79600 | 0.1877 | 0.1466 |
| 0.0946 | 14.47 | 80000 | 0.1904 | 0.1494 |
| 0.0931 | 14.54 | 80400 | 0.1815 | 0.1473 |
| 0.0958 | 14.62 | 80800 | 0.1905 | 0.1508 |
| 0.0982 | 14.69 | 81200 | 0.1881 | 0.1511 |
| 0.0963 | 14.76 | 81600 | 0.1823 | 0.1449 |
| 0.0943 | 14.83 | 82000 | 0.1782 | 0.1458 |
| 0.0981 | 14.91 | 82400 | 0.1795 | 0.1465 |
| 0.0995 | 14.98 | 82800 | 0.1811 | 0.1484 |
| 0.0909 | 15.05 | 83200 | 0.1822 | 0.1450 |
| 0.0872 | 15.12 | 83600 | 0.1890 | 0.1466 |
| 0.0878 | 15.2 | 84000 | 0.1859 | 0.1468 |
| 0.0884 | 15.27 | 84400 | 0.1825 | 0.1429 |
| 0.0871 | 15.34 | 84800 | 0.1816 | 0.1438 |
| 0.0883 | 15.41 | 85200 | 0.1817 | 0.1433 |
| 0.0844 | 15.48 | 85600 | 0.1821 | 0.1412 |
| 0.0843 | 15.56 | 86000 | 0.1863 | 0.1411 |
| 0.0805 | 15.63 | 86400 | 0.1863 | 0.1441 |
| 0.085 | 15.7 | 86800 | 0.1808 | 0.1440 |
| 0.0848 | 15.77 | 87200 | 0.1808 | 0.1421 |
| 0.0844 | 15.85 | 87600 | 0.1841 | 0.1406 |
| 0.082 | 15.92 | 88000 | 0.1850 | 0.1442 |
| 0.0854 | 15.99 | 88400 | 0.1773 | 0.1426 |
| 0.0835 | 16.06 | 88800 | 0.1888 | 0.1436 |
| 0.0789 | 16.14 | 89200 | 0.1922 | 0.1434 |
| 0.081 | 16.21 | 89600 | 0.1864 | 0.1448 |
| 0.0799 | 16.28 | 90000 | 0.1902 | 0.1428 |
| 0.0848 | 16.35 | 90400 | 0.1873 | 0.1422 |
| 0.084 | 16.43 | 90800 | 0.1835 | 0.1421 |
| 0.083 | 16.5 | 91200 | 0.1878 | 0.1390 |
| 0.0794 | 16.57 | 91600 | 0.1877 | 0.1398 |
| 0.0807 | 16.64 | 92000 | 0.1800 | 0.1385 |
| 0.0829 | 16.71 | 92400 | 0.1910 | 0.1434 |
| 0.0839 | 16.79 | 92800 | 0.1843 | 0.1381 |
| 0.0815 | 16.86 | 93200 | 0.1812 | 0.1365 |
| 0.0831 | 16.93 | 93600 | 0.1889 | 0.1383 |
| 0.0803 | 17.0 | 94000 | 0.1902 | 0.1403 |
| 0.0724 | 17.08 | 94400 | 0.1934 | 0.1380 |
| 0.0734 | 17.15 | 94800 | 0.1865 | 0.1394 |
| 0.0739 | 17.22 | 95200 | 0.1876 | 0.1395 |
| 0.0758 | 17.29 | 95600 | 0.1938 | 0.1411 |
| 0.0733 | 17.37 | 96000 | 0.1933 | 0.1410 |
| 0.077 | 17.44 | 96400 | 0.1848 | 0.1385 |
| 0.0754 | 17.51 | 96800 | 0.1876 | 0.1407 |
| 0.0746 | 17.58 | 97200 | 0.1863 | 0.1371 |
| 0.0732 | 17.66 | 97600 | 0.1927 | 0.1401 |
| 0.0746 | 17.73 | 98000 | 0.1874 | 0.1390 |
| 0.0755 | 17.8 | 98400 | 0.1853 | 0.1381 |
| 0.0724 | 17.87 | 98800 | 0.1849 | 0.1365 |
| 0.0716 | 17.94 | 99200 | 0.1848 | 0.1380 |
| 0.074 | 18.02 | 99600 | 0.1891 | 0.1362 |
| 0.0687 | 18.09 | 100000 | 0.1974 | 0.1357 |
| 0.0651 | 18.16 | 100400 | 0.1942 | 0.1353 |
| 0.0672 | 18.23 | 100800 | 0.1823 | 0.1363 |
| 0.0671 | 18.31 | 101200 | 0.1959 | 0.1357 |
| 0.0684 | 18.38 | 101600 | 0.1959 | 0.1374 |
| 0.0688 | 18.45 | 102000 | 0.1904 | 0.1353 |
| 0.0696 | 18.52 | 102400 | 0.1926 | 0.1364 |
| 0.0661 | 18.6 | 102800 | 0.1905 | 0.1351 |
| 0.0684 | 18.67 | 103200 | 0.1955 | 0.1343 |
| 0.0712 | 18.74 | 103600 | 0.1873 | 0.1353 |
| 0.0701 | 18.81 | 104000 | 0.1822 | 0.1354 |
| 0.0688 | 18.89 | 104400 | 0.1905 | 0.1373 |
| 0.0695 | 18.96 | 104800 | 0.1879 | 0.1335 |
| 0.0661 | 19.03 | 105200 | 0.2005 | 0.1351 |
| 0.0644 | 19.1 | 105600 | 0.1972 | 0.1351 |
| 0.0627 | 19.18 | 106000 | 0.1956 | 0.1340 |
| 0.0633 | 19.25 | 106400 | 0.1962 | 0.1340 |
| 0.0629 | 19.32 | 106800 | 0.1937 | 0.1342 |
| 0.0636 | 19.39 | 107200 | 0.1905 | 0.1355 |
| 0.0631 | 19.46 | 107600 | 0.1917 | 0.1326 |
| 0.0624 | 19.54 | 108000 | 0.1977 | 0.1355 |
| 0.0621 | 19.61 | 108400 | 0.1941 | 0.1345 |
| 0.0635 | 19.68 | 108800 | 0.1949 | 0.1336 |
| 0.063 | 19.75 | 109200 | 0.1919 | 0.1317 |
| 0.0636 | 19.83 | 109600 | 0.1928 | 0.1317 |
| 0.0612 | 19.9 | 110000 | 0.1923 | 0.1314 |
| 0.0636 | 19.97 | 110400 | 0.1923 | 0.1343 |
| 0.0581 | 20.04 | 110800 | 0.2036 | 0.1332 |
| 0.0573 | 20.12 | 111200 | 0.2007 | 0.1315 |
| 0.0566 | 20.19 | 111600 | 0.1974 | 0.1319 |
| 0.0589 | 20.26 | 112000 | 0.1958 | 0.1322 |
| 0.0577 | 20.33 | 112400 | 0.1946 | 0.1307 |
| 0.0587 | 20.41 | 112800 | 0.1957 | 0.1295 |
| 0.0588 | 20.48 | 113200 | 0.2013 | 0.1306 |
| 0.0594 | 20.55 | 113600 | 0.2010 | 0.1312 |
| 0.0602 | 20.62 | 114000 | 0.1993 | 0.1314 |
| 0.0583 | 20.69 | 114400 | 0.1931 | 0.1297 |
| 0.059 | 20.77 | 114800 | 0.1974 | 0.1305 |
| 0.0566 | 20.84 | 115200 | 0.1979 | 0.1294 |
| 0.0588 | 20.91 | 115600 | 0.1944 | 0.1292 |
| 0.0569 | 20.98 | 116000 | 0.1974 | 0.1309 |
| 0.0554 | 21.06 | 116400 | 0.2080 | 0.1307 |
| 0.0542 | 21.13 | 116800 | 0.2056 | 0.1301 |
| 0.0532 | 21.2 | 117200 | 0.2027 | 0.1309 |
| 0.0535 | 21.27 | 117600 | 0.1970 | 0.1287 |
| 0.0533 | 21.35 | 118000 | 0.2124 | 0.1310 |
| 0.0546 | 21.42 | 118400 | 0.2043 | 0.1300 |
| 0.0544 | 21.49 | 118800 | 0.2056 | 0.1281 |
| 0.0562 | 21.56 | 119200 | 0.1986 | 0.1273 |
| 0.0549 | 21.64 | 119600 | 0.2075 | 0.1283 |
| 0.0522 | 21.71 | 120000 | 0.2058 | 0.1278 |
| 0.052 | 21.78 | 120400 | 0.2057 | 0.1280 |
| 0.0563 | 21.85 | 120800 | 0.1966 | 0.1295 |
| 0.0546 | 21.92 | 121200 | 0.2002 | 0.1285 |
| 0.0539 | 22.0 | 121600 | 0.1996 | 0.1279 |
| 0.0504 | 22.07 | 122000 | 0.2077 | 0.1273 |
| 0.0602 | 22.14 | 122400 | 0.2055 | 0.1278 |
| 0.0503 | 22.21 | 122800 | 0.2037 | 0.1283 |
| 0.0496 | 22.29 | 123200 | 0.2109 | 0.1279 |
| 0.0523 | 22.36 | 123600 | 0.2068 | 0.1276 |
| 0.0508 | 22.43 | 124000 | 0.2051 | 0.1257 |
| 0.0505 | 22.5 | 124400 | 0.2056 | 0.1269 |
| 0.05 | 22.58 | 124800 | 0.1995 | 0.1268 |
| 0.0496 | 22.65 | 125200 | 0.2022 | 0.1290 |
| 0.0484 | 22.72 | 125600 | 0.2095 | 0.1291 |
| 0.0518 | 22.79 | 126000 | 0.2132 | 0.1271 |
| 0.0499 | 22.87 | 126400 | 0.2124 | 0.1263 |
| 0.0485 | 22.94 | 126800 | 0.2092 | 0.1252 |
| 0.0476 | 23.01 | 127200 | 0.2138 | 0.1256 |
| 0.0467 | 23.08 | 127600 | 0.2119 | 0.1256 |
| 0.048 | 23.15 | 128000 | 0.2138 | 0.1269 |
| 0.0461 | 23.23 | 128400 | 0.2036 | 0.1244 |
| 0.0467 | 23.3 | 128800 | 0.2163 | 0.1255 |
| 0.0475 | 23.37 | 129200 | 0.2180 | 0.1258 |
| 0.0468 | 23.44 | 129600 | 0.2129 | 0.1245 |
| 0.0456 | 23.52 | 130000 | 0.2122 | 0.1250 |
| 0.0458 | 23.59 | 130400 | 0.2157 | 0.1257 |
| 0.0453 | 23.66 | 130800 | 0.2088 | 0.1242 |
| 0.045 | 23.73 | 131200 | 0.2144 | 0.1247 |
| 0.0469 | 23.81 | 131600 | 0.2113 | 0.1246 |
| 0.0453 | 23.88 | 132000 | 0.2151 | 0.1234 |
| 0.0471 | 23.95 | 132400 | 0.2130 | 0.1229 |
| 0.0443 | 24.02 | 132800 | 0.2150 | 0.1225 |
| 0.0446 | 24.1 | 133200 | 0.2166 | 0.1235 |
| 0.0435 | 24.17 | 133600 | 0.2143 | 0.1222 |
| 0.0407 | 24.24 | 134000 | 0.2175 | 0.1218 |
| 0.0421 | 24.31 | 134400 | 0.2147 | 0.1227 |
| 0.0435 | 24.38 | 134800 | 0.2193 | 0.1233 |
| 0.0414 | 24.46 | 135200 | 0.2172 | 0.1225 |
| 0.0419 | 24.53 | 135600 | 0.2156 | 0.1225 |
| 0.0419 | 24.6 | 136000 | 0.2143 | 0.1235 |
| 0.0423 | 24.67 | 136400 | 0.2179 | 0.1226 |
| 0.0423 | 24.75 | 136800 | 0.2144 | 0.1221 |
| 0.0424 | 24.82 | 137200 | 0.2135 | 0.1210 |
| 0.0419 | 24.89 | 137600 | 0.2166 | 0.1218 |
| 0.0408 | 24.96 | 138000 | 0.2151 | 0.1211 |
| 0.0433 | 25.04 | 138400 | 0.2174 | 0.1214 |
| 0.0395 | 25.11 | 138800 | 0.2242 | 0.1210 |
| 0.0403 | 25.18 | 139200 | 0.2219 | 0.1215 |
| 0.0413 | 25.25 | 139600 | 0.2225 | 0.1207 |
| 0.0389 | 25.33 | 140000 | 0.2187 | 0.1202 |
| 0.0395 | 25.4 | 140400 | 0.2244 | 0.1204 |
| 0.0398 | 25.47 | 140800 | 0.2263 | 0.1199 |
| 0.0386 | 25.54 | 141200 | 0.2165 | 0.1187 |
| 0.0396 | 25.61 | 141600 | 0.2171 | 0.1187 |
| 0.0406 | 25.69 | 142000 | 0.2199 | 0.1190 |
| 0.0404 | 25.76 | 142400 | 0.2224 | 0.1190 |
| 0.0391 | 25.83 | 142800 | 0.2230 | 0.1185 |
| 0.04 | 25.9 | 143200 | 0.2208 | 0.1200 |
| 0.0396 | 25.98 | 143600 | 0.2179 | 0.1191 |
| 0.0353 | 26.05 | 144000 | 0.2285 | 0.1178 |
| 0.0368 | 26.12 | 144400 | 0.2273 | 0.1186 |
| 0.0393 | 26.19 | 144800 | 0.2247 | 0.1196 |
| 0.0368 | 26.27 | 145200 | 0.2314 | 0.1181 |
| 0.0373 | 26.34 | 145600 | 0.2215 | 0.1188 |
| 0.038 | 26.41 | 146000 | 0.2262 | 0.1180 |
| 0.0363 | 26.48 | 146400 | 0.2250 | 0.1172 |
| 0.0365 | 26.56 | 146800 | 0.2299 | 0.1174 |
| 0.0382 | 26.63 | 147200 | 0.2292 | 0.1165 |
| 0.0365 | 26.7 | 147600 | 0.2282 | 0.1165 |
| 0.0371 | 26.77 | 148000 | 0.2276 | 0.1172 |
| 0.0365 | 26.85 | 148400 | 0.2280 | 0.1173 |
| 0.0376 | 26.92 | 148800 | 0.2248 | 0.1164 |
| 0.0365 | 26.99 | 149200 | 0.2230 | 0.1158 |
| 0.0343 | 27.06 | 149600 | 0.2300 | 0.1157 |
| 0.0354 | 27.13 | 150000 | 0.2298 | 0.1166 |
| 0.0333 | 27.21 | 150400 | 0.2307 | 0.1158 |
| 0.0353 | 27.28 | 150800 | 0.2300 | 0.1157 |
| 0.036 | 27.35 | 151200 | 0.2335 | 0.1160 |
| 0.0343 | 27.42 | 151600 | 0.2324 | 0.1155 |
| 0.0361 | 27.5 | 152000 | 0.2300 | 0.1150 |
| 0.0352 | 27.57 | 152400 | 0.2279 | 0.1146 |
| 0.0353 | 27.64 | 152800 | 0.2307 | 0.1149 |
| 0.0342 | 27.71 | 153200 | 0.2315 | 0.1152 |
| 0.0345 | 27.79 | 153600 | 0.2290 | 0.1146 |
| 0.034 | 27.86 | 154000 | 0.2319 | 0.1141 |
| 0.0347 | 27.93 | 154400 | 0.2312 | 0.1144 |
| 0.0338 | 28.0 | 154800 | 0.2328 | 0.1146 |
| 0.0347 | 28.08 | 155200 | 0.2352 | 0.1151 |
| 0.033 | 28.15 | 155600 | 0.2337 | 0.1142 |
| 0.0336 | 28.22 | 156000 | 0.2345 | 0.1141 |
| 0.0337 | 28.29 | 156400 | 0.2315 | 0.1143 |
| 0.0314 | 28.36 | 156800 | 0.2353 | 0.1140 |
| 0.0333 | 28.44 | 157200 | 0.2338 | 0.1146 |
| 0.0317 | 28.51 | 157600 | 0.2345 | 0.1139 |
| 0.0326 | 28.58 | 158000 | 0.2336 | 0.1143 |
| 0.033 | 28.65 | 158400 | 0.2352 | 0.1137 |
| 0.0325 | 28.73 | 158800 | 0.2312 | 0.1130 |
| 0.0321 | 28.8 | 159200 | 0.2338 | 0.1133 |
| 0.0334 | 28.87 | 159600 | 0.2335 | 0.1130 |
| 0.0317 | 28.94 | 160000 | 0.2340 | 0.1126 |
| 0.0321 | 29.02 | 160400 | 0.2349 | 0.1126 |
| 0.032 | 29.09 | 160800 | 0.2369 | 0.1127 |
| 0.0312 | 29.16 | 161200 | 0.2363 | 0.1124 |
| 0.0303 | 29.23 | 161600 | 0.2363 | 0.1123 |
| 0.0322 | 29.31 | 162000 | 0.2354 | 0.1124 |
| 0.03 | 29.38 | 162400 | 0.2360 | 0.1122 |
| 0.0299 | 29.45 | 162800 | 0.2378 | 0.1124 |
| 0.0313 | 29.52 | 163200 | 0.2377 | 0.1120 |
| 0.0299 | 29.59 | 163600 | 0.2367 | 0.1124 |
| 0.0313 | 29.67 | 164000 | 0.2380 | 0.1120 |
| 0.031 | 29.74 | 164400 | 0.2369 | 0.1120 |
| 0.0327 | 29.81 | 164800 | 0.2358 | 0.1117 |
| 0.0316 | 29.88 | 165200 | 0.2358 | 0.1118 |
| 0.0307 | 29.96 | 165600 | 0.2362 | 0.1118 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
yaswanth/xls-r-300m-yaswanth-hindi2
|
yaswanth
| 2022-03-23T18:28:10Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"hi",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- hi
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: xls-r-300m-yaswanth-hindi2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-300m-yaswanth-hindi2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7163
- Wer: 0.6951
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0007
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.986 | 4.46 | 500 | 2.0194 | 1.1857 |
| 0.9232 | 8.93 | 1000 | 1.2665 | 0.8435 |
| 0.5094 | 13.39 | 1500 | 1.2473 | 0.7893 |
| 0.3618 | 17.86 | 2000 | 1.3675 | 0.7789 |
| 0.2914 | 22.32 | 2500 | 1.3725 | 0.7914 |
| 0.2462 | 26.79 | 3000 | 1.4567 | 0.7795 |
| 0.228 | 31.25 | 3500 | 1.6179 | 0.7872 |
| 0.1995 | 35.71 | 4000 | 1.4932 | 0.7555 |
| 0.1878 | 40.18 | 4500 | 1.5352 | 0.7480 |
| 0.165 | 44.64 | 5000 | 1.5238 | 0.7440 |
| 0.1514 | 49.11 | 5500 | 1.5842 | 0.7498 |
| 0.1416 | 53.57 | 6000 | 1.6662 | 0.7524 |
| 0.1351 | 58.04 | 6500 | 1.6280 | 0.7356 |
| 0.1196 | 62.5 | 7000 | 1.6329 | 0.7250 |
| 0.1109 | 66.96 | 7500 | 1.6435 | 0.7302 |
| 0.1008 | 71.43 | 8000 | 1.7058 | 0.7170 |
| 0.0907 | 75.89 | 8500 | 1.6880 | 0.7387 |
| 0.0816 | 80.36 | 9000 | 1.6957 | 0.7031 |
| 0.0743 | 84.82 | 9500 | 1.7547 | 0.7222 |
| 0.0694 | 89.29 | 10000 | 1.6974 | 0.7117 |
| 0.0612 | 93.75 | 10500 | 1.7251 | 0.7020 |
| 0.0577 | 98.21 | 11000 | 1.7163 | 0.6951 |
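The `Wer` column above is the word error rate: the word-level edit distance between reference and hypothesis, divided by the number of reference words. A minimal sketch of the computation (function names are illustrative):

```python
def edit_distance(ref, hyp):
    # Single-row Levenshtein distance over two token sequences.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (r != h))
    return d[-1]

def wer(reference, hypothesis):
    # Word error rate: substitutions + insertions + deletions per reference word.
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)
```

With one substitution in a three-word reference, `wer("a b c", "a x c")` is 1/3.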
### Framework versions
- Transformers 4.16.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
geninhu/xls-asr-vi-40h-1B
|
geninhu
| 2022-03-23T18:27:57Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"common-voice",
"hf-asr-leaderboard",
"robust-speech-event",
"vi",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
language:
- vi
tags:
- automatic-speech-recognition
- common-voice
- hf-asr-leaderboard
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: xls-asr-vi-40h-1B
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7.0
type: mozilla-foundation/common_voice_7_0
args: vi
metrics:
- name: Test WER (with LM)
type: wer
value: 25.846
- name: Test CER (with LM)
type: cer
value: 12.961
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8.0
type: mozilla-foundation/common_voice_8_0
args: vi
metrics:
- name: Test WER (with LM)
type: wer
value: 31.158
- name: Test CER (with LM)
type: cer
value: 16.179
---
# xls-asr-vi-40h-1B
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on 40 hours of FPT Open Speech Dataset (FOSD) and Common Voice 7.0.
### Benchmark WER results
| | [VIVOS](https://huggingface.co/datasets/vivos) | [COMMON VOICE 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | [COMMON VOICE 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0) |
|---|---|---|---|
|without LM| 25.93 | 34.21 | |
|with 4-gram LM| 24.11 | 25.84 | 31.158 |
### Benchmark CER results
| | [VIVOS](https://huggingface.co/datasets/vivos) | [COMMON VOICE 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | [COMMON VOICE 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0) |
|---|---|---|---|
|without LM| 9.24 | 19.94 | |
|with 4-gram LM| 10.37 | 12.96 | 16.179 |
## Evaluation
Use the `eval.py` script included in the repo to run the evaluation:
```bash
python eval.py --model_id geninhu/xls-asr-vi-40h-1B --dataset mozilla-foundation/common_voice_7_0 --config vi --split test --log_outputs
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 10.0
- mixed_precision_training: Native AMP
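With `lr_scheduler_type: linear`, the learning rate ramps up linearly over the warmup steps and then decays linearly to zero. A sketch of that schedule (the ~40 500 total steps are taken from the training results table below; the function name is illustrative):

```python
def linear_warmup_lr(step, base_lr=5e-05, warmup_steps=1500, total_steps=40500):
    # Ramp up for `warmup_steps`, then decay linearly to zero at `total_steps`.
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```

Halfway through warmup the learning rate is half of `base_lr`, and it reaches zero at the final step.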
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.6222 | 1.85 | 1500 | 5.9479 | 0.5474 |
| 1.1362 | 3.7 | 3000 | 7.9799 | 0.5094 |
| 0.7814 | 5.56 | 4500 | 5.0330 | 0.4724 |
| 0.6281 | 7.41 | 6000 | 2.3484 | 0.5020 |
| 0.5472 | 9.26 | 7500 | 2.2495 | 0.4793 |
| 0.4827 | 11.11 | 9000 | 1.1530 | 0.4768 |
| 0.4327 | 12.96 | 10500 | 1.6160 | 0.4646 |
| 0.3989 | 14.81 | 12000 | 3.2633 | 0.4703 |
| 0.3522 | 16.67 | 13500 | 2.2337 | 0.4708 |
| 0.3201 | 18.52 | 15000 | 3.6879 | 0.4565 |
| 0.2899 | 20.37 | 16500 | 5.4389 | 0.4599 |
| 0.2776 | 22.22 | 18000 | 3.5284 | 0.4537 |
| 0.2574 | 24.07 | 19500 | 2.1759 | 0.4649 |
| 0.2378 | 25.93 | 21000 | 3.3901 | 0.4448 |
| 0.217 | 27.78 | 22500 | 1.1632 | 0.4565 |
| 0.2115 | 29.63 | 24000 | 1.7441 | 0.4232 |
| 0.1959 | 31.48 | 25500 | 3.4992 | 0.4304 |
| 0.187 | 33.33 | 27000 | 3.6163 | 0.4369 |
| 0.1748 | 35.19 | 28500 | 3.6038 | 0.4467 |
| 0.17 | 37.04 | 30000 | 2.9708 | 0.4362 |
| 0.159 | 38.89 | 31500 | 3.2045 | 0.4279 |
| 0.153 | 40.74 | 33000 | 3.2427 | 0.4287 |
| 0.1463 | 42.59 | 34500 | 3.5439 | 0.4270 |
| 0.139 | 44.44 | 36000 | 3.9381 | 0.4150 |
| 0.1352 | 46.3 | 37500 | 4.1744 | 0.4092 |
| 0.1369 | 48.15 | 39000 | 4.2279 | 0.4154 |
| 0.1273 | 50.0 | 40500 | 4.1691 | 0.4133 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
nouamanetazi/wav2vec2-xls-r-300m-ar-with-lm
|
nouamanetazi
| 2022-03-23T18:27:54Z | 15 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ar",
"common_voice",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- ar
license: apache-2.0
tags:
- ar
- automatic-speech-recognition
- common_voice
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- common_voice
model-index:
- name: XLS-R-300M - Arabic
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: ar
metrics:
- name: Test WER
type: wer
value: 1.0
- name: Test CER
type: cer
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-ar
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - AR dataset.
It achieves the following results on the evaluation set:
- eval_loss: 3.0191
- eval_wer: 1.0
- eval_runtime: 252.2389
- eval_samples_per_second: 30.217
- eval_steps_per_second: 0.476
- epoch: 1.0
- step: 340
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 5
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
Please use the evaluation script `eval.py` included in the repo.
1. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id nouamanetazi/wav2vec2-xls-r-300m-ar --dataset speech-recognition-community-v2/dev_data --config ar --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
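The `--chunk_length_s 5.0 --stride_length_s 1.0` flags make the evaluation split long recordings into overlapping windows. A rough sketch of how such boundaries can be laid out (illustrative only; the actual pipeline strides on both sides of each chunk):

```python
def chunk_bounds(total_s, chunk_s=5.0, stride_s=1.0):
    # Left-to-right windows of `chunk_s` seconds overlapping by `stride_s`.
    bounds, start = [], 0.0
    while start < total_s:
        end = min(start + chunk_s, total_s)
        bounds.append((start, end))
        if end == total_s:
            break
        start = end - stride_s
    return bounds
```

A 12-second clip yields windows (0, 5), (4, 9) and (8, 12).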
|
lgris/sew-tiny-portuguese-cv
|
lgris
| 2022-03-23T18:27:49Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"sew",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"pt",
"robust-speech-event",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- pt
license: apache-2.0
tags:
- generated_from_trainer
- hf-asr-leaderboard
- pt
- robust-speech-event
datasets:
- common_voice
model-index:
- name: sew-tiny-portuguese-cv
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 6
type: common_voice
args: pt
metrics:
- name: Test WER
type: wer
value: 30.02
- name: Test CER
type: cer
value: 10.34
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: sv
metrics:
- name: Test WER
type: wer
value: 56.46
- name: Test CER
type: cer
value: 22.94
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: pt
metrics:
- name: Test WER
type: wer
value: 57.17
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: pt
metrics:
- name: Test WER
type: wer
value: 61.3
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sew-tiny-portuguese-cv
This model is a fine-tuned version of [lgris/sew-tiny-pt](https://huggingface.co/lgris/sew-tiny-pt) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5110
- Wer: 0.2842
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 40000
- mixed_precision_training: Native AMP
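Here `total_train_batch_size` 32 is the per-device batch size of 8 times the 4 gradient-accumulation steps: gradients from four micro-batches are averaged before each optimizer step. A toy illustration, assuming a hypothetical loss `w * x` whose per-example gradient with respect to `w` is simply `x`:

```python
xs = list(range(32))                                    # one effective batch of 32
micro_batches = [xs[i:i + 8] for i in range(0, 32, 8)]  # 4 micro-batches of 8

# Mean gradient over the full effective batch:
full_grad = sum(xs) / len(xs)

# Accumulated: average within each micro-batch, then across micro-batches.
accumulated = sum(sum(m) / len(m) for m in micro_batches) / len(micro_batches)
print(full_grad, accumulated)  # both 15.5
```

Because the micro-batches are equal-sized, the accumulated gradient equals the full-batch gradient exactly.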
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| No log | 4.92 | 1000 | 0.8468 | 0.6494 |
| 3.4638 | 9.85 | 2000 | 0.4978 | 0.3815 |
| 3.4638 | 14.78 | 3000 | 0.4734 | 0.3417 |
| 0.9904 | 19.7 | 4000 | 0.4577 | 0.3344 |
| 0.9904 | 24.63 | 5000 | 0.4376 | 0.3170 |
| 0.8849 | 29.55 | 6000 | 0.4225 | 0.3118 |
| 0.8849 | 34.48 | 7000 | 0.4354 | 0.3080 |
| 0.819 | 39.41 | 8000 | 0.4434 | 0.3004 |
| 0.819 | 44.33 | 9000 | 0.4710 | 0.3132 |
| 0.7706 | 49.26 | 10000 | 0.4497 | 0.3064 |
| 0.7706 | 54.19 | 11000 | 0.4598 | 0.3100 |
| 0.7264 | 59.11 | 12000 | 0.4271 | 0.3013 |
| 0.7264 | 64.04 | 13000 | 0.4333 | 0.2959 |
| 0.6909 | 68.96 | 14000 | 0.4554 | 0.3019 |
| 0.6909 | 73.89 | 15000 | 0.4444 | 0.2888 |
| 0.6614 | 78.81 | 16000 | 0.4734 | 0.3081 |
| 0.6614 | 83.74 | 17000 | 0.4820 | 0.3058 |
| 0.6379 | 88.67 | 18000 | 0.4416 | 0.2950 |
| 0.6379 | 93.59 | 19000 | 0.4614 | 0.2974 |
| 0.6055 | 98.52 | 20000 | 0.4812 | 0.3018 |
| 0.6055 | 103.45 | 21000 | 0.4700 | 0.3018 |
| 0.5823 | 108.37 | 22000 | 0.4726 | 0.2999 |
| 0.5823 | 113.3 | 23000 | 0.4979 | 0.2887 |
| 0.5597 | 118.23 | 24000 | 0.4813 | 0.2980 |
| 0.5597 | 123.15 | 25000 | 0.4968 | 0.2972 |
| 0.542 | 128.08 | 26000 | 0.5331 | 0.3059 |
| 0.542 | 133.0 | 27000 | 0.5046 | 0.2978 |
| 0.5185 | 137.93 | 28000 | 0.4882 | 0.2922 |
| 0.5185 | 142.85 | 29000 | 0.4945 | 0.2938 |
| 0.499 | 147.78 | 30000 | 0.4971 | 0.2913 |
| 0.499 | 152.71 | 31000 | 0.4948 | 0.2873 |
| 0.4811 | 157.63 | 32000 | 0.4924 | 0.2918 |
| 0.4811 | 162.56 | 33000 | 0.5128 | 0.2911 |
| 0.4679 | 167.49 | 34000 | 0.5098 | 0.2892 |
| 0.4679 | 172.41 | 35000 | 0.4966 | 0.2863 |
| 0.456 | 177.34 | 36000 | 0.5033 | 0.2839 |
| 0.456 | 182.27 | 37000 | 0.5114 | 0.2875 |
| 0.4453 | 187.19 | 38000 | 0.5154 | 0.2859 |
| 0.4453 | 192.12 | 39000 | 0.5102 | 0.2847 |
| 0.4366 | 197.04 | 40000 | 0.5110 | 0.2842 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
w11wo/wav2vec2-xls-r-300m-zh-HK-v2
|
w11wo
| 2022-03-23T18:27:41Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"dataset:common_voice",
"arxiv:2111.09296",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: zh-HK
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- common_voice
model-index:
- name: Wav2Vec2 XLS-R 300M Cantonese (zh-HK)
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice
type: common_voice
args: zh-HK
metrics:
- name: Test CER
type: cer
value: 31.73
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: zh-HK
metrics:
- name: Test CER
type: cer
value: 23.11
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: zh-HK
metrics:
- name: Test CER
type: cer
value: 23.02
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: zh-HK
metrics:
- name: Test CER
type: cer
value: 56.6
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: zh-HK
metrics:
- name: Test CER
type: cer
value: 55.11
---
# Wav2Vec2 XLS-R 300M Cantonese (zh-HK)
Wav2Vec2 XLS-R 300M Cantonese (zh-HK) is an automatic speech recognition model based on the [XLS-R](https://arxiv.org/abs/2111.09296) architecture. This model is a fine-tuned version of [Wav2Vec2-XLS-R-300M](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the `zh-HK` subset of the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
This model was trained using HuggingFace's PyTorch framework and is part of the [Robust Speech Challenge Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614) organized by HuggingFace. All training was done on a Tesla V100, sponsored by OVH.
All necessary scripts used for training can be found in the [Files and versions](https://huggingface.co/w11wo/wav2vec2-xls-r-300m-zh-HK-v2/tree/main) tab, as well as the [Training metrics](https://huggingface.co/w11wo/wav2vec2-xls-r-300m-zh-HK-v2/tensorboard) logged via Tensorboard.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
| ------------------------------ | ------- | ----- | ------------------------------- |
| `wav2vec2-xls-r-300m-zh-HK-v2` | 300M | XLS-R | `Common Voice zh-HK` Dataset |
## Evaluation Results
The model achieves the following results on evaluation:
| Dataset | Loss | CER |
| -------------------------------- | ------ | ------ |
| `Common Voice` | 0.8089 | 31.73% |
| `Common Voice 7` | N/A | 23.11% |
| `Common Voice 8` | N/A | 23.02% |
| `Robust Speech Event - Dev Data` | N/A | 56.60% |
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- `learning_rate`: 0.0001
- `train_batch_size`: 8
- `eval_batch_size`: 8
- `seed`: 42
- `gradient_accumulation_steps`: 4
- `total_train_batch_size`: 32
- `optimizer`: Adam with `betas=(0.9, 0.999)` and `epsilon=1e-08`
- `lr_scheduler_type`: linear
- `lr_scheduler_warmup_steps`: 2000
- `num_epochs`: 100.0
- `mixed_precision_training`: Native AMP
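The hyperparameters above imply an effective batch size of 8 × 4 = 32 and a linear warmup-then-decay learning-rate schedule. The sketch below illustrates that arithmetic in plain Python (an illustration of the listed values only, not the actual Trainer internals; `total_steps=37000` is taken from the last row of the training table):

```python
# Sketch of the schedule implied by the hyperparameters above:
# effective batch = per-device batch * gradient accumulation steps,
# and a linear warmup (2000 steps) followed by linear decay.

def effective_batch_size(train_batch_size: int, grad_accum_steps: int) -> int:
    return train_batch_size * grad_accum_steps

def linear_schedule_lr(step: int, base_lr: float = 1e-4,
                       warmup_steps: int = 2000, total_steps: int = 37000) -> float:
    """Learning rate at `step` for linear warmup + linear decay."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(effective_batch_size(8, 4))  # 32, matching total_train_batch_size
print(linear_schedule_lr(1000))    # halfway through warmup -> 5e-05
```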
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
| :-----------: | :---: | :---: | :-------------: | :----: | :----: |
| 69.8341 | 1.34 | 500 | 80.0722 | 1.0 | 1.0 |
| 6.6418 | 2.68 | 1000 | 6.6346 | 1.0 | 1.0 |
| 6.2419 | 4.02 | 1500 | 6.2909 | 1.0 | 1.0 |
| 6.0813 | 5.36 | 2000 | 6.1150 | 1.0 | 1.0 |
| 5.9677 | 6.7 | 2500 | 6.0301 | 1.1386 | 1.0028 |
| 5.9296 | 8.04 | 3000 | 5.8975 | 1.2113 | 1.0058 |
| 5.6434 | 9.38 | 3500 | 5.5404 | 2.1624 | 1.0171 |
| 5.1974 | 10.72 | 4000 | 4.5440 | 2.1702 | 0.9366 |
| 4.3601 | 12.06 | 4500 | 3.3839 | 2.2464 | 0.8998 |
| 3.9321 | 13.4 | 5000 | 2.8785 | 2.3097 | 0.8400 |
| 3.6462 | 14.74 | 5500 | 2.5108 | 1.9623 | 0.6663 |
| 3.5156 | 16.09 | 6000 | 2.2790 | 1.6479 | 0.5706 |
| 3.32 | 17.43 | 6500 | 2.1450 | 1.8337 | 0.6244 |
| 3.1918 | 18.77 | 7000 | 1.8536 | 1.9394 | 0.6017 |
| 3.1139 | 20.11 | 7500 | 1.7205 | 1.9112 | 0.5638 |
| 2.8995 | 21.45 | 8000 | 1.5478 | 1.0624 | 0.3250 |
| 2.7572 | 22.79 | 8500 | 1.4068 | 1.1412 | 0.3367 |
| 2.6881 | 24.13 | 9000 | 1.3312 | 2.0100 | 0.5683 |
| 2.5993 | 25.47 | 9500 | 1.2553 | 2.0039 | 0.6450 |
| 2.5304 | 26.81 | 10000 | 1.2422 | 2.0394 | 0.5789 |
| 2.4352 | 28.15 | 10500 | 1.1582 | 1.9970 | 0.5507 |
| 2.3795 | 29.49 | 11000 | 1.1160 | 1.8255 | 0.4844 |
| 2.3287 | 30.83 | 11500 | 1.0775 | 1.4123 | 0.3780 |
| 2.2622 | 32.17 | 12000 | 1.0704 | 1.7445 | 0.4894 |
| 2.2225 | 33.51 | 12500 | 1.0272 | 1.7237 | 0.5058 |
| 2.1843 | 34.85 | 13000 | 0.9756 | 1.8042 | 0.5028 |
| 2.1 | 36.19 | 13500 | 0.9527 | 1.8909 | 0.6055 |
| 2.0741 | 37.53 | 14000 | 0.9418 | 1.9026 | 0.5880 |
| 2.0179 | 38.87 | 14500 | 0.9363 | 1.7977 | 0.5246 |
| 2.0615 | 40.21 | 15000 | 0.9635 | 1.8112 | 0.5599 |
| 1.9448 | 41.55 | 15500 | 0.9249 | 1.7250 | 0.4914 |
| 1.8966 | 42.89 | 16000 | 0.9023 | 1.5829 | 0.4319 |
| 1.8662 | 44.24 | 16500 | 0.9002 | 1.4833 | 0.4230 |
| 1.8136 | 45.58 | 17000 | 0.9076 | 1.1828 | 0.2987 |
| 1.7908 | 46.92 | 17500 | 0.8774 | 1.5773 | 0.4258 |
| 1.7354 | 48.26 | 18000 | 0.8727 | 1.5037 | 0.4024 |
| 1.6739 | 49.6 | 18500 | 0.8636 | 1.1239 | 0.2789 |
| 1.6457 | 50.94 | 19000 | 0.8516 | 1.2269 | 0.3104 |
| 1.5847 | 52.28 | 19500 | 0.8399 | 1.3309 | 0.3360 |
| 1.5971 | 53.62 | 20000 | 0.8441 | 1.3153 | 0.3335 |
| 1.602 | 54.96 | 20500 | 0.8590 | 1.2932 | 0.3433 |
| 1.5063 | 56.3 | 21000 | 0.8334 | 1.1312 | 0.2875 |
| 1.4631 | 57.64 | 21500 | 0.8474 | 1.1698 | 0.2999 |
| 1.4997 | 58.98 | 22000 | 0.8638 | 1.4279 | 0.3854 |
| 1.4301 | 60.32 | 22500 | 0.8550 | 1.2737 | 0.3300 |
| 1.3798 | 61.66 | 23000 | 0.8266 | 1.1802 | 0.2934 |
| 1.3454 | 63.0 | 23500 | 0.8235 | 1.3816 | 0.3711 |
| 1.3678 | 64.34 | 24000 | 0.8550 | 1.6427 | 0.5035 |
| 1.3761 | 65.68 | 24500 | 0.8510 | 1.6709 | 0.4907 |
| 1.2668 | 67.02 | 25000 | 0.8515 | 1.5842 | 0.4505 |
| 1.2835 | 68.36 | 25500 | 0.8283 | 1.5353 | 0.4221 |
| 1.2961 | 69.7 | 26000 | 0.8339 | 1.5743 | 0.4369 |
| 1.2656 | 71.05 | 26500 | 0.8331 | 1.5331 | 0.4217 |
| 1.2556 | 72.39 | 27000 | 0.8242 | 1.4708 | 0.4109 |
| 1.2043 | 73.73 | 27500 | 0.8245 | 1.4469 | 0.4031 |
| 1.2722 | 75.07 | 28000 | 0.8202 | 1.4924 | 0.4096 |
| 1.202 | 76.41 | 28500 | 0.8290 | 1.3807 | 0.3719 |
| 1.1679 | 77.75 | 29000 | 0.8195 | 1.4097 | 0.3749 |
| 1.1967 | 79.09 | 29500 | 0.8059 | 1.2074 | 0.3077 |
| 1.1241 | 80.43 | 30000 | 0.8137 | 1.2451 | 0.3270 |
| 1.1414 | 81.77 | 30500 | 0.8117 | 1.2031 | 0.3121 |
| 1.132 | 83.11 | 31000 | 0.8234 | 1.4266 | 0.3901 |
| 1.0982 | 84.45 | 31500 | 0.8064 | 1.3712 | 0.3607 |
| 1.0797 | 85.79 | 32000 | 0.8167 | 1.3356 | 0.3562 |
| 1.0119 | 87.13 | 32500 | 0.8215 | 1.2754 | 0.3268 |
| 1.0216 | 88.47 | 33000 | 0.8163 | 1.2512 | 0.3184 |
| 1.0375 | 89.81 | 33500 | 0.8137 | 1.2685 | 0.3290 |
| 0.9794 | 91.15 | 34000 | 0.8220 | 1.2724 | 0.3255 |
| 1.0207 | 92.49 | 34500 | 0.8165 | 1.2906 | 0.3361 |
| 1.0169 | 93.83 | 35000 | 0.8153 | 1.2819 | 0.3305 |
| 1.0127 | 95.17 | 35500 | 0.8187 | 1.2832 | 0.3252 |
| 0.9978 | 96.51 | 36000 | 0.8111 | 1.2612 | 0.3210 |
| 0.9923 | 97.85 | 36500 | 0.8076 | 1.2278 | 0.3122 |
| 1.0451 | 99.2 | 37000 | 0.8086 | 1.2451 | 0.3156 |
## Disclaimer
Consider the biases present in the pre-training datasets, which may carry over into this model's predictions.
## Authors
Wav2Vec2 XLS-R 300M Cantonese (zh-HK) was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on OVH Cloud.
## Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
|
cahya/wav2vec2-luganda
|
cahya
| 2022-03-23T18:27:18Z | 27 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"common_voice",
"hf-asr-leaderboard",
"lg",
"robust-speech-event",
"speech",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: lg
datasets:
- mozilla-foundation/common_voice_7_0
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- common_voice
- hf-asr-leaderboard
- lg
- robust-speech-event
- speech
license: apache-2.0
model-index:
- name: Wav2Vec2 Luganda by Indonesian-NLP
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice lg
type: common_voice
args: lg
metrics:
- name: Test WER
type: wer
value: 9.332
- name: Test CER
type: cer
value: 1.987
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: lg
metrics:
- name: Test WER
type: wer
value: 13.844
- name: Test CER
type: cer
value: 2.68
---
# Automatic Speech Recognition for Luganda
This is the model built for the
[Mozilla Luganda Automatic Speech Recognition competition](https://zindi.africa/competitions/mozilla-luganda-automatic-speech-recognition).
It is a fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
model on the [Luganda Common Voice dataset](https://huggingface.co/datasets/common_voice) version 7.0.
We also provide a [live demo](https://huggingface.co/spaces/indonesian-nlp/luganda-asr) to test the model.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "lg", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("indonesian-nlp/wav2vec2-luganda")
model = Wav2Vec2ForCTC.from_pretrained("indonesian-nlp/wav2vec2-luganda")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
if "audio" in batch:
speech_array = torch.tensor(batch["audio"]["array"])
else:
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the Luganda test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "lg", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("indonesian-nlp/wav2vec2-luganda")
model = Wav2Vec2ForCTC.from_pretrained("indonesian-nlp/wav2vec2-luganda")
model.to("cuda")
chars_to_ignore = [",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�", "‘", "’", "’"]
chars_to_ignore_regex = f'[{"".join(chars_to_ignore)}]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
if "audio" in batch:
speech_array = torch.tensor(batch["audio"]["array"])
else:
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
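The card also reports a CER score. A character error rate can be sketched as Levenshtein distance over characters divided by reference length (a minimal illustrative helper; the scores above were computed with the official eval scripts, not this code):

```python
# Minimal character error rate (CER) sketch: Levenshtein distance over
# characters, divided by reference length. Illustrative only.

def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,               # deletion
                            curr[j - 1] + 1,           # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def cer(prediction: str, reference: str) -> float:
    return levenshtein(prediction, reference) / len(reference)

print(cer("webale nyo", "webale nnyo"))  # one missing character -> ~0.0909
```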
WER without KenLM: 15.38 %
WER with KenLM (**Test Result**): 7.53 %
## Training
The Common Voice `train`, `validation`, and ... datasets were used for training as well as ... and ... # TODO
The script used for training can be found [here](https://github.com/indonesian-nlp/luganda-asr)
|
DrishtiSharma/wav2vec2-large-xls-r-300m-bg-v1
|
DrishtiSharma
| 2022-03-23T18:27:15Z | 12 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"bg",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- bg
license: apache-2.0
tags:
- automatic-speech-recognition
- bg
- generated_from_trainer
- hf-asr-leaderboard
- model_for_talk
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xls-r-300m-bg-v1
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: bg
metrics:
- name: Test WER
type: wer
value: 0.4709579127785184
- name: Test CER
type: cer
value: 0.10205125354383235
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: bg
metrics:
- name: Test WER
type: wer
value: 0.7053128872366791
- name: Test CER
type: cer
value: 0.210804311998487
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: bg
metrics:
- name: Test WER
type: wer
value: 72.6
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-bg-v1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - BG dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5197
- Wer: 0.4689
### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-bg-v1 --dataset mozilla-foundation/common_voice_8_0 --config bg --split test --log_outputs
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-bg-v1 --dataset speech-recognition-community-v2/dev_data --config bg --split validation --chunk_length_s 10 --stride_length_s 1
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.3711 | 2.61 | 300 | 4.3122 | 1.0 |
| 3.1653 | 5.22 | 600 | 3.1156 | 1.0 |
| 2.8904 | 7.83 | 900 | 2.8421 | 0.9918 |
| 0.9207 | 10.43 | 1200 | 0.9895 | 0.8689 |
| 0.6384 | 13.04 | 1500 | 0.6994 | 0.7700 |
| 0.5215 | 15.65 | 1800 | 0.5628 | 0.6443 |
| 0.4573 | 18.26 | 2100 | 0.5316 | 0.6174 |
| 0.3875 | 20.87 | 2400 | 0.4932 | 0.5779 |
| 0.3562 | 23.48 | 2700 | 0.4972 | 0.5475 |
| 0.3218 | 26.09 | 3000 | 0.4895 | 0.5219 |
| 0.2954 | 28.7 | 3300 | 0.5226 | 0.5192 |
| 0.287 | 31.3 | 3600 | 0.4957 | 0.5146 |
| 0.2587 | 33.91 | 3900 | 0.4944 | 0.4893 |
| 0.2496 | 36.52 | 4200 | 0.4976 | 0.4895 |
| 0.2365 | 39.13 | 4500 | 0.5185 | 0.4819 |
| 0.2264 | 41.74 | 4800 | 0.5152 | 0.4776 |
| 0.2224 | 44.35 | 5100 | 0.5031 | 0.4746 |
| 0.2096 | 46.96 | 5400 | 0.5062 | 0.4708 |
| 0.2038 | 49.57 | 5700 | 0.5217 | 0.4698 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
patrickvonplaten/xls-r-300-sv-cv7
|
patrickvonplaten
| 2022-03-23T18:27:10Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"sv",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- sv-SE
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_7_0
- robust-speech-event
- sv
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Swedish - CV7 - v2
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: sv-SE
metrics:
- name: Test WER
type: wer
value: 15.99
- name: Test CER
type: cer
value: 5.2
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: sv
metrics:
- name: Test WER
type: wer
value: 24.41
- name: Test CER
type: cer
value: 11.88
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLS-R-300M - Swedish - CV7 - v2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - SV-SE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2604
- Wer: 0.2334
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 1
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
See Tensorboard
### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test`
```bash
python eval.py --model_id patrickvonplaten/xls-r-300-sv-cv7 --dataset mozilla-foundation/common_voice_7_0 --config sv-SE --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id patrickvonplaten/xls-r-300-sv-cv7 --dataset speech-recognition-community-v2/dev_data --config sv --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
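The dev-data command above streams long audio through the pipeline in 5-second chunks with a 1-second stride of overlapping context on each side. A simplified sketch of how many windows that produces for a clip (assumed, simplified chunking arithmetic; the actual pipeline may handle boundaries differently):

```python
# Simplified sketch of chunked long-audio inference as invoked above
# (--chunk_length_s 5.0 --stride_length_s 1.0): each window advances by
# chunk length minus the left/right strides, keeping overlapping context.
import math

def num_chunks(audio_s: float, chunk_s: float = 5.0, stride_s: float = 1.0) -> int:
    step = chunk_s - 2 * stride_s  # effective advance per window
    if audio_s <= chunk_s:
        return 1
    return 1 + math.ceil((audio_s - chunk_s) / step)

print(num_chunks(4.0))   # fits in one window -> 1
print(num_chunks(20.0))  # -> 6 overlapping windows
```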
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.18.4.dev0
- Tokenizers 0.10.3
|
infinitejoy/wav2vec2-large-xls-r-300m-abkhaz-cv8
|
infinitejoy
| 2022-03-23T18:27:00Z | 8 | 2 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ab",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- ab
license: apache-2.0
tags:
- ab
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- model_for_talk
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: XLS-R-300M - Abkhaz
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: ab
metrics:
- name: Test WER
type: wer
value: 27.6
- name: Test CER
type: cer
value: 4.577
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-abkhaz-cv8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1614
- Wer: 0.2907
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 4000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.2881 | 4.26 | 4000 | 0.3764 | 0.6461 |
| 1.0767 | 8.53 | 8000 | 0.2657 | 0.5164 |
| 0.9841 | 12.79 | 12000 | 0.2330 | 0.4445 |
| 0.9274 | 17.06 | 16000 | 0.2134 | 0.3929 |
| 0.8781 | 21.32 | 20000 | 0.1945 | 0.3886 |
| 0.8381 | 25.59 | 24000 | 0.1840 | 0.3737 |
| 0.8054 | 29.85 | 28000 | 0.1756 | 0.3523 |
| 0.7763 | 34.12 | 32000 | 0.1745 | 0.3299 |
| 0.7474 | 38.38 | 36000 | 0.1677 | 0.3074 |
| 0.7298 | 42.64 | 40000 | 0.1649 | 0.2963 |
| 0.7125 | 46.91 | 44000 | 0.1617 | 0.2931 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
infinitejoy/wav2vec2-large-xls-r-300m-odia
|
infinitejoy
| 2022-03-23T18:26:57Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"or",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- or
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_7_0
- or
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Odia
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: or
metrics:
- name: Test WER
type: wer
value: 97.91
- name: Test CER
type: cer
value: 247.09
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-odia
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - OR dataset.
It achieves the following results on the evaluation set:
```
python eval.py --model_id ./ --dataset mozilla-foundation/common_voice_7_0 --config or --split test --log_outputs
```
- WER: 1.0921052631578947
- CER: 2.5547945205479454
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
Training machine details
- Platform: Linux-5.11.0-37-generic-x86_64-with-glibc2.10
- CPU cores: 60
- Python version: 3.8.8
- PyTorch version: 1.10.1+cu102
- GPU is visible: True
- Transformers version: 4.16.0.dev0
- Datasets version: 1.17.1.dev0
- soundfile version: 0.10.3
Training script
```bash
python run_speech_recognition_ctc.py \
--dataset_name="mozilla-foundation/common_voice_7_0" \
--model_name_or_path="facebook/wav2vec2-xls-r-300m" \
--dataset_config_name="or" \
--output_dir="./wav2vec2-large-xls-r-300m-odia" \
--overwrite_output_dir \
--num_train_epochs="120" \
--per_device_train_batch_size="16" \
--per_device_eval_batch_size="16" \
--gradient_accumulation_steps="2" \
--learning_rate="7.5e-5" \
--warmup_steps="500" \
--length_column_name="input_length" \
--evaluation_strategy="steps" \
--text_column_name="sentence" \
--chars_to_ignore , ? . ! \- \; \: \" “ % ‘ ” � — \’ … \– \' \’ \– \
--save_steps="500" \
--eval_steps="500" \
--logging_steps="100" \
--layerdrop="0.0" \
--activation_dropout="0.1" \
--save_total_limit="3" \
--freeze_feature_encoder \
--feat_proj_dropout="0.0" \
--mask_time_prob="0.75" \
--mask_time_length="10" \
--mask_feature_prob="0.25" \
--mask_feature_length="64" \
--gradient_checkpointing \
--use_auth_token \
--fp16 \
--group_by_length \
--do_train --do_eval \
--push_to_hub
```
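The `--chars_to_ignore` list in the script above is turned into a regex character class and stripped from transcriptions before training. A minimal sketch of that normalization step (illustrative; the character list below is a de-duplicated subset of the one passed to the script):

```python
# Sketch of the text normalization implied by --chars_to_ignore above:
# join the listed characters into a regex class, strip them, and lowercase.
import re

chars_to_ignore = [",", "?", ".", "!", "-", ";", ":", '"', "“", "%", "‘", "”", "�", "—", "’", "…", "–", "'"]
chars_to_ignore_regex = f"[{re.escape(''.join(chars_to_ignore))}]"

def normalize(sentence: str) -> str:
    return re.sub(chars_to_ignore_regex, "", sentence).lower()

print(normalize('Hello, World! "Test"'))  # -> "hello world test"
```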
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 120.0
- mixed_precision_training: Native AMP
### Training results
| | eval_loss | eval_wer | eval_runtime | eval_samples_per_second | eval_steps_per_second | epoch |
|---:|------------:|-----------:|---------------:|--------------------------:|------------------------:|--------:|
| 0 | 3.35224 | 0.998972 | 5.0475 | 22.189 | 1.387 | 29.41 |
| 1 | 1.33679 | 0.938335 | 5.0633 | 22.12 | 1.382 | 58.82 |
| 2 | 0.737202 | 0.957862 | 5.0913 | 21.998 | 1.375 | 88.24 |
| 3 | 0.658212 | 0.96814 | 5.0953 | 21.981 | 1.374 | 117.65 |
| 4 | 0.658 | 0.9712 | 5.0953 | 22.115 | 1.382 | 120 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
anuragshas/wav2vec2-large-xls-r-300m-bg
|
anuragshas
| 2022-03-23T18:26:55Z | 228 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"bg",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- bg
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: XLS-R-300M - Bulgarian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: bg
metrics:
- name: Test WER
type: wer
value: 21.195
- name: Test CER
type: cer
value: 4.786
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: bg
metrics:
- name: Test WER
type: wer
value: 32.667
- name: Test CER
type: cer
value: 12.452
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: bg
metrics:
- name: Test WER
type: wer
value: 31.03
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLS-R-300M - Bulgarian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - BG dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2473
- Wer: 0.3002
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.1589 | 3.48 | 400 | 3.0830 | 1.0 |
| 2.8921 | 6.96 | 800 | 2.6605 | 0.9982 |
| 1.3049 | 10.43 | 1200 | 0.5069 | 0.5707 |
| 1.1349 | 13.91 | 1600 | 0.4159 | 0.5041 |
| 1.0686 | 17.39 | 2000 | 0.3815 | 0.4746 |
| 0.999 | 20.87 | 2400 | 0.3541 | 0.4343 |
| 0.945 | 24.35 | 2800 | 0.3266 | 0.4132 |
| 0.9058 | 27.83 | 3200 | 0.2969 | 0.3771 |
| 0.8672 | 31.3 | 3600 | 0.2802 | 0.3553 |
| 0.8313 | 34.78 | 4000 | 0.2662 | 0.3380 |
| 0.8068 | 38.26 | 4400 | 0.2528 | 0.3181 |
| 0.7796 | 41.74 | 4800 | 0.2537 | 0.3073 |
| 0.7621 | 45.22 | 5200 | 0.2503 | 0.3036 |
| 0.7611 | 48.7 | 5600 | 0.2477 | 0.2991 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id anuragshas/wav2vec2-large-xls-r-300m-bg --dataset mozilla-foundation/common_voice_8_0 --config bg --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id anuragshas/wav2vec2-large-xls-r-300m-bg --dataset speech-recognition-community-v2/dev_data --config bg --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "anuragshas/wav2vec2-large-xls-r-300m-bg"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "bg", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
# => "и надутият му ката блоонкурем взе да се събира"
```
### Eval results on Common Voice 8 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 30.07 | 21.195 |
|
mpoyraz/wav2vec2-xls-r-300m-cv6-turkish
|
mpoyraz
| 2022-03-23T18:26:27Z | 9 | 7 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"hf-asr-leaderboard",
"robust-speech-event",
"tr",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
language: tr
tags:
- automatic-speech-recognition
- common_voice
- hf-asr-leaderboard
- robust-speech-event
- tr
datasets:
- common_voice
model-index:
- name: mpoyraz/wav2vec2-xls-r-300m-cv6-turkish
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 6.1
type: common_voice
args: tr
metrics:
- name: Test WER
type: wer
value: 8.83
- name: Test CER
type: cer
value: 2.37
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: tr
metrics:
- name: Test WER
type: wer
value: 32.81
- name: Test CER
type: cer
value: 11.22
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: tr
metrics:
- name: Test WER
type: wer
value: 34.86
---
# wav2vec2-xls-r-300m-cv6-turkish
## Model description
This ASR model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on Turkish language.
## Training and evaluation data
The following datasets were used for finetuning:
- [Common Voice 6.1 TR](https://huggingface.co/datasets/common_voice): the `validated` split, excluding the `test` split, was used for training.
- [MediaSpeech](https://www.openslr.org/108/)
## Training procedure
To support both of the datasets above, custom pre-processing and loading steps were performed, using the [wav2vec2-turkish](https://github.com/mpoyraz/wav2vec2-turkish) repo for that purpose.
### Training hyperparameters
The following hyperparameters were used for fine-tuning:
- learning_rate 2e-4
- num_train_epochs 10
- warmup_steps 500
- freeze_feature_extractor
- mask_time_prob 0.1
- mask_feature_prob 0.1
- feat_proj_dropout 0.05
- attention_dropout 0.05
- final_dropout 0.1
- activation_dropout 0.05
- per_device_train_batch_size 8
- per_device_eval_batch_size 8
- gradient_accumulation_steps 8
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.1
- Datasets 1.18.3
- Tokenizers 0.10.3
## Language Model
The n-gram language model is trained on Turkish Wikipedia articles using KenLM; the [ngram-lm-wiki](https://github.com/mpoyraz/ngram-lm-wiki) repo was used to generate the ARPA LM and convert it into binary format.
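At its core, an n-gram LM stores counts of token n-grams and turns them into conditional probabilities; KenLM does this at corpus scale with Kneser-Ney smoothing and an efficient binary format. A toy bigram illustration (the corpus below is hypothetical):

```python
# Toy illustration of what an n-gram LM stores: bigram counts turned into
# conditional probabilities P(next | previous). Unsmoothed, for clarity.
from collections import Counter

def bigram_probs(tokens):
    unigrams = Counter(tokens[:-1])
    bigrams = Counter(zip(tokens, tokens[1:]))
    return {bg: c / unigrams[bg[0]] for bg, c in bigrams.items()}

corpus = "bir iki bir iki üç".split()
probs = bigram_probs(corpus)
print(probs[("bir", "iki")])  # -> 1.0 ("bir" is always followed by "iki")
```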
## Evaluation Commands
Please install [unicode_tr](https://pypi.org/project/unicode_tr/) package before running evaluation. It is used for Turkish text processing.
1. To evaluate on `common_voice` with split `test`
```bash
python eval.py --model_id mpoyraz/wav2vec2-xls-r-300m-cv6-turkish --dataset common_voice --config tr --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id mpoyraz/wav2vec2-xls-r-300m-cv6-turkish --dataset speech-recognition-community-v2/dev_data --config tr --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
## Evaluation results
| Dataset | WER | CER |
|---|---|---|
|Common Voice 6.1 TR test split| 8.83 | 2.37 |
|Speech Recognition Community dev data| 32.81 | 11.22 |
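The WER and CER figures above are edit-distance ratios over words and characters respectively; a minimal pure-Python sketch illustrates the computation (the actual scoring script additionally applies Turkish-specific normalization via `unicode_tr`):

```python
def edit_distance(ref, hyp):
    # Classic Levenshtein distance via dynamic programming, O(len(ref) * len(hyp)).
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (r != h))
    return dp[-1]

def wer(ref, hyp):
    # Word Error Rate: edit distance over word sequences, normalized by reference length.
    ref_words, hyp_words = ref.split(), hyp.split()
    return edit_distance(ref_words, hyp_words) / len(ref_words)

def cer(ref, hyp):
    # Character Error Rate: edit distance over characters, normalized by reference length.
    return edit_distance(ref, hyp) / len(ref)

# One substitution ("dünya" -> "dunya") plus one deletion ("nasılsın") over 3 words.
print(wer("merhaba dünya nasılsın", "merhaba dunya"))  # ≈ 0.667
```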
---

**cahya/wav2vec2-base-turkish** — by cahya · automatic-speech-recognition (transformers, PyTorch) · created 2022-03-02 · last modified 2022-03-23 · 57 downloads · 4 likes
---
language:
- tr
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
- tr
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: Wav2Vec2 Base Turkish by Cahya
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 6.1
type: mozilla-foundation/common_voice_7_0
args: tr
metrics:
- name: Test WER
type: wer
value: 9.437
- name: Test CER
type: cer
value: 3.325
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: tr
metrics:
- name: Test WER
type: wer
value: 8.147
- name: Test CER
type: cer
value: 2.802
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: tr
metrics:
- name: Test WER
type: wer
value: 28.011
- name: Test CER
type: cer
value: 10.66
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: tr
metrics:
- name: Test WER
type: wer
value: 33.62
---
# wav2vec2-base-turkish
This model is a fine-tuned version of [cahya/wav2vec2-base-turkish-artificial-cv](https://huggingface.co/cahya/wav2vec2-base-turkish-artificial-cv) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation sets:
| | Dataset | WER | CER |
|---|-------------------------------|---------|----------|
| 1 | Common Voice 6.1 | 9.437 | 3.325 |
| 2 | Common Voice 7.0 | 8.147 | 2.802 |
| 3 | Common Voice 8.0 | 8.335 | 2.336 |
| 4 | Speech Recognition Community | 28.011 | 10.66 |
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
The following datasets were used for finetuning:
- [Common Voice 7.0 TR](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0): the `train`, `validation` and `other` splits were used for training.
- [MediaSpeech](https://www.openslr.org/108/)
- [Magic Hub](https://magichub.com/)
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-06
- train_batch_size: 6
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 5.0
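Note that the reported `total_train_batch_size` is simply the product of the per-device batch size and the gradient-accumulation steps (assuming a single device); a small sketch of that bookkeeping, using the numbers from this card:

```python
import math

# Gradient accumulation: the optimizer steps once every N micro-batches,
# so the effective batch size multiplies accordingly.
per_device_train_batch_size = 6
gradient_accumulation_steps = 4
n_devices = 1  # assumption: single GPU

total_train_batch_size = per_device_train_batch_size * gradient_accumulation_steps * n_devices
print(total_train_batch_size)  # 24, matching the value reported above

def optimizer_steps_per_epoch(num_examples, per_device_bs, grad_accum):
    # Micro-batches per epoch, then one optimizer update per grad_accum micro-batches.
    micro_batches = math.ceil(num_examples / per_device_bs)
    return math.ceil(micro_batches / grad_accum)
```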
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1224 | 3.45 | 500 | 0.1641 | 0.1396 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
---

**huggingtweets/lucca_dev** — by huggingtweets · text-generation (transformers, GPT-2) · created 2022-03-23 · last modified 2022-03-23 · 3 downloads
---
language: en
thumbnail: http://www.huggingtweets.com/lucca_dev/1648059357338/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1475818681628246021/sf4z2j_9_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Lucca</div>
<div style="text-align: center; font-size: 14px;">@lucca_dev</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Lucca.
| Data | Lucca |
| --- | --- |
| Tweets downloaded | 2525 |
| Retweets | 17 |
| Short tweets | 100 |
| Tweets kept | 2408 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3bq4zgob/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @lucca_dev's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2kuasht1) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2kuasht1/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/lucca_dev')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
---

**shahrukhx01/gbert-hasoc-german-2019** — by shahrukhx01 · text-classification (transformers, BERT) · created 2022-03-23 · last modified 2022-03-23 · 5 downloads
---
language: "de"
tags:
- hate-speech-classification
widget:
- text: "Das ist der absolute Gipfel! Lächerliche 2,5 Jahre Haft für einen extremst sadistischen Mord. Ich fasse es nicht. Das sitzt der Killer auf der linken Arschbacke ab und lacht sich dabei kaputt. Unsere Justiz ist nur noch zum Kotzen."
---
# Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("shahrukhx01/gbert-hasoc-german-2019")
model = AutoModelForSequenceClassification.from_pretrained("shahrukhx01/gbert-hasoc-german-2019")
```
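For inference, the classifier head returns logits that are converted to class probabilities with a softmax; a minimal sketch of that final step (the logit values below are hypothetical, and the label order is an assumption):

```python
import math

def softmax(logits):
    # Subtract the max logit for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits from the classifier head for one input sentence.
probs = softmax([2.0, -1.0])
print(probs)  # roughly [0.953, 0.047]
```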
# Dataset
```bibtext
@inproceedings{10.1145/3368567.3368584,
author = {Mandl, Thomas and Modha, Sandip and Majumder, Prasenjit and Patel, Daksh and Dave, Mohana and Mandlia, Chintak and Patel, Aditya},
title = {Overview of the HASOC Track at FIRE 2019: Hate Speech and Offensive Content Identification in Indo-European Languages},
year = {2019},
isbn = {9781450377508},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3368567.3368584},
doi = {10.1145/3368567.3368584},
abstract = {The identification of Hate Speech in Social Media is of great importance and receives much attention in the text classification community. There is a huge demand for research for languages other than English. The HASOC track intends to stimulate development in Hate Speech for Hindi, German and English. Three datasets were developed from Twitter and Facebook and made available. Binary classification and more fine-grained subclasses were offered in 3 subtasks. For all subtasks, 321 experiments were submitted. The approaches used most often were LSTM networks processing word embedding input. The performance of the best system for identification of Hate Speech for English, Hindi, and German was a Marco-F1 score of 0.78, 0.81 and 0.61, respectively.},
booktitle = {Proceedings of the 11th Forum for Information Retrieval Evaluation},
pages = {14–17},
numpages = {4},
keywords = {Text Classification, Hate Speech, Evaluation, Deep Learning},
location = {Kolkata, India},
series = {FIRE '19}
}
```
License: MIT
---

**huggingtweets/metakuna** — by huggingtweets · text-generation (transformers, GPT-2) · created 2022-03-23 · last modified 2022-03-23 · 3 downloads
---
language: en
thumbnail: http://www.huggingtweets.com/metakuna/1648057688512/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1493720826935398408/hB4ndxdj_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">metakuna (8/100 blog posts)</div>
<div style="text-align: center; font-size: 14px;">@metakuna</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from metakuna (8/100 blog posts).
| Data | metakuna (8/100 blog posts) |
| --- | --- |
| Tweets downloaded | 3235 |
| Retweets | 242 |
| Short tweets | 524 |
| Tweets kept | 2469 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/9uv1luph/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @metakuna's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1k1mb79h) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1k1mb79h/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/metakuna')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
---

**muhammedshihebi/bert-base-multilingual-cased-squad** — by muhammedshihebi · question-answering (transformers, TensorFlow, BERT) · created 2022-03-23 · last modified 2022-03-23 · 3 downloads
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: bert-base-multilingual-cased-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-squad
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5271
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18600, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
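The `PolynomialDecay` schedule above with `power=1.0` reduces to a linear ramp from the initial learning rate down to `end_learning_rate` over `decay_steps`; a minimal sketch of that formula, using the values from this card:

```python
def polynomial_decay(step, initial_lr=2e-05, end_lr=0.0, decay_steps=18600, power=1.0):
    # Clamp to decay_steps, then interpolate: lr = (init - end) * (1 - t)**power + end.
    step = min(step, decay_steps)
    frac = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * frac ** power + end_lr

print(polynomial_decay(0))      # 2e-05 at the start of training
print(polynomial_decay(9300))   # 1e-05 halfway through
print(polynomial_decay(18600))  # 0.0 at the end
```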
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 1.1256 | 0 |
| 0.7252 | 1 |
| 0.5271 | 2 |
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
huggingtweets/pierreavdb
|
huggingtweets
| 2022-03-23T16:50:02Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-23T16:43:47Z |
---
language: en
thumbnail: http://www.huggingtweets.com/pierreavdb/1648054135143/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1479780096483512323/LmKFSR3X_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Pierre</div>
<div style="text-align: center; font-size: 14px;">@pierreavdb</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Pierre.
| Data | Pierre |
| --- | --- |
| Tweets downloaded | 1064 |
| Retweets | 172 |
| Short tweets | 133 |
| Tweets kept | 759 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/21bimkjn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @pierreavdb's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ji40nkbv) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ji40nkbv/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/pierreavdb')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)