modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars)
---|---|---|---|---|---|---|---|---|---|---|---|---|---
aditeyabaral/distilbert-hinglish-small | beafbb09337e24a069f3c83013bcd085efa97a73 | 2021-10-11T18:22:44.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | aditeyabaral | null | aditeyabaral/distilbert-hinglish-small | 2 | null | transformers | 23,600 | Entry not found |
aditeyabaral/sentencetransformer-bert-base-cased | 48d1c8f4623cabe13269c126cdebaf27a65fdeb5 | 2021-10-21T09:50:09.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | aditeyabaral | null | aditeyabaral/sentencetransformer-bert-base-cased | 2 | null | sentence-transformers | 23,601 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# aditeyabaral/sentencetransformer-bert-base-cased
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
This model is easy to use once you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('aditeyabaral/sentencetransformer-bert-base-cased')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model as follows: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-bert-base-cased')
model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-bert-base-cased')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-bert-base-cased)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 9234 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
aditeyabaral/sentencetransformer-distilbert-base-cased | 486771141a031b9c62691b1ed03e901358b3d6e6 | 2021-10-21T22:30:29.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | aditeyabaral | null | aditeyabaral/sentencetransformer-distilbert-base-cased | 2 | null | sentence-transformers | 23,602 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# aditeyabaral/sentencetransformer-distilbert-base-cased
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
This model is easy to use once you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('aditeyabaral/sentencetransformer-distilbert-base-cased')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model as follows: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-distilbert-base-cased')
model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-distilbert-base-cased')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-distilbert-base-cased)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 9234 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
aditeyabaral/sentencetransformer-roberta-base | c18c7bd01ab0720db0b86c80c3b7209e134301a9 | 2021-10-21T18:03:26.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | aditeyabaral | null | aditeyabaral/sentencetransformer-roberta-base | 2 | null | sentence-transformers | 23,603 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# aditeyabaral/sentencetransformer-roberta-base
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
This model is easy to use once you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('aditeyabaral/sentencetransformer-roberta-base')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model as follows: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-roberta-base')
model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-roberta-base')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-roberta-base)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 9234 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
aditi2222/paragus_models | d0403754d7565373102e26d9e2da61b10b24f701 | 2021-11-30T08:46:57.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | aditi2222 | null | aditi2222/paragus_models | 2 | null | transformers | 23,604 | Entry not found |
ahanadeb/wav2vec2-large-indian-instrument-emotion-classification-v1 | 0aa75595afb3bbcb19c391429c604eb08b7d00f0 | 2021-11-13T16:13:45.000Z | [
"pytorch",
"wav2vec2",
"transformers"
] | null | false | ahanadeb | null | ahanadeb/wav2vec2-large-indian-instrument-emotion-classification-v1 | 2 | null | transformers | 23,605 | Entry not found |
ahmedattia143/roberta_squadv1_base | 455c5064bdbba95aff3d578cdd33deebe9f1d39e | 2021-05-30T11:42:11.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | ahmedattia143 | null | ahmedattia143/roberta_squadv1_base | 2 | null | transformers | 23,606 | Entry not found |
ainize/gpt2-rnm-with-only-rick | fdbb94fe3ba36778bfe1b8e0867e74aed9583f35 | 2021-05-21T12:06:44.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | ainize | null | ainize/gpt2-rnm-with-only-rick | 2 | null | transformers | 23,607 | ### Model information
Fine-tuning data 1: https://www.kaggle.com/andradaolteanu/rickmorty-scripts
Base model: e-tony/gpt2-rnm
Epoch: 1
Train runtime: 3.4982 secs
Loss: 3.0894
Training notebook: [Colab](https://colab.research.google.com/drive/1RawVxulLETFicWMY0YANUdP-H-e7Eeyc)
### Teachable NLP
Training a GPT-2 model normally requires writing code and GPU resources, but here you can easily fine-tune one and get an API to use the model for free.
Teachable NLP: [Teachable NLP](https://ainize.ai/teachable-nlp)
Tutorial: [Tutorial](https://forum.ainetwork.ai/t/teachable-nlp-how-to-use-teachable-nlp/65?utm_source=community&utm_medium=huggingface&utm_campaign=model&utm_content=teachable%20nlp)
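The card stops at training details; below is a minimal sketch for sampling from the model with the standard `transformers` causal-LM API (the prompt and sampling parameters are illustrative assumptions, not part of the original card):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("ainize/gpt2-rnm-with-only-rick")
model = AutoModelForCausalLM.from_pretrained("ainize/gpt2-rnm-with-only-rick")

# Hedged example: the prompt format is an assumption based on the Rick and Morty script data.
inputs = tokenizer("Rick: ", return_tensors="pt")
outputs = model.generate(**inputs, max_length=60, do_sample=True, top_p=0.95, top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```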
|
airKlizz/distilbart-12-6-multi-combine-wiki-news | 4958a58dff152ce70ea10168ed4668fb92f4c26f | 2020-08-21T07:35:00.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | airKlizz | null | airKlizz/distilbart-12-6-multi-combine-wiki-news | 2 | null | transformers | 23,608 | Entry not found |
airKlizz/mt5-base-germeval21-toxic | 6756768841d413e65a19184a471c70d88023fcb9 | 2021-07-12T15:40:06.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | airKlizz | null | airKlizz/mt5-base-germeval21-toxic | 2 | null | transformers | 23,609 | Entry not found |
airKlizz/mt5-small-wikinewssum-test | f1ba5ce743f8b3ac67fda44ea1e418b123fba939 | 2021-12-16T16:18:08.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | airKlizz | null | airKlizz/mt5-small-wikinewssum-test | 2 | null | transformers | 23,610 | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-wikinewssum-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-wikinewssum-test
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9354
- Rouge1: 6.8433
- Rouge2: 2.5498
- Rougel: 5.6114
- Rougelsum: 6.353
## Model description
More information needed
## Intended uses & limitations
More information needed
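The card itself gives no usage snippet; a minimal sketch, assuming the standard `transformers` summarization pipeline (the input article is a placeholder):
```python
from transformers import pipeline

# Hedged example: the model repo name comes from this card; the article text is a placeholder.
summarizer = pipeline("summarization", model="airKlizz/mt5-small-wikinewssum-test")
article = "Replace this with the news article you want to summarize."
print(summarizer(article, max_length=128, truncation=True)[0]["summary_text"])
```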
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log | 1.0 | 661 | 3.2810 | 6.4161 | 2.403 | 5.3674 | 6.0329 |
| No log | 2.0 | 1322 | 3.1515 | 6.9291 | 2.6826 | 5.6839 | 6.4359 |
| No log | 3.0 | 1983 | 3.0565 | 6.7939 | 2.6113 | 5.6133 | 6.3126 |
| No log | 4.0 | 2644 | 2.9815 | 6.0279 | 2.1637 | 4.9892 | 5.5962 |
| No log | 5.0 | 3305 | 2.9645 | 6.3926 | 2.339 | 5.2716 | 5.9443 |
| 3.9937 | 6.0 | 3966 | 2.9476 | 6.4739 | 2.3615 | 5.3473 | 6.0089 |
| 3.9937 | 7.0 | 4627 | 2.9405 | 6.615 | 2.4309 | 5.4493 | 6.1445 |
| 3.9937 | 8.0 | 5288 | 2.9354 | 6.8433 | 2.5498 | 5.6114 | 6.353 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
airesearchth/wangchanberta-base-wiki-20210520-news-spm | fd4c28e90832c3b1450e7480bcc253f34c26b151 | 2021-07-16T00:22:43.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | airesearchth | null | airesearchth/wangchanberta-base-wiki-20210520-news-spm | 2 | null | transformers | 23,611 | Entry not found |
airesearchth/wangchanberta-base-wiki-20210520-spm | 07b9e704858a480a37c8fd770cd2474e46cafe67 | 2021-05-31T22:49:34.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | airesearchth | null | airesearchth/wangchanberta-base-wiki-20210520-spm | 2 | null | transformers | 23,612 | Entry not found |
ajanco/yi_roberta_oscar | 2204b607be641220bc8fe0ab168d563914d8a671 | 2022-01-18T03:14:14.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | ajanco | null | ajanco/yi_roberta_oscar | 2 | null | transformers | 23,613 | Entry not found |
akadriu/wav2vec2-large-xlsr-53-AL-colab | 1178d64741428f7897161f42e3ec3c6382d95ada | 2022-01-20T16:01:15.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | akadriu | null | akadriu/wav2vec2-large-xlsr-53-AL-colab | 2 | null | transformers | 23,614 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-53-AL-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-AL-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5358
- Wer: 0.5443
## Model description
More information needed
## Intended uses & limitations
More information needed
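No usage example is given in the card; a minimal sketch, assuming the `transformers` automatic-speech-recognition pipeline and a 16 kHz mono recording (the file path is a placeholder):
```python
from transformers import pipeline

# Hedged example: "sample.wav" is a placeholder path; audio should be sampled at 16 kHz.
asr = pipeline("automatic-speech-recognition", model="akadriu/wav2vec2-large-xlsr-53-AL-colab")
print(asr("sample.wav")["text"])
```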
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.9391 | 0.4 | 400 | 2.0722 | 0.9249 |
| 0.8775 | 0.8 | 800 | 1.7171 | 0.6778 |
| 0.665 | 1.2 | 1200 | 1.7250 | 0.6235 |
| 0.6135 | 1.6 | 1600 | 1.4021 | 0.5847 |
| 0.5795 | 2.0 | 2000 | 1.6191 | 0.5696 |
| 0.5031 | 2.4 | 2400 | 1.6767 | 0.5586 |
| 0.4933 | 2.8 | 2800 | 1.5358 | 0.5443 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
akadriu/wav2vec2-large-xlsr-53-AL | 31235b9024117950df586df887fb51a50c1871cb | 2022-02-17T00:15:49.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | akadriu | null | akadriu/wav2vec2-large-xlsr-53-AL | 2 | null | transformers | 23,615 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-53-AL
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-AL
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2712
- Wer: 0.6940
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.073 | 8.0 | 200 | 1.0990 | 0.7002 |
| 0.0561 | 16.0 | 400 | 1.1455 | 0.6805 |
| 0.0378 | 24.0 | 600 | 1.2712 | 0.6940 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
akahana/indonesia-roberta-small | 023c29690af6b3e7ef56908c7fb357150676cd8d | 2021-12-08T04:51:44.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | akahana | null | akahana/indonesia-roberta-small | 2 | null | transformers | 23,616 | Entry not found |
akhooli/gpt2-ar-poetry-aub_m | 1cdcf9c07eff565937f6bdeb2e290693daf16d47 | 2021-05-21T12:29:23.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | akhooli | null | akhooli/gpt2-ar-poetry-aub_m | 2 | null | transformers | 23,617 | Entry not found |
akr/distilbert-base-uncased-finetuned-squad | d8470f63912a1e632e76664beed7a20cedeb7bf8 | 2021-10-12T10:39:46.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | akr | null | akr/distilbert-base-uncased-finetuned-squad | 2 | null | transformers | 23,618 | Entry not found |
akshaychaudhary/distilbert-base-uncased-finetuned-cloud2-ner | 8271c60ade6c53a51ca88fd97251f0175250c6fb | 2022-02-14T17:33:18.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | akshaychaudhary | null | akshaychaudhary/distilbert-base-uncased-finetuned-cloud2-ner | 2 | null | transformers | 23,619 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-cloud2-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cloud2-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8866
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.8453
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 162 | 0.7804 | 0.0 | 0.0 | 0.0 | 0.8447 |
| No log | 2.0 | 324 | 0.8303 | 0.0 | 0.0 | 0.0 | 0.8465 |
| No log | 3.0 | 486 | 0.8866 | 0.0 | 0.0 | 0.0 | 0.8453 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
alenusch/par_cls_bert | 6c87f5103e5b37c457acf1f57686664deecbb321 | 2021-06-25T12:20:42.000Z | [
"pytorch",
"jax",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | alenusch | null | alenusch/par_cls_bert | 2 | null | transformers | 23,620 | ## Classifier to check whether two sequences are paraphrases
Trained on top of DeepPavlov's ruBERT.
Use it as follows:
```python
import torch
import torch.nn as nn
import os
import copy
import random
import numpy as np
import pandas as pd
from torch.utils.data import DataLoader, Dataset
from torch.cuda.amp import autocast, GradScaler
from tqdm import tqdm
from transformers import AutoTokenizer, AutoModel, AdamW, get_linear_schedule_with_warmup
from transformers.file_utils import (
    cached_path,
    hf_bucket_url,
    is_remote_url,
)

# Download the fine-tuned classifier checkpoint from the Hugging Face Hub
archive_file = hf_bucket_url(
    "alenusch/par_cls_bert",
    filename="rubert-base-cased_lr_2e-05_val_loss_0.66143_ep_4.pt",
    revision=None,
    mirror=None,
)
resolved_archive_file = cached_path(
    archive_file,
    cache_dir=None,
    force_download=False,
    proxies=None,
    resume_download=False,
    local_files_only=False,
)
os.environ["TOKENIZERS_PARALLELISM"] = "false"


class SentencePairClassifier(nn.Module):
    def __init__(self, bert_model):
        super(SentencePairClassifier, self).__init__()
        self.bert_layer = AutoModel.from_pretrained(bert_model)
        self.cls_layer = nn.Linear(768, 1)
        self.dropout = nn.Dropout(p=0.1)

    @autocast()
    def forward(self, input_ids, attn_masks, token_type_ids):
        cont_reps, pooler_output = self.bert_layer(input_ids, attn_masks, token_type_ids, return_dict=False)
        logits = self.cls_layer(self.dropout(pooler_output))
        return logits


class CustomDataset(Dataset):
    def __init__(self, data, maxlen, bert_model):
        self.data = data
        self.tokenizer = AutoTokenizer.from_pretrained(bert_model)
        self.maxlen = maxlen
        self.targets = False

    def __len__(self):
        return len(self.data)

    def __getitem__(self, index):
        sent1 = str(self.data[index][0])
        sent2 = str(self.data[index][1])
        encoded_pair = self.tokenizer(sent1, sent2,
                                      padding='max_length',  # pad to max_length
                                      truncation=True,       # truncate to max_length
                                      max_length=self.maxlen,
                                      return_tensors='pt')   # return torch.Tensor objects
        token_ids = encoded_pair['input_ids'].squeeze(0)             # tensor of token ids
        attn_masks = encoded_pair['attention_mask'].squeeze(0)       # binary tensor: "0" for padded values, "1" otherwise
        token_type_ids = encoded_pair['token_type_ids'].squeeze(0)   # "0" for 1st-sentence tokens, "1" for 2nd-sentence tokens
        return token_ids, attn_masks, token_type_ids


def get_probs_from_logits(logits):
    probs = torch.sigmoid(logits.unsqueeze(-1))
    return probs.detach().cpu().numpy()


def test_prediction(net, device, dataloader, with_labels=False):
    net.eval()
    probs_all = []
    with torch.no_grad():
        for seq, attn_masks, token_type_ids in tqdm(dataloader):
            seq, attn_masks, token_type_ids = seq.to(device), attn_masks.to(device), token_type_ids.to(device)
            logits = net(seq, attn_masks, token_type_ids)
            probs = get_probs_from_logits(logits.squeeze(-1)).squeeze(-1)
            probs_all += probs.tolist()
    return probs_all


device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
cls_model = SentencePairClassifier(bert_model="alenusch/par_cls_bert")
if torch.cuda.device_count() > 1:
    cls_model = nn.DataParallel(cls_model)  # wrap for multi-GPU inference
cls_model.load_state_dict(torch.load(resolved_archive_file))
cls_model.to(device)

variants = [["sentence1", "sentence2"]]
test_set = CustomDataset(variants, maxlen=512, bert_model="alenusch/par_cls_bert")
test_loader = DataLoader(test_set, batch_size=16, num_workers=5)
res = test_prediction(net=cls_model, device=device, dataloader=test_loader, with_labels=False)
``` |
alex6095/SanctiMoly-Bart | c5c5d2e12a6af357517e9eedbd751baa5a0569d8 | 2021-12-12T21:44:57.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alex6095 | null | alex6095/SanctiMoly-Bart | 2 | null | transformers | 23,621 | Entry not found |
alex6095/SanctiMolyOH_Cpu | 2e944455aa4f44cb50115bcd46d2f6b9e62cb145 | 2021-12-13T01:25:55.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"transformers"
] | feature-extraction | false | alex6095 | null | alex6095/SanctiMolyOH_Cpu | 2 | null | transformers | 23,622 | alex6095/SanctiMolyOH_Cpu |
alexaapo/greek_legal_bert_v1 | 9cbd5f6be8b2ab598592052043fbfa3087062945 | 2021-12-01T11:00:04.000Z | [
"pytorch",
"transformers"
] | null | false | alexaapo | null | alexaapo/greek_legal_bert_v1 | 2 | null | transformers | 23,623 | Entry not found |
alexcruz0202/t5_boolq | 41c8ebc056e561db08f2f161e46553052980f7da | 2021-06-23T11:06:38.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alexcruz0202 | null | alexcruz0202/t5_boolq | 2 | null | transformers | 23,624 | t5_boolq
|
alexrfelicio/t5-small-finetuned-en-to-de | aebde3d93248055935a5682e9296e63ca39ea100 | 2021-11-30T23:07:35.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wmt16",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | alexrfelicio | null | alexrfelicio/t5-small-finetuned-en-to-de | 2 | null | transformers | 23,625 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
model-index:
- name: t5-small-finetuned-en-to-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-de
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
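The card lists only training details; a minimal usage sketch, assuming the usual `translate English to German:` task prefix inherited from t5-small (the input sentence is illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("alexrfelicio/t5-small-finetuned-en-to-de")
model = AutoModelForSeq2SeqLM.from_pretrained("alexrfelicio/t5-small-finetuned-en-to-de")

# Hedged example: the task prefix follows the t5-small convention and may differ from the fine-tuning setup.
text = "translate English to German: The house is wonderful."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```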
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 136 | 1.7446 | 9.0564 | 17.8356 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
alexrfelicio/t5-small-finetuned128-en-to-de | 5892177f8cd7a38ee966f3d3154f0a12d4c9e4b1 | 2021-12-02T21:27:03.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wmt16",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | alexrfelicio | null | alexrfelicio/t5-small-finetuned128-en-to-de | 2 | null | transformers | 23,626 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
model-index:
- name: t5-small-finetuned128-en-to-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned128-en-to-de
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
alexrfelicio/t5-small-finetuned32-en-to-de | b05c59545ec5f6805c6d73e16e8a76a821c1b8d2 | 2021-12-02T22:39:31.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wmt16",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | alexrfelicio | null | alexrfelicio/t5-small-finetuned32-en-to-de | 2 | null | transformers | 23,627 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
model-index:
- name: t5-small-finetuned32-en-to-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned32-en-to-de
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 136 | 1.4226 | 21.9554 | 17.8089 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
alexyalunin/my-awesome-model | 4b12f4c1e8e114ee401508c0e0fad98a1f082da2 | 2022-01-24T16:09:54.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | alexyalunin | null | alexyalunin/my-awesome-model | 2 | null | transformers | 23,628 | # RuBio
for paper: dsdfsfsdf |
algomuffin/dummy | 7aeaf9b87b6f8960b8b60f236ca373e5af6e7a7f | 2021-11-17T10:27:54.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | algomuffin | null | algomuffin/dummy | 2 | null | transformers | 23,629 | Entry not found |
ali2066/finetuned_token_2e-05_16_02_2022-01_30_30 | 848816697b23a8268b3d7712fd3db26575f6f584 | 2022-02-16T00:32:55.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | ali2066 | null | ali2066/finetuned_token_2e-05_16_02_2022-01_30_30 | 2 | null | transformers | 23,630 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_2e-05_16_02_2022-01_30_30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_2e-05_16_02_2022-01_30_30
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1748
- Precision: 0.3384
- Recall: 0.3492
- F1: 0.3437
- Accuracy: 0.9442
## Model description
More information needed
## Intended uses & limitations
More information needed
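The card does not state the label set; a minimal sketch, assuming the `transformers` token-classification pipeline (the example sentence and aggregation strategy are illustrative):
```python
from transformers import pipeline

# Hedged example: the labels depend on the (unspecified) fine-tuning dataset.
token_classifier = pipeline(
    "token-classification",
    model="ali2066/finetuned_token_2e-05_16_02_2022-01_30_30",
    aggregation_strategy="simple",
)
print(token_classifier("Replace this with a sentence to tag."))
```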
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3180 | 0.0985 | 0.1648 | 0.1233 | 0.8643 |
| No log | 2.0 | 76 | 0.2667 | 0.1962 | 0.2698 | 0.2272 | 0.8926 |
| No log | 3.0 | 114 | 0.2374 | 0.2268 | 0.3005 | 0.2585 | 0.9062 |
| No log | 4.0 | 152 | 0.2305 | 0.2248 | 0.3247 | 0.2657 | 0.9099 |
| No log | 5.0 | 190 | 0.2289 | 0.2322 | 0.3166 | 0.2679 | 0.9102 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_token_2e-05_16_02_2022-01_55_54 | 537decb45d1a09398bfeccfb64dd9701eb18fec6 | 2022-02-16T01:18:01.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | ali2066 | null | ali2066/finetuned_token_2e-05_16_02_2022-01_55_54 | 2 | null | transformers | 23,631 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_2e-05_16_02_2022-01_55_54
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_2e-05_16_02_2022-01_55_54
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1722
- Precision: 0.3378
- Recall: 0.3615
- F1: 0.3492
- Accuracy: 0.9448
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3781 | 0.1512 | 0.2671 | 0.1931 | 0.8216 |
| No log | 2.0 | 76 | 0.3020 | 0.1748 | 0.2938 | 0.2192 | 0.8551 |
| No log | 3.0 | 114 | 0.2723 | 0.1938 | 0.3339 | 0.2452 | 0.8663 |
| No log | 4.0 | 152 | 0.2574 | 0.2119 | 0.3506 | 0.2642 | 0.8727 |
| No log | 5.0 | 190 | 0.2521 | 0.2121 | 0.3623 | 0.2676 | 0.8756 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_token_2e-05_16_02_2022-14_18_19 | f8c8e8cd6c36588cde75a0a935011317474b1d76 | 2022-02-16T13:20:37.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | ali2066 | null | ali2066/finetuned_token_2e-05_16_02_2022-14_18_19 | 2 | null | transformers | 23,632 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_2e-05_16_02_2022-14_18_19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_2e-05_16_02_2022-14_18_19
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1722
- Precision: 0.3378
- Recall: 0.3615
- F1: 0.3492
- Accuracy: 0.9448
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3781 | 0.1512 | 0.2671 | 0.1931 | 0.8216 |
| No log | 2.0 | 76 | 0.3020 | 0.1748 | 0.2938 | 0.2192 | 0.8551 |
| No log | 3.0 | 114 | 0.2723 | 0.1938 | 0.3339 | 0.2452 | 0.8663 |
| No log | 4.0 | 152 | 0.2574 | 0.2119 | 0.3506 | 0.2642 | 0.8727 |
| No log | 5.0 | 190 | 0.2521 | 0.2121 | 0.3623 | 0.2676 | 0.8756 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_token_2e-05_16_02_2022-14_20_41 | fb9ab591918bd292eace0cd94b59abccbc6b98fd | 2022-02-16T13:23:18.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | ali2066 | null | ali2066/finetuned_token_2e-05_16_02_2022-14_20_41 | 2 | null | transformers | 23,633 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_2e-05_16_02_2022-14_20_41
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_2e-05_16_02_2022-14_20_41
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1722
- Precision: 0.3378
- Recall: 0.3615
- F1: 0.3492
- Accuracy: 0.9448
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3781 | 0.1512 | 0.2671 | 0.1931 | 0.8216 |
| No log | 2.0 | 76 | 0.3020 | 0.1748 | 0.2938 | 0.2192 | 0.8551 |
| No log | 3.0 | 114 | 0.2723 | 0.1938 | 0.3339 | 0.2452 | 0.8663 |
| No log | 4.0 | 152 | 0.2574 | 0.2119 | 0.3506 | 0.2642 | 0.8727 |
| No log | 5.0 | 190 | 0.2521 | 0.2121 | 0.3623 | 0.2676 | 0.8756 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_token_2e-05_16_02_2022-14_32_56 | 411356fcfb56281aca463726c6086ab4aad4b865 | 2022-02-16T13:35:14.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | ali2066 | null | ali2066/finetuned_token_2e-05_16_02_2022-14_32_56 | 2 | null | transformers | 23,634 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_2e-05_16_02_2022-14_32_56
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_2e-05_16_02_2022-14_32_56
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1722
- Precision: 0.3378
- Recall: 0.3615
- F1: 0.3492
- Accuracy: 0.9448
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3781 | 0.1512 | 0.2671 | 0.1931 | 0.8216 |
| No log | 2.0 | 76 | 0.3020 | 0.1748 | 0.2938 | 0.2192 | 0.8551 |
| No log | 3.0 | 114 | 0.2723 | 0.1938 | 0.3339 | 0.2452 | 0.8663 |
| No log | 4.0 | 152 | 0.2574 | 0.2119 | 0.3506 | 0.2642 | 0.8727 |
| No log | 5.0 | 190 | 0.2521 | 0.2121 | 0.3623 | 0.2676 | 0.8756 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_token_3e-05_all_16_02_2022-16_19_24 | a5c5d5507e41b434d46900bf80d2c2c15be04e7e | 2022-02-16T15:22:34.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | ali2066 | null | ali2066/finetuned_token_3e-05_all_16_02_2022-16_19_24 | 2 | null | transformers | 23,635 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_3e-05_all_16_02_2022-16_19_24
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_3e-05_all_16_02_2022-16_19_24
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1630
- Precision: 0.3684
- Recall: 0.3714
- F1: 0.3699
- Accuracy: 0.9482
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3339 | 0.1075 | 0.2324 | 0.1470 | 0.8379 |
| No log | 2.0 | 76 | 0.3074 | 0.1589 | 0.2926 | 0.2060 | 0.8489 |
| No log | 3.0 | 114 | 0.2914 | 0.2142 | 0.3278 | 0.2591 | 0.8591 |
| No log | 4.0 | 152 | 0.2983 | 0.1951 | 0.3595 | 0.2529 | 0.8454 |
| No log | 5.0 | 190 | 0.2997 | 0.1851 | 0.3528 | 0.2428 | 0.8487 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_token_itr0_0.0002_all_16_02_2022-20_30_01 | 78813d5e0c3aebbfce270aa69f6b20af054a65ec | 2022-02-16T19:32:19.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | ali2066 | null | ali2066/finetuned_token_itr0_0.0002_all_16_02_2022-20_30_01 | 2 | null | transformers | 23,636 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_itr0_0.0002_all_16_02_2022-20_30_01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_itr0_0.0002_all_16_02_2022-20_30_01
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1577
- Precision: 0.4469
- Recall: 0.5280
- F1: 0.4841
- Accuracy: 0.9513
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3553 | 0.1068 | 0.0810 | 0.0922 | 0.8412 |
| No log | 2.0 | 76 | 0.2812 | 0.2790 | 0.4017 | 0.3293 | 0.8684 |
| No log | 3.0 | 114 | 0.2793 | 0.3086 | 0.4586 | 0.3689 | 0.8747 |
| No log | 4.0 | 152 | 0.2766 | 0.3057 | 0.4190 | 0.3535 | 0.8763 |
| No log | 5.0 | 190 | 0.2805 | 0.2699 | 0.4845 | 0.3467 | 0.8793 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_token_itr0_0.0002_all_16_02_2022-20_45_27 | dbcbef0af3f2fd67997b7d412dab3d5ab489b51c | 2022-02-16T19:47:45.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | ali2066 | null | ali2066/finetuned_token_itr0_0.0002_all_16_02_2022-20_45_27 | 2 | null | transformers | 23,637 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_itr0_0.0002_all_16_02_2022-20_45_27
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_itr0_0.0002_all_16_02_2022-20_45_27
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1500
- Precision: 0.4739
- Recall: 0.5250
- F1: 0.4981
- Accuracy: 0.9551
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3183 | 0.2024 | 0.2909 | 0.2387 | 0.8499 |
| No log | 2.0 | 76 | 0.3092 | 0.2909 | 0.4181 | 0.3431 | 0.8548 |
| No log | 3.0 | 114 | 0.2928 | 0.2923 | 0.4855 | 0.3650 | 0.8647 |
| No log | 4.0 | 152 | 0.3098 | 0.2832 | 0.4605 | 0.3507 | 0.8641 |
| No log | 5.0 | 190 | 0.3120 | 0.2470 | 0.4374 | 0.3157 | 0.8654 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_token_itr0_2e-05_editorials_16_02_2022-21_05_05 | b1774a42cd1ebb7239d97cfb03b3c60b7db84a62 | 2022-02-16T20:06:17.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | ali2066 | null | ali2066/finetuned_token_itr0_2e-05_editorials_16_02_2022-21_05_05 | 2 | null | transformers | 23,638 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_itr0_2e-05_editorials_16_02_2022-21_05_05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_itr0_2e-05_editorials_16_02_2022-21_05_05
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1114
- Precision: 0.0637
- Recall: 0.0080
- F1: 0.0141
- Accuracy: 0.9707
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 15 | 0.0921 | 0.08 | 0.0110 | 0.0193 | 0.9801 |
| No log | 2.0 | 30 | 0.0816 | 0.08 | 0.0110 | 0.0193 | 0.9801 |
| No log | 3.0 | 45 | 0.0781 | 0.08 | 0.0110 | 0.0193 | 0.9801 |
| No log | 4.0 | 60 | 0.0746 | 0.08 | 0.0110 | 0.0193 | 0.9801 |
| No log | 5.0 | 75 | 0.0737 | 0.08 | 0.0110 | 0.0193 | 0.9801 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
alina1997/marian_en_de_test | 6c89e736d001cee8e163c83601be8eef36e4faa1 | 2022-02-28T13:31:36.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"de",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | alina1997 | null | alina1997/marian_en_de_test | 2 | null | transformers | 23,639 | ---
language:
- en
- de
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: trained_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trained_model
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-de](https://huggingface.co/Helsinki-NLP/opus-mt-en-de) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4519
- Bleu: 27.6198
- Gen Len: 106.0
## Model description
More information needed
## Intended uses & limitations
More information needed
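As a starting point, a minimal English-to-German translation sketch with the standard MarianMT classes is shown below; the beam size and maximum length are illustrative choices, not values from the training run.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "alina1997/marian_en_de_test"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate a single English sentence to German
batch = tokenizer(["The weather is nice today."], return_tensors="pt", padding=True)
generated = model.generate(**batch, num_beams=4, max_length=128)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```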
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 3 | 1.4519 | 27.6198 | 106.0 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.8.0
- Datasets 1.18.3
- Tokenizers 0.10.3
|
alireza7/ARMAN-MSR-persian-base-tebyan | 4ce9e5393269dc57c12a0bb9b39645fe16923c5c | 2021-09-29T19:16:58.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/ARMAN-MSR-persian-base-tebyan | 2 | null | transformers | 23,640 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-MSR-persian-base-wiki-summary | f116b1862282392b130a4dd7b64b7f55b67ddc84 | 2021-09-29T19:17:13.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/ARMAN-MSR-persian-base-wiki-summary | 2 | null | transformers | 23,641 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-SH-persian-base-parsinlu-qqp | b5f506fafe45c13d70b14441c43a5c477ddeac70 | 2021-09-29T19:18:12.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/ARMAN-SH-persian-base-parsinlu-qqp | 2 | null | transformers | 23,642 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-SH-persian-base-parsinlu-sentiment-movie | 01dd1df0535125edabb44af0d76a343f1025c4b4 | 2021-09-29T19:18:54.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/ARMAN-SH-persian-base-parsinlu-sentiment-movie | 2 | null | transformers | 23,643 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-SS-100-persian-base-PN-summary | eda31842040689a2b5d419770bde879dc4c02c1e | 2021-09-29T19:20:30.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/ARMAN-SS-100-persian-base-PN-summary | 2 | null | transformers | 23,644 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-SS-100-persian-base-parsinlu-sentiment-food | 868a23c9a65ed703b25e51dd6b9c211c0eac6c34 | 2021-09-29T19:20:50.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/ARMAN-SS-100-persian-base-parsinlu-sentiment-food | 2 | null | transformers | 23,645 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-SS-100-persian-base-parsinlu-sentiment-movie | e25783e2545b5dfa4fe61f2286617551ec7e5c18 | 2021-09-29T19:20:57.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/ARMAN-SS-100-persian-base-parsinlu-sentiment-movie | 2 | null | transformers | 23,646 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-SS-100-persian-base-perkey-title | 7dcb61730f1d68a28a8f164b1d957b8dcbee6b09 | 2021-09-29T19:21:19.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/ARMAN-SS-100-persian-base-perkey-title | 2 | null | transformers | 23,647 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-SS-80-persian-base-PN-summary | e9c73a5d3c56f95fba0bc89e4606850eff04771b | 2021-09-29T19:22:43.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/ARMAN-SS-80-persian-base-PN-summary | 2 | null | transformers | 23,648 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/PEGASUS-persian-base-PN-summary | 4416e06275504351c90119cd49f631e2a673a4fd | 2021-09-29T19:25:02.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/PEGASUS-persian-base-PN-summary | 2 | null | transformers | 23,649 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/PEGASUS-persian-base-parsinlu-sentiment-food | c3e53456c9e1e8f3a4eaf763254aa166aa105053 | 2021-09-29T19:25:24.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/PEGASUS-persian-base-parsinlu-sentiment-food | 2 | null | transformers | 23,650 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/PEGASUS-persian-base-perkey-title | ba960c96582a48184e85f98faab5fb50062ecb1d | 2021-09-29T19:25:52.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/PEGASUS-persian-base-perkey-title | 2 | null | transformers | 23,651 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alistvt/bert-base-uncased-pretrain-finetuned-coqa-falt | 1882b55239bf933929fdd9202923e18dba392997 | 2022-01-25T19:05:51.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | alistvt | null | alistvt/bert-base-uncased-pretrain-finetuned-coqa-falt | 2 | null | transformers | 23,652 | ---
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-pretrain-finetuned-coqa-falt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-pretrain-finetuned-coqa-falt
This model is a fine-tuned version of [alistvt/bert-base-uncased-pretrained-mlm-coqa-stories](https://huggingface.co/alistvt/bert-base-uncased-pretrained-mlm-coqa-stories) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8125
## Model description
More information needed
## Intended uses & limitations
More information needed
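Since the checkpoint carries a standard extractive QA head, a plain question-answering call should work. Note that CoQA is a conversational benchmark, so for multi-turn use the caller would have to fold earlier turns into the question or context; the exact format used during training is not documented. A minimal sketch:

```python
from transformers import pipeline

# Sketch: plain extractive QA on a single-turn question; the example texts are illustrative.
qa = pipeline("question-answering", model="alistvt/bert-base-uncased-pretrain-finetuned-coqa-falt")
result = qa(
    question="Where did the author grow up?",
    context="The author spent her childhood in a small coastal town before moving to the capital.",
)
print(result["answer"], result["score"])
```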
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.4039 | 0.29 | 2000 | 3.0921 |
| 3.1438 | 0.59 | 4000 | 2.8826 |
| 3.0252 | 0.88 | 6000 | 2.7885 |
| 2.7112 | 1.18 | 8000 | 2.7720 |
| 2.6703 | 1.47 | 10000 | 2.7581 |
| 2.6432 | 1.77 | 12000 | 2.7316 |
| 2.385 | 2.06 | 14000 | 2.7798 |
| 2.3314 | 2.36 | 16000 | 2.7836 |
| 2.3433 | 2.65 | 18000 | 2.7650 |
| 2.3604 | 2.95 | 20000 | 2.7585 |
| 2.2232 | 3.24 | 22000 | 2.8120 |
| 2.2094 | 3.53 | 24000 | 2.7945 |
| 2.2306 | 3.83 | 26000 | 2.8125 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
|
alistvt/bert-base-uncased-pretrained-mlm-coqa-stories | 9168ae7f0230ffd72e66654d327fcfc6e1a1787b | 2022-01-21T13:17:32.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | alistvt | null | alistvt/bert-base-uncased-pretrained-mlm-coqa-stories | 2 | null | transformers | 23,653 | ---
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-pretrained-mlm-coqa-stories
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-pretrained-mlm-coqa-stories
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8310
## Model description
More information needed
## Intended uses & limitations
More information needed
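Because the checkpoint was trained with a masked-language-modelling objective, it can be queried through the fill-mask pipeline. A minimal sketch (the example sentence is illustrative):

```python
from transformers import pipeline

# Sketch: BERT-style masking with the [MASK] token.
fill = pipeline("fill-mask", model="alistvt/bert-base-uncased-pretrained-mlm-coqa-stories")
for pred in fill("The children listened to the [MASK] before going to bed."):
    print(pred["token_str"], round(pred["score"], 3))
```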
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0573 | 1.0 | 2479 | 1.8805 |
| 1.9517 | 2.0 | 4958 | 1.8377 |
| 1.9048 | 3.0 | 7437 | 1.8310 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
alokmatta/wav2vec2-large-xlsr-53-sw | 125fde65ac78845894cc4b67f57ea21c807ce371 | 2021-07-05T19:12:57.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"sw",
"dataset:ALFFA,Gamayun & IWSLT",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | alokmatta | null | alokmatta/wav2vec2-large-xlsr-53-sw | 2 | null | transformers | 23,654 | ---
language: sw
datasets:
- ALFFA,Gamayun & IWSLT
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Swahili XLSR-53 Wav2Vec2.0 Large
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: ALFFA sw
args: sw
metrics:
- name: Test WER
type: wer
value: WIP
---
# Wav2Vec2-Large-XLSR-53-Swahili
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Swahili using the following datasets:
- [ALFFA](http://www.openslr.org/25/),
- [Gamayun](https://gamayun.translatorswb.org/download/gamayun-5k-english-swahili/)
- [IWSLT](https://iwslt.org/2021/low-resource)
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
processor = Wav2Vec2Processor.from_pretrained("alokmatta/wav2vec2-large-xlsr-53-sw")
model = Wav2Vec2ForCTC.from_pretrained("alokmatta/wav2vec2-large-xlsr-53-sw").to("cuda")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def load_file_to_data(file):
batch = {}
speech, _ = torchaudio.load(file)
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
return batch
def predict(data):
features = processor(data["speech"], sampling_rate=data["sampling_rate"], padding=True, return_tensors="pt")
input_values = features.input_values.to("cuda")
attention_mask = features.attention_mask.to("cuda")
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
return processor.batch_decode(pred_ids)
predict(load_file_to_data('./demo.wav'))
```
**Test Result**: 40 %
## Training
The script used for training can be found [here](https://colab.research.google.com/drive/1_RL6TQv_Yiu_xbWXu4ycbzdCdXCqEQYU?usp=sharing) |
alvinkobe/DialoGPT-medium-steve_biko | 7f79c2559fe0c8828418d73b1b1dc3c5dc0c163a | 2021-09-09T03:03:43.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | alvinkobe | null | alvinkobe/DialoGPT-medium-steve_biko | 2 | null | transformers | 23,655 | ---
tags:
- conversational
---
# Frank Talks DialoGPT Model |
am-shb/bert-base-multilingual-uncased-finetuned | 5bc7fc8380ddbd5868ea216b02f5f97563fd990b | 2022-02-06T00:05:59.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | am-shb | null | am-shb/bert-base-multilingual-uncased-finetuned | 2 | null | transformers | 23,656 | ---
tags:
- generated_from_trainer
model-index:
- name: '57463134'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 57463134
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6137
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 16
- seed: 1337
- gradient_accumulation_steps: 2
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.11.2
- Pytorch 1.10.0
- Datasets 1.8.0
- Tokenizers 0.10.3
|
am-shb/bert-base-multilingual-uncased-pretrained | d60a23f6ee4025ee60c6c6e05bc61808d5745c5d | 2022-02-10T14:49:27.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | am-shb | null | am-shb/bert-base-multilingual-uncased-pretrained | 2 | null | transformers | 23,657 | ---
tags:
- generated_from_trainer
model-index:
- name: bert-base-multilingual-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-uncased
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2198
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 1337
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.11.2
- Pytorch 1.10.0
- Datasets 1.8.0
- Tokenizers 0.10.3
|
ami-wav2vec2/ami-dummy | cca00b7bce3dc086774fe25891a7be1530258247 | 2021-10-12T16:08:48.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"en",
"transformers",
"ami",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ami-wav2vec2 | null | ami-wav2vec2/ami-dummy | 2 | null | transformers | 23,658 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- ami
- generated_from_trainer
model-index:
- name: ami-dummy
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ami-dummy
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the AMI-IHM dataset.
It achieves the following results on the evaluation set:
- Loss: 94.6519
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| No log | 2.46 | 15 | 102.9094 | 1.0 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.1
- Datasets 1.12.2.dev0
- Tokenizers 0.10.3
|
ami-wav2vec2/wav2vec2-base-ami_multi-nithin3 | e1135bad5257e58dd236e4938ab0020152a55a44 | 2021-10-22T08:56:04.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"en",
"transformers",
"ami",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ami-wav2vec2 | null | ami-wav2vec2/wav2vec2-base-ami_multi-nithin3 | 2 | null | transformers | 23,659 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- ami
- generated_from_trainer
model-index:
- name: wav2vec2-base-ami_multi-nithin3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-ami_multi-nithin3
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the AMI-IHM dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9953
- Wer: 0.4577
## Model description
More information needed
## Intended uses & limitations
More information needed
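A minimal transcription sketch is shown below, assuming a 16 kHz mono recording; the audio file name is a placeholder, not data shipped with the model.

```python
import torch
import soundfile as sf
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "ami-wav2vec2/wav2vec2-base-ami_multi-nithin3"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, sampling_rate = sf.read("meeting_segment.wav")  # placeholder path, must be 16 kHz mono
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```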
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 2.7412 | 1.07 | 2500 | 2.9356 | 0.9925 |
| 2.0224 | 2.13 | 5000 | 2.0951 | 0.5730 |
| 1.9017 | 3.2 | 7500 | 1.8801 | 0.5070 |
| 1.8356 | 4.27 | 10000 | 2.0530 | 0.4778 |
| 1.8002 | 5.33 | 12500 | 1.9465 | 0.4620 |
| 1.7424 | 6.4 | 15000 | 1.9561 | 0.4529 |
| 1.7406 | 7.47 | 17500 | 1.9190 | 0.4477 |
| 1.7046 | 8.53 | 20000 | 1.8138 | 0.4402 |
| 1.6784 | 9.6 | 22500 | 1.8275 | 0.4385 |
| 1.6657 | 10.67 | 25000 | 1.7603 | 0.4307 |
| 1.6618 | 11.73 | 27500 | 1.7269 | 0.4249 |
| 1.6037 | 12.8 | 30000 | 1.7071 | 0.4272 |
| 1.639 | 13.87 | 32500 | 1.6559 | 0.4234 |
| 1.614 | 14.93 | 35000 | 1.7535 | 0.4237 |
| 1.6044 | 16.0 | 37500 | 1.7945 | 0.4200 |
| 1.5685 | 17.06 | 40000 | 1.7135 | 0.4170 |
| 1.6194 | 18.13 | 42500 | 1.8712 | 0.4161 |
| 1.566 | 19.2 | 45000 | 1.8720 | 0.4176 |
| 1.5572 | 20.26 | 47500 | 1.7077 | 0.4135 |
| 1.5715 | 21.33 | 50000 | 1.7538 | 0.4143 |
| 1.5595 | 22.4 | 52500 | 1.8135 | 0.4133 |
| 1.5465 | 23.46 | 55000 | 1.8119 | 0.4134 |
| 1.5369 | 24.53 | 57500 | 1.7565 | 0.4086 |
| 1.5392 | 25.6 | 60000 | 1.7323 | 0.4101 |
| 1.5383 | 26.66 | 62500 | 1.7516 | 0.4097 |
| 1.5266 | 27.73 | 65000 | 1.7961 | 0.4104 |
| 1.525 | 28.8 | 67500 | 1.7472 | 0.4094 |
| 1.5779 | 29.86 | 70000 | 1.7600 | 0.4096 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1
- Datasets 1.12.2.dev0
- Tokenizers 0.10.3
|
ami-wav2vec2/wav2vec2-base-ami_multi-nithin5 | bf9b4bca9d7f0983a131603af9561a7493f46a76 | 2021-11-04T05:22:40.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"en",
"transformers",
"ami",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ami-wav2vec2 | null | ami-wav2vec2/wav2vec2-base-ami_multi-nithin5 | 2 | null | transformers | 23,660 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- ami
- generated_from_trainer
model-index:
- name: wav2vec2-base-ami_multi-nithin5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-ami_multi-nithin5
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the AMI-IHM dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5392
- Wer: 0.4481
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.8839 | 2.16 | 2500 | 1.7172 | 0.6249 |
| 1.5323 | 4.31 | 5000 | 1.4628 | 0.4930 |
| 1.4325 | 6.47 | 7500 | 1.3856 | 0.4495 |
| 1.3461 | 8.62 | 10000 | 1.3695 | 0.4350 |
| 1.3249 | 10.78 | 12500 | 1.3640 | 0.4294 |
| 1.3288 | 12.93 | 15000 | 1.3429 | 0.4220 |
| 1.2503 | 15.09 | 17500 | 1.3325 | 0.4171 |
| 1.2587 | 17.24 | 20000 | 1.3201 | 0.4108 |
| 1.2135 | 19.4 | 22500 | 1.3329 | 0.4083 |
| 1.2154 | 21.55 | 25000 | 1.3341 | 0.4057 |
| 1.2162 | 23.71 | 27500 | 1.3291 | 0.4046 |
| 1.2062 | 25.86 | 30000 | 1.3305 | 0.4031 |
| 1.1853 | 28.02 | 32500 | 1.3299 | 0.4023 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1
- Datasets 1.12.2.dev0
- Tokenizers 0.10.3
|
ami-wav2vec2/wav2vec2-base-ami_single-vumichien2 | cbb08f9a68af481e90faa22a4f785ad713c3ffb3 | 2021-10-23T00:46:39.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"en",
"transformers",
"ami",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ami-wav2vec2 | null | ami-wav2vec2/wav2vec2-base-ami_single-vumichien2 | 2 | null | transformers | 23,661 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- ami
- generated_from_trainer
model-index:
- name: wav2vec2-base-ami_single-vumichien2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-ami_single-vumichien2
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the AMI-IHM dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:---:|
| 0.0 | 0.53 | 2500 | nan | 1.0 |
| 0.0 | 1.06 | 5000 | nan | 1.0 |
| 0.0 | 1.59 | 7500 | nan | 1.0 |
| 0.0 | 2.12 | 10000 | nan | 1.0 |
| 0.0 | 2.64 | 12500 | nan | 1.0 |
| 0.0 | 3.17 | 15000 | nan | 1.0 |
| 0.0 | 3.7 | 17500 | nan | 1.0 |
| 0.0 | 4.23 | 20000 | nan | 1.0 |
| 0.0 | 4.76 | 22500 | nan | 1.0 |
| 0.0 | 5.29 | 25000 | nan | 1.0 |
| 0.0 | 5.82 | 27500 | nan | 1.0 |
| 0.0 | 6.35 | 30000 | nan | 1.0 |
| 0.0 | 6.87 | 32500 | nan | 1.0 |
| 0.0 | 7.4 | 35000 | nan | 1.0 |
| 0.0 | 7.93 | 37500 | nan | 1.0 |
| 0.0 | 8.46 | 40000 | nan | 1.0 |
| 0.0 | 8.99 | 42500 | nan | 1.0 |
| 0.0 | 9.52 | 45000 | nan | 1.0 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1
- Datasets 1.13.3
- Tokenizers 0.10.3
|
ami-wav2vec2/wav2vec2-large-lv60-ami_multi-nithin8 | 781e9487d4027810a7622606ffe28144dc2c2013 | 2021-11-29T08:20:01.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"en",
"transformers",
"ami",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ami-wav2vec2 | null | ami-wav2vec2/wav2vec2-large-lv60-ami_multi-nithin8 | 2 | null | transformers | 23,662 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- ami
- generated_from_trainer
model-index:
- name: wav2vec2-large-lv60-ami_multi-nithin8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-lv60-ami_multi-nithin8
This model is a fine-tuned version of [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) on the AMI-IHM dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4945
- Wer: 0.4291
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 40.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.336 | 2.16 | 2500 | 1.2807 | 0.4097 |
| 1.216 | 4.31 | 5000 | 1.2406 | 0.3931 |
| 1.1353 | 6.47 | 7500 | 1.2145 | 0.3801 |
| 1.0674 | 8.62 | 10000 | 1.1930 | 0.3825 |
| 1.0223 | 10.78 | 12500 | 1.2283 | 0.3907 |
| 1.009 | 12.93 | 15000 | 1.2266 | 0.3810 |
| 0.8998 | 15.09 | 17500 | 1.2719 | 0.3839 |
| 0.8912 | 17.24 | 20000 | 1.2889 | 0.3867 |
| 0.8459 | 19.4 | 22500 | 1.3031 | 0.3941 |
| 0.8193 | 21.55 | 25000 | 1.3543 | 0.3862 |
| 0.8048 | 23.71 | 27500 | 1.3533 | 0.3858 |
| 0.7663 | 25.86 | 30000 | 1.3941 | 0.3993 |
| 0.7311 | 28.02 | 32500 | 1.4745 | 0.3937 |
| 0.716 | 30.17 | 35000 | 1.4788 | 0.3989 |
| 0.6868 | 32.33 | 37500 | 1.4966 | 0.3925 |
| 0.6558 | 34.48 | 40000 | 1.5457 | 0.3901 |
| 0.6473 | 36.64 | 42500 | 1.5662 | 0.3944 |
| 0.631 | 38.79 | 45000 | 1.5689 | 0.3956 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1
- Datasets 1.12.2.dev0
- Tokenizers 0.10.3
|
ami-wav2vec2/wav2vec2-large-lv60-ami_multi-tune_0.00005_8 | 2199af8410466e4bae712d90f991652cad2248f3 | 2021-11-18T02:36:47.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"en",
"transformers",
"ami",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ami-wav2vec2 | null | ami-wav2vec2/wav2vec2-large-lv60-ami_multi-tune_0.00005_8 | 2 | null | transformers | 23,663 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- ami
- generated_from_trainer
model-index:
- name: wav2vec2-large-lv60-ami_multi-tune_0.00005_8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-lv60-ami_multi-tune_0.00005_8
This model is a fine-tuned version of [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) on the AMI-IHM dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4987
- Wer: 0.4569
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 2.8603 | 0.86 | 1000 | 2.8296 | 1.0 |
| 1.4382 | 1.72 | 2000 | 1.4212 | 0.4776 |
| 1.3 | 2.59 | 3000 | 1.3231 | 0.4330 |
| 1.2322 | 3.45 | 4000 | 1.2824 | 0.4251 |
| 1.1741 | 4.31 | 5000 | 1.2740 | 0.4187 |
| 1.1268 | 5.17 | 6000 | 1.2600 | 0.4161 |
| 1.0911 | 6.03 | 7000 | 1.2624 | 0.4076 |
| 1.0701 | 6.9 | 8000 | 1.2607 | 0.4076 |
| 1.0426 | 7.76 | 9000 | 1.2629 | 0.4091 |
| 1.0273 | 8.62 | 10000 | 1.2596 | 0.4117 |
| 1.0294 | 9.48 | 11000 | 1.2663 | 0.4077 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1
- Datasets 1.12.2.dev0
- Tokenizers 0.10.3
|
ami-wav2vec2/wav2vec2-large-lv60-ami_multi-tune_0.0001_8 | 581454925f7711e643d94d7315d25b8e6ea814f3 | 2021-11-21T00:08:38.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"en",
"transformers",
"ami",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ami-wav2vec2 | null | ami-wav2vec2/wav2vec2-large-lv60-ami_multi-tune_0.0001_8 | 2 | null | transformers | 23,664 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- ami
- generated_from_trainer
model-index:
- name: wav2vec2-large-lv60-ami_multi-tune_0.0001_8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-lv60-ami_multi-tune_0.0001_8
This model is a fine-tuned version of [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) on the AMI-IHM dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5037
- Wer: 0.4342
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.5659 | 0.86 | 1000 | 1.4888 | 0.5013 |
| 1.2886 | 1.72 | 2000 | 1.2864 | 0.4171 |
| 1.1701 | 2.59 | 3000 | 1.2319 | 0.3958 |
| 1.108 | 3.45 | 4000 | 1.2009 | 0.4006 |
| 1.0407 | 4.31 | 5000 | 1.2137 | 0.3888 |
| 0.9785 | 5.17 | 6000 | 1.2017 | 0.3927 |
| 0.948 | 6.03 | 7000 | 1.2107 | 0.3952 |
| 0.9191 | 6.9 | 8000 | 1.2195 | 0.3867 |
| 0.8844 | 7.76 | 9000 | 1.2227 | 0.3901 |
| 0.8538 | 8.62 | 10000 | 1.2389 | 0.3968 |
| 0.854 | 9.48 | 11000 | 1.2514 | 0.3939 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1
- Datasets 1.12.2.dev0
- Tokenizers 0.10.3
|
amoghsgopadi/wav2vec2-large-xlsr-kn | 8fe9e06a881196a2f6d0a4104a1d46f67d4b9cad | 2021-07-05T19:21:53.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"kn",
"dataset:openslr",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | amoghsgopadi | null | amoghsgopadi/wav2vec2-large-xlsr-kn | 2 | null | transformers | 23,665 | ---
language: kn
datasets:
- openslr
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Large 53 Kannada by Amogh Gopadi
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: OpenSLR kn
type: openslr
metrics:
- name: Test WER
type: wer
value: 27.08
---
# Wav2Vec2-Large-XLSR-53-Kannada
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Kannada using the [OpenSLR SLR79](http://openslr.org/79/) dataset. When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows, assuming you have a dataset with Kannada `sentence` and `path` fields:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
# test_dataset = #TODO: WRITE YOUR CODE TO LOAD THE TEST DATASET. For a sample, see the Colab link in Training Section.
processor = Wav2Vec2Processor.from_pretrained("amoghsgopadi/wav2vec2-large-xlsr-kn")
model = Wav2Vec2ForCTC.from_pretrained("amoghsgopadi/wav2vec2-large-xlsr-kn")
resampler = torchaudio.transforms.Resample(48_000, 16_000)  # The original data has a 48 kHz sampling rate; change this to match your input.
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on 10% of the Kannada data on OpenSLR.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
# test_dataset = #TODO: WRITE YOUR CODE TO LOAD THE TEST DATASET. For a sample, see the Colab link in the Training section.
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("amoghsgopadi/wav2vec2-large-xlsr-kn")
model = Wav2Vec2ForCTC.from_pretrained("amoghsgopadi/wav2vec2-large-xlsr-kn")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\–\…]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"),
attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 27.08 %
## Training
90% of the OpenSLR Kannada dataset was used for training.
The colab notebook used for training can be found [here](https://colab.research.google.com/github/amoghgopadi/wav2vec2-xlsr-kannada/blob/main/Fine_Tune_XLSR_Wav2Vec2_on_Kannada_ASR.ipynb). |
anas-awadalla/bert-base-uncased-few-shot-k-128-finetuned-squad-seed-42 | 7cc0f5c499fa13860db50a57fd272a87ddd4aa83 | 2022-02-21T18:55:00.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/bert-base-uncased-few-shot-k-128-finetuned-squad-seed-42 | 2 | null | transformers | 23,666 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-few-shot-k-128-finetuned-squad-seed-42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-128-finetuned-squad-seed-42
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
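Keep in mind that this checkpoint was fine-tuned on only 128 labelled SQuAD examples (k=128), so it should be treated as a few-shot baseline rather than a production QA model. A minimal inference sketch (the question/context below are illustrative):

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="anas-awadalla/bert-base-uncased-few-shot-k-128-finetuned-squad-seed-42",
)
print(qa(
    question="How many labelled examples were used?",
    context="In the few-shot setting, the model was fine-tuned on 128 labelled SQuAD examples.",
))
```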
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
{'exact_match': 12.93282876064333, 'f1': 21.98821604201723}
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
anas-awadalla/bert-medium-finetuned-squad | 9097431eddefccad27b93d3e91550dd631a3c362 | 2022-01-24T01:10:28.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/bert-medium-finetuned-squad | 2 | null | transformers | 23,667 | Results:
{'exact_match': 76.82119205298014, 'f1': 84.69734248389383} |
andi611/distilbert-base-uncased-squad2-with-ner-mit-restaurant-with-neg-with-repeat | 57374c3a0d153e07225a3328fdb57560429c4e38 | 2021-08-23T05:38:50.000Z | [
"pytorch",
"distilbert",
"question-answering",
"en",
"dataset:squad_v2",
"dataset:mit_restaurant",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | question-answering | false | andi611 | null | andi611/distilbert-base-uncased-squad2-with-ner-mit-restaurant-with-neg-with-repeat | 2 | null | transformers | 23,668 | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- squad_v2
- mit_restaurant
model_index:
- name: distilbert-base-uncased-squad2-with-ner-mit-restaurant-with-neg-with-repeat
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: squad_v2
type: squad_v2
- task:
name: Token Classification
type: token-classification
dataset:
name: mit_restaurant
type: mit_restaurant
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-squad2-with-ner-mit-restaurant-with-neg-with-repeat
This model is a fine-tuned version of [twmkn9/distilbert-base-uncased-squad2](https://huggingface.co/twmkn9/distilbert-base-uncased-squad2) on the squad_v2 and the mit_restaurant datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
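The datasets suggest that restaurant-domain entities were cast as question-answer pairs on top of a SQuAD2-style model, but the question templates used during training are not documented, so the query below is only an assumption. A minimal sketch:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="andi611/distilbert-base-uncased-squad2-with-ner-mit-restaurant-with-neg-with-repeat",
)
print(qa(
    question="What cuisine is mentioned?",  # assumed template; not documented by the authors
    context="Book me a table for two at a cheap Italian place near downtown.",
    handle_impossible_answer=True,  # SQuAD2-style: allow a "no answer" prediction
))
```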
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
|
andi611/distilbert-base-uncased-squad2-with-ner-with-neg-with-multi-with-repeat | 26a4c79a59710cc9945924619cc77d9b9ac86c89 | 2021-08-11T17:03:38.000Z | [
"pytorch",
"distilbert",
"question-answering",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | question-answering | false | andi611 | null | andi611/distilbert-base-uncased-squad2-with-ner-with-neg-with-multi-with-repeat | 2 | null | transformers | 23,669 | ---
tags:
- generated_from_trainer
datasets:
- conll2003
model_index:
- name: distilbert-base-uncased-squad2-with-ner-with-neg-with-multi-with-repeat
results:
- task:
name: Question Answering
type: question-answering
dataset:
name: conll2003
type: conll2003
args: conll2003
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-squad2-with-ner-with-neg-with-multi-with-repeat
This model is a fine-tuned version of [twmkn9/distilbert-base-uncased-squad2](https://huggingface.co/twmkn9/distilbert-base-uncased-squad2) on the conll2003 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
|
andi611/distilbert-base-uncased-squad2-with-ner-with-neg-with-multi | c4b0cb24bdf480899c39ae6fd492f45f8e81ed66 | 2021-07-29T03:14:48.000Z | [
"pytorch",
"distilbert",
"question-answering",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | question-answering | false | andi611 | null | andi611/distilbert-base-uncased-squad2-with-ner-with-neg-with-multi | 2 | null | transformers | 23,670 | ---
tags:
- generated_from_trainer
datasets:
- conll2003
model_index:
- name: distilbert-base-uncased-squad2-with-ner-with-neg-with-multi
results:
- task:
name: Question Answering
type: question-answering
dataset:
name: conll2003
type: conll2003
args: conll2003
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-squad2-with-ner-with-neg-with-multi
This model is a fine-tuned version of [twmkn9/distilbert-base-uncased-squad2](https://huggingface.co/twmkn9/distilbert-base-uncased-squad2) on the conll2003 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
|
andi611/distilbert-base-uncased-squad2-with-ner-with-neg | 04f5e6c6f04a72b277b24afecd761f0a94b2fb0d | 2021-07-27T07:50:09.000Z | [
"pytorch",
"distilbert",
"question-answering",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | question-answering | false | andi611 | null | andi611/distilbert-base-uncased-squad2-with-ner-with-neg | 2 | null | transformers | 23,671 | ---
tags:
- generated_from_trainer
datasets:
- conll2003
model_index:
- name: distilbert-base-uncased-squad2-with-ner-with-neg
results:
- task:
name: Question Answering
type: question-answering
dataset:
name: conll2003
type: conll2003
args: conll2003
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-squad2-with-ner-with-neg
This model is a fine-tuned version of [twmkn9/distilbert-base-uncased-squad2](https://huggingface.co/twmkn9/distilbert-base-uncased-squad2) on the conll2003 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
|
andikarachman/DialoGPT-small-sheldon | 3cd1ac2ee5de069b33e2f57d0d9b54177887341c | 2021-12-11T14:51:37.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | andikarachman | null | andikarachman/DialoGPT-small-sheldon | 2 | null | transformers | 23,672 | ---
tags:
- conversational
---
# My Awesome Model
|
anduush/DialoGPT-small-Rick | b161edcd584b0ef06365ec3ecaca0e97814f4594 | 2021-08-27T07:25:40.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | anduush | null | anduush/DialoGPT-small-Rick | 2 | null | transformers | 23,673 | ---
tags:
- conversational
---
# Rick and Morty DialoGPT Model |
angiquer/twitterko-electra-base-generator | 8ae013ba1db21970c700dd84e81bb83203d9b46a | 2020-07-10T01:44:00.000Z | [
"pytorch",
"electra",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | angiquer | null | angiquer/twitterko-electra-base-generator | 2 | null | transformers | 23,674 | Entry not found |
anhtunguyen98/xlm-base-vi-en | f57b337cf56fb8cba52c1a29a626265dea95a307 | 2021-10-10T10:16:56.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | anhtunguyen98 | null | anhtunguyen98/xlm-base-vi-en | 2 | null | transformers | 23,675 | Entry not found |
aniltrkkn/wav2vec2-large-xlsr-53-turkish | 40506d7c277e7d8421f1e57ed8baee9584e901f5 | 2021-07-05T19:34:22.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | aniltrkkn | null | aniltrkkn/wav2vec2-large-xlsr-53-turkish | 2 | 0 | transformers | 23,676 | ---
language: tr
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Wav2Vec2-Large-XLSR-53-Turkish
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice tr
type: common_voice
args: tr
metrics:
- name: Test WER
type: wer
value: 17.46
---
# Wav2Vec2-Large-XLSR-53-Turkish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Turkish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from unicode_tr import unicode_tr
test_dataset = load_dataset("common_voice", "tr", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("aniltrkkn/wav2vec2-large-xlsr-53-turkish")
model = Wav2Vec2ForCTC.from_pretrained("aniltrkkn/wav2vec2-large-xlsr-53-turkish")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Turkish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "tr", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("aniltrkkn/wav2vec2-large-xlsr-53-turkish")
model = Wav2Vec2ForCTC.from_pretrained("aniltrkkn/wav2vec2-large-xlsr-53-turkish")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
\tbatch["sentence"] = str(unicode_tr(re.sub(chars_to_ignore_regex, "", batch["sentence"])).lower())
\tspeech_array, sampling_rate = torchaudio.load(batch["path"])
\tbatch["speech"] = resampler(speech_array).squeeze().numpy()
\treturn batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 17.46 %
## Training
The unicode_tr package is used to lower-case the sentences, since Python's built-in lower() does not handle Turkish casing (dotted/dotless i) correctly.
Since training data for Turkish is very limited, all of it is used with a K-Fold (k=5) training approach, and the best of the 5 resulting models is uploaded here. Training arguments:
```
--num_train_epochs="30" \
--per_device_train_batch_size="32" \
--evaluation_strategy="steps" \
--activation_dropout="0.055" \
--attention_dropout="0.094" \
--feat_proj_dropout="0.04" \
--hidden_dropout="0.047" \
--layerdrop="0.041" \
--learning_rate="2.34e-4" \
--mask_time_prob="0.082" \
--warmup_steps="250"
```
All trainings took ~20 hours with a GeForce RTX 3090 Graphics Card. |
anjulRajendraSharma/wavlm-base-libri-clean-100 | ab5465eea0c0f1579435e1ec50576054d4277576 | 2022-01-28T16:52:47.000Z | [
"pytorch",
"tensorboard",
"wavlm",
"automatic-speech-recognition",
"transformers",
"librispeech_asr",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | anjulRajendraSharma | null | anjulRajendraSharma/wavlm-base-libri-clean-100 | 2 | null | transformers | 23,677 | ---
tags:
- automatic-speech-recognition
- librispeech_asr
- generated_from_trainer
model-index:
- name: wavlm-libri-clean-100h-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wavlm-libri-clean-100h-base
This model is a fine-tuned version of [microsoft/wavlm-base](https://huggingface.co/microsoft/wavlm-base) on the LIBRISPEECH_ASR - CLEAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0955
- Wer: 0.0773
## Model description
More information needed
## Intended uses & limitations
More information needed
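A minimal transcription sketch using the Auto classes is shown below; the audio file name is a placeholder, and any 16 kHz mono recording works the same way.

```python
import torch
import soundfile as sf
from transformers import AutoModelForCTC, AutoProcessor

model_id = "anjulRajendraSharma/wavlm-base-libri-clean-100"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForCTC.from_pretrained(model_id)

speech, sampling_rate = sf.read("librispeech_sample.flac")  # placeholder path, 16 kHz mono
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```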
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.8664 | 0.17 | 300 | 2.8439 | 1.0 |
| 0.5009 | 0.34 | 600 | 0.2709 | 0.2162 |
| 0.2056 | 0.5 | 900 | 0.1934 | 0.1602 |
| 0.1648 | 0.67 | 1200 | 0.1576 | 0.1306 |
| 0.1922 | 0.84 | 1500 | 0.1358 | 0.1114 |
| 0.093 | 1.01 | 1800 | 0.1277 | 0.1035 |
| 0.0652 | 1.18 | 2100 | 0.1251 | 0.1005 |
| 0.0848 | 1.35 | 2400 | 0.1188 | 0.0964 |
| 0.0706 | 1.51 | 2700 | 0.1091 | 0.0905 |
| 0.0846 | 1.68 | 3000 | 0.1018 | 0.0840 |
| 0.0684 | 1.85 | 3300 | 0.0978 | 0.0809 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 1.18.0
- Tokenizers 0.10.3
|
anton-l/wav2vec2-large-xlsr-53-chuvash | cd5a3410b2d21900037043d333888a63b0cdabd3 | 2021-07-05T19:40:17.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"cv",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | anton-l | null | anton-l/wav2vec2-large-xlsr-53-chuvash | 2 | null | transformers | 23,678 | ---
language: cv
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Chuvash XLSR Wav2Vec2 Large 53 by Anton Lozhkov
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice cv
type: common_voice
args: cv
metrics:
- name: Test WER
type: wer
value: 40.01
---
# Wav2Vec2-Large-XLSR-53-Chuvash
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Chuvash using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "cv", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-chuvash")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-chuvash")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Chuvash test data of Common Voice.
```python
import torch
import torchaudio
import urllib.request
import tarfile
import pandas as pd
from tqdm.auto import tqdm
from datasets import load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
# Download the raw data instead of using HF datasets to save disk space
data_url = "https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/cv.tar.gz"
filestream = urllib.request.urlopen(data_url)
data_file = tarfile.open(fileobj=filestream, mode="r|gz")
data_file.extractall()
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-chuvash")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-chuvash")
model.to("cuda")
cv_test = pd.read_csv("cv-corpus-6.1-2020-12-11/cv/test.tsv", sep='\t')
clips_path = "cv-corpus-6.1-2020-12-11/cv/clips/"
def clean_sentence(sent):
sent = sent.lower()
# replace non-alpha characters with space
sent = "".join(ch if ch.isalpha() else " " for ch in sent)
# remove repeated spaces
sent = " ".join(sent.split())
return sent
targets = []
preds = []
for i, row in tqdm(cv_test.iterrows(), total=cv_test.shape[0]):
row["sentence"] = clean_sentence(row["sentence"])
speech_array, sampling_rate = torchaudio.load(clips_path + row["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
row["speech"] = resampler(speech_array).squeeze().numpy()
inputs = processor(row["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
targets.append(row["sentence"])
preds.append(processor.batch_decode(pred_ids)[0])
print("WER: {:2f}".format(100 * wer.compute(predictions=preds, references=targets)))
```
**Test Result**: 40.01 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training can be found [here](github.com)
|
anton-l/wav2vec2-large-xlsr-53-estonian | 71f1af393d576d4eedd4654e1edf27f3c0426609 | 2021-07-05T19:44:33.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"et",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | anton-l | null | anton-l/wav2vec2-large-xlsr-53-estonian | 2 | null | transformers | 23,679 | ---
language: et
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Estonian XLSR Wav2Vec2 Large 53 by Anton Lozhkov
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice et
type: common_voice
args: et
metrics:
- name: Test WER
type: wer
value: 30.74
---
# Wav2Vec2-Large-XLSR-53-Estonian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Estonian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "et", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-estonian")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-estonian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Estonian test data of Common Voice.
```python
import torch
import torchaudio
import urllib.request
import tarfile
import pandas as pd
from tqdm.auto import tqdm
from datasets import load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
# Download the raw data instead of using HF datasets to save disk space
data_url = "https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/et.tar.gz"
filestream = urllib.request.urlopen(data_url)
data_file = tarfile.open(fileobj=filestream, mode="r|gz")
data_file.extractall()
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-estonian")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-estonian")
model.to("cuda")
cv_test = pd.read_csv("cv-corpus-6.1-2020-12-11/et/test.tsv", sep='\t')
clips_path = "cv-corpus-6.1-2020-12-11/et/clips/"
def clean_sentence(sent):
sent = sent.lower()
# normalize apostrophes
sent = sent.replace("’", "'")
# replace non-alpha characters with space
sent = "".join(ch if ch.isalpha() or ch == "'" else " " for ch in sent)
# remove repeated spaces
sent = " ".join(sent.split())
return sent
targets = []
preds = []
for i, row in tqdm(cv_test.iterrows(), total=cv_test.shape[0]):
row["sentence"] = clean_sentence(row["sentence"])
speech_array, sampling_rate = torchaudio.load(clips_path + row["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
row["speech"] = resampler(speech_array).squeeze().numpy()
inputs = processor(row["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
targets.append(row["sentence"])
preds.append(processor.batch_decode(pred_ids)[0])
print("WER: {:2f}".format(100 * wer.compute(predictions=preds, references=targets)))
```
**Test Result**: 30.74 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training can be found [here](github.com)
|
anton-l/wav2vec2-large-xlsr-53-latvian | 74621107cd9fd6661849cf052da6db1636166858 | 2021-07-05T20:00:29.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"lv",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | anton-l | null | anton-l/wav2vec2-large-xlsr-53-latvian | 2 | null | transformers | 23,680 | ---
language: lv
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Latvian XLSR Wav2Vec2 Large 53 by Anton Lozhkov
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice lv
type: common_voice
args: lv
metrics:
- name: Test WER
type: wer
value: 26.89
---
# Wav2Vec2-Large-XLSR-53-Latvian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Latvian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "lv", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-latvian")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-latvian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Latvian test data of Common Voice.
```python
import torch
import torchaudio
import urllib.request
import tarfile
import pandas as pd
from tqdm.auto import tqdm
from datasets import load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
# Download the raw data instead of using HF datasets to save disk space
data_url = "https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/lv.tar.gz"
filestream = urllib.request.urlopen(data_url)
data_file = tarfile.open(fileobj=filestream, mode="r|gz")
data_file.extractall()
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-latvian")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-latvian")
model.to("cuda")
cv_test = pd.read_csv("cv-corpus-6.1-2020-12-11/lv/test.tsv", sep='\t')
clips_path = "cv-corpus-6.1-2020-12-11/lv/clips/"
def clean_sentence(sent):
sent = sent.lower()
# replace non-alpha characters with space
sent = "".join(ch if ch.isalpha() else " " for ch in sent)
# remove repeated spaces
sent = " ".join(sent.split())
return sent
targets = []
preds = []
for i, row in tqdm(cv_test.iterrows(), total=cv_test.shape[0]):
row["sentence"] = clean_sentence(row["sentence"])
speech_array, sampling_rate = torchaudio.load(clips_path + row["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
row["speech"] = resampler(speech_array).squeeze().numpy()
inputs = processor(row["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
targets.append(row["sentence"])
preds.append(processor.batch_decode(pred_ids)[0])
print("WER: {:2f}".format(100 * wer.compute(predictions=preds, references=targets)))
```
**Test Result**: 26.89 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
anuragshas/wav2vec2-large-xls-r-300m-as | 1e288941bc533a80ef3df7eafc203eccc653beb7 | 2022-03-23T18:32:45.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"as",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"hf-asr-leaderboard",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | anuragshas | null | anuragshas/wav2vec2-large-xls-r-300m-as | 2 | 1 | transformers | 23,681 | ---
language:
- as
license: apache-2.0
tags:
- automatic-speech-recognition
- hf-asr-leaderboard
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-300m-as
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: mozilla-foundation/common_voice_7_0
name: Common Voice 7
args: as
metrics:
- type: wer
value: 56.995
name: Test WER
- name: Test CER
type: cer
value: 20.39
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-as
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9068
- Wer: 0.6679
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.12
- num_epochs: 240
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 5.7027 | 21.05 | 400 | 3.4157 | 1.0 |
| 1.1638 | 42.1 | 800 | 1.3498 | 0.7461 |
| 0.2266 | 63.15 | 1200 | 1.6147 | 0.7273 |
| 0.1473 | 84.21 | 1600 | 1.6649 | 0.7108 |
| 0.1043 | 105.26 | 2000 | 1.7691 | 0.7090 |
| 0.0779 | 126.31 | 2400 | 1.8300 | 0.7009 |
| 0.0613 | 147.36 | 2800 | 1.8681 | 0.6916 |
| 0.0471 | 168.41 | 3200 | 1.8567 | 0.6875 |
| 0.0343 | 189.46 | 3600 | 1.9054 | 0.6840 |
| 0.0265 | 210.51 | 4000 | 1.9020 | 0.6786 |
| 0.0219 | 231.56 | 4400 | 1.9068 | 0.6679 |
### Framework versions
- Transformers 4.16.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test`
```bash
python eval.py --model_id anuragshas/wav2vec2-large-xls-r-300m-as --dataset mozilla-foundation/common_voice_7_0 --config as --split test
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "anuragshas/wav2vec2-large-xls-r-300m-as"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_7_0", "as", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
# => "জাহাজত তো তিশকুৰলৈ যাব কিন্তু জহাজিটো আহিপনে"
```
### Eval results on Common Voice 7 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 67 | 56.995 |
|
anuragshas/wav2vec2-large-xlsr-53-ia | 02a426bcbf0fdc935bf523cb9cd22b821447112b | 2021-07-05T21:04:27.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"ia",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | anuragshas | null | anuragshas/wav2vec2-large-xlsr-53-ia | 2 | null | transformers | 23,682 | ---
language: ia
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Anurag Singh XLSR Wav2Vec2 Large 53 Interlingua
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ia
type: common_voice
args: ia
metrics:
- name: Test WER
type: wer
value: 22.08
---
# Wav2Vec2-Large-XLSR-53-Interlingua
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Interlingua using the [Common Voice](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ia", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-ia")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-ia")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Interlingua test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ia", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-ia")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-ia")
model.to("cuda")
chars_to_ignore_regex = '[\.\,\!\?\-\"\:\;\'\“\”]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the preprocessed speech arrays and decode the predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 22.08 %
## Training
The Common Voice `train` and `validation` datasets were used for training. |
anuragshas/wav2vec2-large-xlsr-53-vietnamese | ae41602462f0789331ad20dc825805df42e79852 | 2021-07-05T21:37:41.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"vi",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | anuragshas | null | anuragshas/wav2vec2-large-xlsr-53-vietnamese | 2 | null | transformers | 23,683 | ---
language: vi
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Anurag Singh XLSR Wav2Vec2 Large 53 Vietnamese
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice vi
type: common_voice
args: vi
metrics:
- name: Test WER
type: wer
value: 66.78
---
# Wav2Vec2-Large-XLSR-53-Vietnamese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Vietnamese using the [Common Voice](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "vi", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-vietnamese")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-vietnamese")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Vietnamese test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "vi", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-vietnamese")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-vietnamese")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the preprocessed speech arrays and decode the predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 66.78 %
## Training
The Common Voice `train` and `validation` datasets were used for training. |
anuragshas/wav2vec2-xls-r-1b-hi-cv8 | 29e985f94f926e3b2e2967f5b69be5c78aeddaeb | 2022-01-30T15:20:16.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hi",
"dataset:common_voice",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | anuragshas | null | anuragshas/wav2vec2-xls-r-1b-hi-cv8 | 2 | null | transformers | 23,684 | ---
language:
- hi
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6780
- Wer: 0.3670
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 50.0
- mixed_precision_training: Native AMP
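As a rough sketch, these hyperparameters map onto `transformers.TrainingArguments` as follows; the output directory is a placeholder and every unlisted argument keeps its default:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-xls-r-1b-hi-cv8",  # placeholder path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,           # effective train batch size 32
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1500,
    num_train_epochs=50.0,
    fp16=True,                               # native AMP mixed precision
)
```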
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.514 | 2.07 | 400 | 1.4589 | 0.8531 |
| 1.4289 | 4.15 | 800 | 0.8940 | 0.6475 |
| 1.276 | 6.22 | 1200 | 0.7743 | 0.6089 |
| 1.2213 | 8.29 | 1600 | 0.6919 | 0.4973 |
| 1.1522 | 10.36 | 2000 | 0.6635 | 0.4588 |
| 1.0914 | 12.44 | 2400 | 0.6839 | 0.4586 |
| 1.0499 | 14.51 | 2800 | 0.7151 | 0.4467 |
| 1.0238 | 16.58 | 3200 | 0.6824 | 0.4436 |
| 0.9963 | 18.65 | 3600 | 0.6872 | 0.4437 |
| 0.9728 | 20.73 | 4000 | 0.7047 | 0.4244 |
| 0.9373 | 22.8 | 4400 | 0.6569 | 0.4189 |
| 0.9028 | 24.87 | 4800 | 0.6623 | 0.4094 |
| 0.8759 | 26.94 | 5200 | 0.6723 | 0.4152 |
| 0.8824 | 29.02 | 5600 | 0.6467 | 0.4017 |
| 0.8371 | 31.09 | 6000 | 0.6911 | 0.4080 |
| 0.8205 | 33.16 | 6400 | 0.7145 | 0.4063 |
| 0.7837 | 35.23 | 6800 | 0.7037 | 0.3930 |
| 0.7708 | 37.31 | 7200 | 0.6925 | 0.3840 |
| 0.7359 | 39.38 | 7600 | 0.7034 | 0.3829 |
| 0.7153 | 41.45 | 8000 | 0.7030 | 0.3794 |
| 0.7127 | 43.52 | 8400 | 0.6823 | 0.3761 |
| 0.6884 | 45.6 | 8800 | 0.6854 | 0.3711 |
| 0.6835 | 47.67 | 9200 | 0.6723 | 0.3665 |
| 0.6703 | 49.74 | 9600 | 0.6773 | 0.3668 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
anuragshas/wav2vec2-xls-r-300m-mr-cv8-with-lm | b5c4a33c89d120476632e71a31f6434dc91fcfff | 2022-02-06T16:11:16.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mr",
"dataset:common_voice",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | anuragshas | null | anuragshas/wav2vec2-xls-r-300m-mr-cv8-with-lm | 2 | null | transformers | 23,685 | ---
language:
- mr
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6693
- Wer: 0.5921
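A minimal inference sketch with the bundled language model, assuming the repository ships a `Wav2Vec2ProcessorWithLM` (so `pyctcdecode` and `kenlm` must be installed) and that Common Voice 8 audio is resampled from 48 kHz to 16 kHz:
```python
import torch
import torchaudio.functional as F
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor

model_id = "anuragshas/wav2vec2-xls-r-300m-mr-cv8-with-lm"
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

# Stream one Marathi test sample and resample it to 16 kHz
sample = next(iter(load_dataset(
    "mozilla-foundation/common_voice_8_0", "mr", split="test", streaming=True, use_auth_token=True)))
audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()

inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(logits.numpy()).text)
```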
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 500.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 4.9504 | 18.18 | 400 | 4.6730 | 1.0 |
| 3.3766 | 36.36 | 800 | 3.3464 | 1.0 |
| 3.1128 | 54.55 | 1200 | 3.0177 | 0.9980 |
| 1.7966 | 72.73 | 1600 | 0.8733 | 0.8039 |
| 1.4085 | 90.91 | 2000 | 0.5555 | 0.6458 |
| 1.1731 | 109.09 | 2400 | 0.4930 | 0.6438 |
| 1.0271 | 127.27 | 2800 | 0.4780 | 0.6093 |
| 0.9045 | 145.45 | 3200 | 0.4647 | 0.6578 |
| 0.807 | 163.64 | 3600 | 0.4505 | 0.5925 |
| 0.741 | 181.82 | 4000 | 0.4746 | 0.6025 |
| 0.6706 | 200.0 | 4400 | 0.5004 | 0.5844 |
| 0.6186 | 218.18 | 4800 | 0.4984 | 0.5997 |
| 0.5508 | 236.36 | 5200 | 0.5298 | 0.5636 |
| 0.5123 | 254.55 | 5600 | 0.5410 | 0.5110 |
| 0.4623 | 272.73 | 6000 | 0.5591 | 0.5383 |
| 0.4281 | 290.91 | 6400 | 0.5775 | 0.5600 |
| 0.4045 | 309.09 | 6800 | 0.5924 | 0.5580 |
| 0.3651 | 327.27 | 7200 | 0.5671 | 0.5684 |
| 0.343 | 345.45 | 7600 | 0.6083 | 0.5945 |
| 0.3085 | 363.64 | 8000 | 0.6243 | 0.5728 |
| 0.2941 | 381.82 | 8400 | 0.6245 | 0.5580 |
| 0.2735 | 400.0 | 8800 | 0.6458 | 0.5804 |
| 0.262 | 418.18 | 9200 | 0.6566 | 0.5824 |
| 0.2578 | 436.36 | 9600 | 0.6558 | 0.5965 |
| 0.2388 | 454.55 | 10000 | 0.6598 | 0.5993 |
| 0.2328 | 472.73 | 10400 | 0.6700 | 0.6041 |
| 0.2286 | 490.91 | 10800 | 0.6684 | 0.5957 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
|
anuragshas/wav2vec2-xls-r-300m-pa-IN-cv8-with-lm | ba0a3d8a4a176ddf028a4d0a356141b6c673dd45 | 2022-02-03T12:28:34.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pa-IN",
"dataset:common_voice",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | anuragshas | null | anuragshas/wav2vec2-xls-r-300m-pa-IN-cv8-with-lm | 2 | null | transformers | 23,686 | ---
language:
- pa-IN
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - PA-IN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6864
- Wer: 0.6707
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 200.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 4.3322 | 14.81 | 400 | 3.7450 | 1.0 |
| 3.2662 | 29.63 | 800 | 3.2571 | 0.9996 |
| 1.6408 | 44.44 | 1200 | 0.9098 | 0.8162 |
| 1.2289 | 59.26 | 1600 | 0.6757 | 0.7099 |
| 1.0551 | 74.07 | 2000 | 0.6417 | 0.7044 |
| 0.966 | 88.89 | 2400 | 0.6365 | 0.6789 |
| 0.8713 | 103.7 | 2800 | 0.6617 | 0.6954 |
| 0.8055 | 118.52 | 3200 | 0.6371 | 0.6762 |
| 0.7489 | 133.33 | 3600 | 0.6798 | 0.6911 |
| 0.7073 | 148.15 | 4000 | 0.6567 | 0.6731 |
| 0.6609 | 162.96 | 4400 | 0.6742 | 0.6840 |
| 0.6435 | 177.78 | 4800 | 0.6862 | 0.6633 |
| 0.6282 | 192.59 | 5200 | 0.6865 | 0.6731 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
|
anuragshas/wav2vec2-xls-r-300m-ta-cv8 | 49bd874f83ce638061af7841cf8cef92470bb9d9 | 2022-01-31T05:05:09.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | anuragshas | null | anuragshas/wav2vec2-xls-r-300m-ta-cv8 | 2 | null | transformers | 23,687 | Entry not found |
anuragshas/wav2vec2-xlsr-53-rm-vallader-with-lm | 2f164ad5aad44a75eface5536215bddefd98d74b | 2022-01-26T16:38:21.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | anuragshas | null | anuragshas/wav2vec2-xlsr-53-rm-vallader-with-lm | 2 | null | transformers | 23,688 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-xlsr-53-rm-vallader-with-lm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-53-rm-vallader-with-lm
This model is a fine-tuned version of [anuragshas/wav2vec2-large-xlsr-53-rm-vallader](https://huggingface.co/anuragshas/wav2vec2-large-xlsr-53-rm-vallader) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4552
- Wer: 0.3206
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.112
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2379 | 3.12 | 100 | 0.4041 | 0.3396 |
| 0.103 | 6.25 | 200 | 0.4400 | 0.3337 |
| 0.0664 | 9.38 | 300 | 0.4239 | 0.3315 |
| 0.0578 | 12.5 | 400 | 0.4303 | 0.3267 |
| 0.0446 | 15.62 | 500 | 0.4575 | 0.3274 |
| 0.041 | 18.75 | 600 | 0.4451 | 0.3223 |
| 0.0402 | 21.88 | 700 | 0.4507 | 0.3206 |
| 0.0374 | 25.0 | 800 | 0.4649 | 0.3208 |
| 0.0371 | 28.12 | 900 | 0.4552 | 0.3206 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
|
any0019/text_style_mlm_negative | c14b1afa75ebf5e1daeba4bce8da237abd89269e | 2021-12-14T13:35:28.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | any0019 | null | any0019/text_style_mlm_negative | 2 | null | transformers | 23,689 | Entry not found |
any0019/text_style_mlm_positive | 6ce89b12e724a952e16077bb99505f2079b97dc1 | 2021-12-14T13:33:17.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | any0019 | null | any0019/text_style_mlm_positive | 2 | null | transformers | 23,690 | Entry not found |
aodiniz/bert_uncased_L-2_H-128_A-2_cord19-200616 | 37acd8afeb489002683e52713a0946e0d426970b | 2021-05-18T23:47:28.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | aodiniz | null | aodiniz/bert_uncased_L-2_H-128_A-2_cord19-200616 | 2 | null | transformers | 23,691 | Entry not found |
aodiniz/bert_uncased_L-2_H-128_A-2_cord19-200616_squad2 | 21d86fb48424d6cc038efed96582058f7f4e95fe | 2021-05-18T23:47:46.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | aodiniz | null | aodiniz/bert_uncased_L-2_H-128_A-2_cord19-200616_squad2 | 2 | null | transformers | 23,692 | Entry not found |
aodiniz/bert_uncased_L-2_H-128_A-2_cord19-200616_squad2_covid-qna | 5cab273aeabca3366902bdb89d16ff49840240db | 2021-05-18T23:48:03.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | aodiniz | null | aodiniz/bert_uncased_L-2_H-128_A-2_cord19-200616_squad2_covid-qna | 2 | null | transformers | 23,693 | Entry not found |
aodiniz/bert_uncased_L-2_H-128_A-2_squad2 | 6bd8aabce03a51e70c3db6cde5453025c6dc7fa7 | 2021-05-18T23:48:20.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | aodiniz | null | aodiniz/bert_uncased_L-2_H-128_A-2_squad2 | 2 | null | transformers | 23,694 | Entry not found |
aodiniz/bert_uncased_L-4_H-512_A-8_squad2 | f84ef949edc4bedcfb98db41ffb1c3ac43a373c0 | 2021-05-18T23:54:35.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | aodiniz | null | aodiniz/bert_uncased_L-4_H-512_A-8_squad2 | 2 | null | transformers | 23,695 | Entry not found |
aodiniz/bert_uncased_L-6_H-128_A-2_squad2 | 4d9a1c5e8044ec59f89b2e3f62989f5711ceecc2 | 2021-05-18T23:59:48.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | aodiniz | null | aodiniz/bert_uncased_L-6_H-128_A-2_squad2 | 2 | null | transformers | 23,696 | Entry not found |
aorona/dickens | 983601700a8290ea112074c1bbbc1bfa62f24f38 | 2021-08-03T19:39:45.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | aorona | null | aorona/dickens | 2 | null | transformers | 23,697 | Entry not found |
arampacha/wav2vec2-xls-r-300m-hy-cv | 4a79394c375e008eda5234faa2519942a380c1dd | 2022-02-16T19:45:37.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"hy-AM",
"dataset:common_voice",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"hy",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | arampacha | null | arampacha/wav2vec2-xls-r-300m-hy-cv | 2 | null | transformers | 23,698 | ---
language:
- hy-AM
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- hy
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HY-AM dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5891
- Wer: 0.6569
**Note**: If you aim for best performance use [this model](https://huggingface.co/arampacha/wav2vec2-xls-r-300m-hy). It is trained using noizy student procedure and achieves considerably better results.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 9.167 | 16.67 | 100 | 3.5599 | 1.0 |
| 3.2645 | 33.33 | 200 | 3.1771 | 1.0 |
| 3.1509 | 50.0 | 300 | 3.1321 | 1.0 |
| 3.0757 | 66.67 | 400 | 2.8594 | 1.0 |
| 2.5274 | 83.33 | 500 | 1.5286 | 0.9797 |
| 1.6826 | 100.0 | 600 | 0.8058 | 0.7974 |
| 1.2868 | 116.67 | 700 | 0.6713 | 0.7279 |
| 1.1262 | 133.33 | 800 | 0.6308 | 0.7034 |
| 1.0408 | 150.0 | 900 | 0.6056 | 0.6745 |
| 0.9617 | 166.67 | 1000 | 0.5891 | 0.6569 |
| 0.9196 | 183.33 | 1100 | 0.5913 | 0.6432 |
| 0.8853 | 200.0 | 1200 | 0.5924 | 0.6347 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
aretw0/t5-small-finetuned-en-to-ro-dataset_20-input_64 | 32ab30933e822d69020e702ec3ca3505a4f507bc | 2021-12-03T00:53:06.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wmt16",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | aretw0 | null | aretw0/t5-small-finetuned-en-to-ro-dataset_20-input_64 | 2 | null | transformers | 23,699 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: t5-small-finetuned-en-to-ro-dataset_20-input_64
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 8.6652
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-ro-dataset_20-input_64
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4335
- Bleu: 8.6652
- Gen Len: 18.2596
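A minimal usage sketch, assuming the fine-tune keeps t5-small's translation prefix and that the `input_64` suffix in the name refers to a 64-token input limit:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "aretw0/t5-small-finetuned-en-to-ro-dataset_20-input_64"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# The task prefix mirrors t5-small's original WMT16 en-ro setup (assumed here)
text = "translate English to Romanian: The weather is nice today."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=64)
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```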
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 0.6351 | 1.0 | 7629 | 1.4335 | 8.6652 | 18.2596 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|