Dataset schema (one record per model; fields appear in this order, separated by ` | `, and each record ends with ` |`; ⌀ marks nullable columns):

| Column | Type | Range / Notes |
|---|---|---|
| modelId | string | 4–112 chars |
| sha | string | 40 chars |
| lastModified | string | 24 chars (ISO 8601 timestamp) |
| tags | sequence | list of tag strings |
| pipeline_tag | string | 29 classes |
| private | bool | 1 class |
| author | string | 2–38 chars, nullable (⌀) |
| config | null | always null |
| id | string | 4–112 chars |
| downloads | float64 | 0–36.8M, nullable (⌀) |
| likes | float64 | 0–712, nullable (⌀) |
| library_name | string | 17 classes |
| __index_level_0__ | int64 | 0–38.5k |
| readme | string | 0–186k chars |
ricardo-filho/sbertimbau-base-quora-multitask | 7e221ad52d85bf9e30da1dc568e585967e31fc2c | 2021-08-17T10:20:30.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | ricardo-filho | null | ricardo-filho/sbertimbau-base-quora-multitask | 1 | null | sentence-transformers | 30,200 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch

# Mean pooling - take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```
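Continuing the snippet above, the embeddings can be compared with cosine similarity for tasks like semantic search — a minimal sketch, not part of the original card:
```python
import torch.nn.functional as F

# L2-normalize so that a plain dot product equals cosine similarity
normalized = F.normalize(sentence_embeddings, p=2, dim=1)
cosine_scores = normalized @ normalized.T
print(cosine_scores)  # entry [i, j] is the similarity of sentences i and j
```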
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 3227 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 4333 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.OnlineContrastiveLoss.OnlineContrastiveLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
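Taken together, the parameters above correspond to a multi-task `fit()` call roughly like the sketch below. The base checkpoint and the `InputExample` lists (`duplicate_examples`, `pair_examples`) are hypothetical placeholders, not taken from the card:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer('neuralmind/bert-base-portuguese-cased')  # hypothetical base checkpoint

# One dataloader per objective, matching the two DataLoaders listed above
mnrl_loader = DataLoader(duplicate_examples, shuffle=True, batch_size=64)
contrastive_loader = DataLoader(pair_examples, shuffle=True, batch_size=64)

mnrl_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)
contrastive_loss = losses.OnlineContrastiveLoss(model)

# Multi-task training: sentence-transformers draws a batch from each objective at every step
model.fit(
    train_objectives=[(mnrl_loader, mnrl_loss), (contrastive_loader, contrastive_loss)],
    epochs=10,
    warmup_steps=1000,
    optimizer_params={'lr': 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```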
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
richiellei/DialoGPT-small-rick | 9ca65d88dbc656c7f1fff6a8b0f4bba658b693f7 | 2022-01-17T18:48:47.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | richiellei | null | richiellei/DialoGPT-small-rick | 1 | null | transformers | 30,201 | ---
tags:
- conversational
---
# Rick DialoGPT Model |
ridwanpratama/DialoGPT-small-misaki | 54477dc7564b9efb80e0dbf286db78139e78be46 | 2021-09-19T15:22:49.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | ridwanpratama | null | ridwanpratama/DialoGPT-small-misaki | 1 | null | transformers | 30,202 | ---
tags:
- conversational
---
# Misaki Ayuzawa Model |
rifkat/pubchem_1M | 89e2ba6fccb0871c0e8e917be68bc241c731af58 | 2021-07-23T11:42:19.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | rifkat | null | rifkat/pubchem_1M | 1 | null | transformers | 30,203 | This model is based on the HuggingFace implementation of the RoBERTa transformer. Our RoBERTa uses 12 attention heads and 6 layers, resulting in 72 distinct attention mechanisms. We adopted RoBERTa's original pretraining procedure, which masks 15 percent of the tokens in each input string. We used a vocabulary of at most 52K tokens and a maximum sequence length of 512. We trained for 10 epochs on the 1M PubChem collection; the loss fell from 2.9 to 0.33. We make this model available to you. |
rifkat/uztext_568Mb_Roberta_BPE | fe72ec3c6d77ecda296d88dea39b81320236c0b8 | 2021-10-18T05:32:18.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | rifkat | null | rifkat/uztext_568Mb_Roberta_BPE | 1 | null | transformers | 30,204 | <p><b>UzRoBerta model.</b>
A pretrained model for Uzbek (Cyrillic script) for masked language modeling and next-sentence prediction.
<p><b>Training data.</b>
The UzRoBerta model was pretrained on ≈167K news articles (≈568Mb).
|
ringabelle/bert-base-cased-finetuned-COVID-tweets | f28bef2c9bc6c5e43fea1f9ce71c80da066b5333 | 2021-10-19T11:38:14.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | ringabelle | null | ringabelle/bert-base-cased-finetuned-COVID-tweets | 1 | null | transformers | 30,205 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-cased-finetuned-COVID-tweets
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-COVID-tweets
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2694
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 194 | 2.4419 |
| No log | 2.0 | 388 | 2.4230 |
| 2.5821 | 3.0 | 582 | 2.3678 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
riteshsinha/distilgpt2-fine-tuned-001 | 1bbfe60ef1de3052f3fcd57e2169b751ea45cd12 | 2021-05-23T12:16:18.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | riteshsinha | null | riteshsinha/distilgpt2-fine-tuned-001 | 1 | null | transformers | 30,206 | Entry not found |
rjrohit/wav2vec2-base-rj-try-4 | a3937be443ef37351088ff8921b90d78a5ea1585 | 2022-02-07T09:34:45.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | rjrohit | null | rjrohit/wav2vec2-base-rj-try-4 | 1 | null | transformers | 30,207 | Entry not found |
rkmt/wav2vec2-base-timit-demo-colab | 6fcbae8be8ebe23674770d7d40cb3c5ac411b755 | 2021-12-30T00:39:31.000Z | [
"pytorch",
"tensorboard",
"hubert",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | rkmt | null | rkmt/wav2vec2-base-timit-demo-colab | 1 | null | transformers | 30,208 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/hubert-large-ls960-ft](https://huggingface.co/facebook/hubert-large-ls960-ft) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0280
- Wer: 0.0082
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.1152 | 1.42 | 500 | 0.0416 | 0.0159 |
| 0.0803 | 2.83 | 1000 | 0.0372 | 0.0144 |
| 0.0672 | 4.25 | 1500 | 0.0345 | 0.0119 |
| 0.0564 | 5.67 | 2000 | 0.0338 | 0.0106 |
| 0.0513 | 7.08 | 2500 | 0.0307 | 0.0100 |
| 0.0448 | 8.5 | 3000 | 0.0343 | 0.0098 |
| 0.0374 | 9.92 | 3500 | 0.0300 | 0.0084 |
| 0.0368 | 11.33 | 4000 | 0.0314 | 0.0086 |
| 0.0388 | 12.75 | 4500 | 0.0283 | 0.0089 |
| 0.0277 | 14.16 | 5000 | 0.0302 | 0.0089 |
| 0.0298 | 15.58 | 5500 | 0.0298 | 0.0089 |
| 0.0271 | 17.0 | 6000 | 0.0320 | 0.0098 |
| 0.024 | 18.41 | 6500 | 0.0286 | 0.0088 |
| 0.0236 | 19.83 | 7000 | 0.0284 | 0.0084 |
| 0.0238 | 21.25 | 7500 | 0.0290 | 0.0086 |
| 0.0227 | 22.66 | 8000 | 0.0284 | 0.0093 |
| 0.0198 | 24.08 | 8500 | 0.0280 | 0.0088 |
| 0.0225 | 25.5 | 9000 | 0.0281 | 0.0086 |
| 0.018 | 26.91 | 9500 | 0.0280 | 0.0082 |
| 0.0178 | 28.33 | 10000 | 0.0280 | 0.0082 |
| 0.0209 | 29.75 | 10500 | 0.0280 | 0.0082 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
roivian/manningLp | 97926e8c8632c122be45439fdb80456e2b16352f | 2021-10-24T00:36:44.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | roivian | null | roivian/manningLp | 1 | null | transformers | 30,209 | Entry not found |
ronanki/ml_mpnet_768_MNR | 002e2b581e4fa271e801405b0bec5d4e8b93cf11 | 2022-02-22T18:16:43.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | ronanki | null | ronanki/ml_mpnet_768_MNR | 1 | null | sentence-transformers | 30,210 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ronanki/ml_mpnet_768_MNR
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ronanki/ml_mpnet_768_MNR')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch

# Mean pooling - take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ronanki/ml_mpnet_768_MNR')
model = AutoModel.from_pretrained('ronanki/ml_mpnet_768_MNR')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ronanki/ml_mpnet_768_MNR)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 29 with parameters:
```
{'batch_size': 32}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 2,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
rossanez/t5-small-finetuned-de-en-256-epochs2 | 78e0dcfc21b1500f75f105738c119940410e7dd0 | 2021-12-01T01:08:03.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wmt14",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | rossanez | null | rossanez/t5-small-finetuned-de-en-256-epochs2 | 1 | null | transformers | 30,211 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt14
metrics:
- bleu
model-index:
- name: t5-small-finetuned-de-en-256-epochs2
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt14
type: wmt14
args: de-en
metrics:
- name: Bleu
type: bleu
value: 7.8579
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-de-en-256-epochs2
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt14 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1073
- Bleu: 7.8579
- Gen Len: 17.3896
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 188 | 2.1179 | 7.8498 | 17.382 |
| No log | 2.0 | 376 | 2.1073 | 7.8579 | 17.3896 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
rossanez/t5-small-finetuned-de-en-256 | 033cd7d3512409b5f43ee9a9d5b1f633fbedd318 | 2021-12-01T11:08:44.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wmt14",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | rossanez | null | rossanez/t5-small-finetuned-de-en-256 | 1 | null | transformers | 30,212 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt14
model-index:
- name: t5-small-finetuned-de-en-256
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-de-en-256
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt14 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 188 | 2.2663 | 4.5343 | 17.698 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
rossanez/t5-small-finetuned-de-en-64 | a70b7324f1211ddb990dc66c5e441af875586dec | 2021-12-01T11:02:01.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wmt14",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | rossanez | null | rossanez/t5-small-finetuned-de-en-64 | 1 | null | transformers | 30,213 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt14
model-index:
- name: t5-small-finetuned-de-en-64
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-de-en-64
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt14 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 188 | 2.3808 | 3.1482 | 17.8019 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
rossanez/t5-small-finetuned-de-en-batch8 | 799dc17fd1e0955aab6af063d31659a74fd9e912 | 2021-12-04T14:31:59.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wmt14",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | rossanez | null | rossanez/t5-small-finetuned-de-en-batch8 | 1 | null | transformers | 30,214 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt14
metrics:
- bleu
model-index:
- name: t5-small-finetuned-de-en-batch8
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt14
type: wmt14
args: de-en
metrics:
- name: Bleu
type: bleu
value: 10.039
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-de-en-batch8
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt14 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1282
- Bleu: 10.039
- Gen Len: 17.3839
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 375 | 2.0912 | 9.9147 | 17.3084 |
| 1.5593 | 2.0 | 750 | 2.0858 | 9.9386 | 17.4299 |
| 1.4383 | 3.0 | 1125 | 2.1137 | 9.9804 | 17.34 |
| 1.3562 | 4.0 | 1500 | 2.1198 | 9.9685 | 17.367 |
| 1.3562 | 5.0 | 1875 | 2.1282 | 10.039 | 17.3839 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
rossanez/t5-small-finetuned-de-en-epochs5 | 54a49201943cd21eba9094fc1380cdbf75452d7f | 2021-12-04T12:47:11.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wmt14",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | rossanez | null | rossanez/t5-small-finetuned-de-en-epochs5 | 1 | null | transformers | 30,215 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt14
metrics:
- bleu
model-index:
- name: t5-small-finetuned-de-en-epochs5
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt14
type: wmt14
args: de-en
metrics:
- name: Bleu
type: bleu
value: 5.8913
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-de-en-epochs5
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt14 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2040
- Bleu: 5.8913
- Gen Len: 17.5408
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 188 | 2.3366 | 2.8075 | 17.8188 |
| No log | 2.0 | 376 | 2.2557 | 4.8765 | 17.626 |
| 2.6928 | 3.0 | 564 | 2.2246 | 5.5454 | 17.5534 |
| 2.6928 | 4.0 | 752 | 2.2086 | 5.8511 | 17.5461 |
| 2.6928 | 5.0 | 940 | 2.2040 | 5.8913 | 17.5408 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
rossanez/t5-small-finetuned-de-en-final | f4cf1256d3c9d6d91ed95bb22a67ec757baa362e | 2021-12-04T14:59:44.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wmt14",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | rossanez | null | rossanez/t5-small-finetuned-de-en-final | 1 | null | transformers | 30,216 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt14
metrics:
- bleu
model-index:
- name: t5-small-finetuned-de-en-final
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt14
type: wmt14
args: de-en
metrics:
- name: Bleu
type: bleu
value: 9.8394
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-de-en-final
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt14 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3285
- Bleu: 9.8394
- Gen Len: 17.325
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 188 | 2.3867 | 9.7928 | 17.2581 |
| No log | 2.0 | 376 | 2.3942 | 9.7222 | 17.4186 |
| 0.7948 | 3.0 | 564 | 2.3909 | 9.6495 | 17.3513 |
| 0.7948 | 4.0 | 752 | 2.3496 | 9.7376 | 17.3417 |
| 0.7948 | 5.0 | 940 | 2.3285 | 9.8394 | 17.325 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
rossanez/t5-small-finetuned-de-en-lr2e-4 | 1d56f514e487cfbfd56f893ecdda8555fd9effa2 | 2021-12-04T13:15:11.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wmt14",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | rossanez | null | rossanez/t5-small-finetuned-de-en-lr2e-4 | 1 | null | transformers | 30,217 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt14
metrics:
- bleu
model-index:
- name: t5-small-finetuned-de-en-lr2e-4
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt14
type: wmt14
args: de-en
metrics:
- name: Bleu
type: bleu
value: 9.12
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-de-en-lr2e-4
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt14 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0115
- Bleu: 9.12
- Gen Len: 17.4026
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 188 | 2.0701 | 8.1225 | 17.4542 |
| No log | 2.0 | 376 | 2.0316 | 8.5741 | 17.4229 |
| 2.2224 | 3.0 | 564 | 2.0229 | 8.9227 | 17.3703 |
| 2.2224 | 4.0 | 752 | 2.0105 | 9.0764 | 17.4053 |
| 2.2224 | 5.0 | 940 | 2.0115 | 9.12 | 17.4026 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
rossanez/t5-small-finetuned-de-en-wd-01 | 334a9d8bc5fd052478932088f5d357772e450fb2 | 2021-12-04T13:43:20.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wmt14",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | rossanez | null | rossanez/t5-small-finetuned-de-en-wd-01 | 1 | null | transformers | 30,218 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt14
metrics:
- bleu
model-index:
- name: t5-small-finetuned-de-en-wd-01
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt14
type: wmt14
args: de-en
metrics:
- name: Bleu
type: bleu
value: 9.6027
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-de-en-wd-01
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt14 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0482
- Bleu: 9.6027
- Gen Len: 17.3776
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 188 | 2.0502 | 9.3675 | 17.3983 |
| No log | 2.0 | 376 | 2.0590 | 9.4393 | 17.3869 |
| 1.6509 | 3.0 | 564 | 2.0639 | 9.3886 | 17.3806 |
| 1.6509 | 4.0 | 752 | 2.0498 | 9.5802 | 17.3846 |
| 1.6509 | 5.0 | 940 | 2.0482 | 9.6027 | 17.3776 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
rpowalski/layoutlm-base-qa | 90cae1b6785c3c99ee863484f35151e100eb3762 | 2021-06-17T09:44:07.000Z | [
"pytorch"
] | null | false | rpowalski | null | rpowalski/layoutlm-base-qa | 1 | null | null | 30,219 | Entry not found |
rsvp-AI-ca/bert-uncased-base-50k | 08fd47e6c69e3e15a4b3edefe79055ce8e362823 | 2020-12-13T03:01:46.000Z | [
"pytorch",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | rsvp-AI-ca | null | rsvp-AI-ca/bert-uncased-base-50k | 1 | null | transformers | 30,220 | Entry not found |
rsvp-AI-ca/segabert-large | adc4283be00ae6fb56610a23d310b7959f1fc856 | 2021-05-20T04:34:27.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | rsvp-AI-ca | null | rsvp-AI-ca/segabert-large | 1 | null | transformers | 30,221 | Entry not found |
rtoguchi/t5-small-finetuned-en-to-ro-weight_decay_0.001 | 074d838571d26ad65204f36110cffaa0cfec109e | 2021-12-02T17:46:55.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wmt16",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | rtoguchi | null | rtoguchi/t5-small-finetuned-en-to-ro-weight_decay_0.001 | 1 | null | transformers | 30,222 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: t5-small-finetuned-en-to-ro-weight_decay_0.001
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 7.3524
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-ro-weight_decay_0.001
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4509
- Bleu: 7.3524
- Gen Len: 18.2581
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 0.6488 | 1.0 | 7629 | 1.4509 | 7.3524 | 18.2581 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ruiqi-zhong/verifier11b | 7bbeb4899016bcdd55461251bbc98fec5576fd98 | 2022-01-27T23:19:15.000Z | [
"pytorch",
"t5",
"feature-extraction",
"transformers"
] | feature-extraction | false | ruiqi-zhong | null | ruiqi-zhong/verifier11b | 1 | null | transformers | 30,223 | Entry not found |
ruishan-lin/investopedia-QnA | f5d3e5cc67471f480c195497ccb5cfeaf2e80d63 | 2021-01-09T00:22:09.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | ruishan-lin | null | ruishan-lin/investopedia-QnA | 1 | null | transformers | 30,224 | ---hello
|
ruriko/konoaqua | b5641bcd2ee96a73ba8479bf0493181c94c9bea6 | 2021-10-10T15:12:44.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | ruriko | null | ruriko/konoaqua | 1 | null | transformers | 30,225 | ---
tags:
- conversational
---
# hope it works |
russab0/distilbert-qa | b54e28314538cfb598144e50936291f945a99ae1 | 2021-04-27T16:27:50.000Z | [
"pytorch",
"distilbert",
"multiple-choice",
"english",
"dataset:race",
"transformers",
"license:mit"
] | multiple-choice | false | russab0 | null | russab0/distilbert-qa | 1 | null | transformers | 30,226 | ---
language: "english"
license: "mit"
datasets:
- race
metrics:
- accuracy
---
# MCQ with Distilbert |
rwightman/test_model_rnv250 | e1fe21ba65825e29ddcde1edc6b0abb9ee80c2a4 | 2021-11-24T00:49:15.000Z | [
"pytorch",
"timm",
"image-classification"
] | image-classification | false | rwightman | null | rwightman/test_model_rnv250 | 1 | null | timm | 30,227 | ---
tags:
- image-classification
- timm
library_tag: timm
---
# Model card for test_model_rnv250 |
rwightman/test_model_rnv250b | 62a161cef93a9c96be887fd573250ab005c6685f | 2021-11-24T00:52:34.000Z | [
"pytorch",
"timm",
"image-classification"
] | image-classification | false | rwightman | null | rwightman/test_model_rnv250b | 1 | null | timm | 30,228 | ---
tags:
- image-classification
- timm
library_tag: timm
---
# Model card for test_model_rnv250b |
ryo0634/xlm-roberta-base-with-extra-training | 5e03ef79a1ee41ecd6e761a0eab4b3702a337820 | 2022-01-12T13:53:25.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | ryo0634 | null | ryo0634/xlm-roberta-base-with-extra-training | 1 | null | transformers | 30,229 | Entry not found |
s3h/opus-mt-ar-en-finetuned-src-to-trg-testing | b761c05d1b10afe37a30856259bb128250298829 | 2021-12-22T20:20:22.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | s3h | null | s3h/opus-mt-ar-en-finetuned-src-to-trg-testing | 1 | null | transformers | 30,230 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-ar-en-finetuned-src-to-trg-testing
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-ar-en-finetuned-src-to-trg-testing
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ar-en](https://huggingface.co/Helsinki-NLP/opus-mt-ar-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3973
- Bleu: 0.1939
- Gen Len: 37.6364
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Apex, opt level O1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 5 | 3.4353 | 0.1994 | 36.6364 |
| No log | 2.0 | 10 | 3.4015 | 0.1994 | 36.0909 |
| No log | 3.0 | 15 | 3.3973 | 0.1939 | 37.6364 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.5.0
- Datasets 1.17.0
- Tokenizers 0.10.3
|
saattrupdan/icebert-texas-squad-is | e0446a8e2ed9a28c328cfffc8202cf89b8d56c49 | 2022-02-01T13:15:57.000Z | [
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | saattrupdan | null | saattrupdan/icebert-texas-squad-is | 1 | null | transformers | 30,231 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: icebert-texas-squad-is
results: []
widget:
- text: "Hvenær var Halldór Laxness í menntaskóla ?"
context: "Halldór Laxness ( Halldór Kiljan ) fæddist í Reykjavík 23. apríl árið 1902 og átti í fyrstu heima við Laugaveg en árið 1905 settist fjölskyldan að í Laxnesi í Mosfellssveit . Þar ólst Halldór upp en sótti skóla í Reykjavík á unglingsárum . Ungur hélt hann síðan utan og var langdvölum erlendis um árabil – í ýmsum Evrópulöndum og síðar í Ameríku . Þegar hann var heima bjó hann í Reykjavík þar til hann og kona hans , Auður Sveinsdóttir , byggðu sér húsið Gljúfrastein í Mosfellssveit og fluttu þangað árið 1945 . Þar var heimili þeirra alla tíð síðan og þar er nú safn til minningar um þau . Halldór lést 8. febrúar 1998 . Skólaganga Halldórs varð ekki löng . Árið 1918 hóf hann nám við Menntaskólann í Reykjavík en hafði lítinn tíma til að læra , enda var hann að skrifa skáldsögu , Barn náttúrunnar , sem kom út haustið 1919 – þá þegar var höfundurinn ungi farinn af landi brott . Sagan vakti þó nokkra athygli og í Alþýðublaðinu sagði m.a. : „ Og hver veit nema að Halldór frá Laxnesi eigi eftir að verða óskabarn íslensku þjóðarinnar . “ Upp frá þessu sendi Halldór frá sér bók nánast á hverju ári , stundum fleiri en eina , í yfir sex áratugi . Afköst hans voru með eindæmum ; hann skrifaði fjölda skáldsagna , sumar í nokkrum hlutum , leikrit , kvæði , smásagnasöfn og endurminningabækur og gaf auk þess út mörg greinasöfn og ritgerðir . Bækurnar eru fjölbreyttar en eiga það sameiginlegt að vera skrifaðar af einstakri stílgáfu , djúpum mannskilningi og víðtækri þekkingu á sögu og samfélagi . Þar birtast oft afgerandi skoðanir á þjóðfélagsmálum og sögupersónur eru margar einkar eftirminnilegar ; tilsvör þeirra og lunderni hafa orðið samofin þjóðarsálinni . Þekktustu verk Halldórs eru eflaust skáldsögurnar stóru og rismiklu , s.s. Salka Valka , Sjálfstætt fólk , Heimsljós , Íslandsklukkan og Gerpla , og raunar mætti telja upp mun fleiri ; Kvæðabók hans er í uppáhaldi hjá mörgum sem og minningabækurnar sem hann skrifaði á efri árum um æskuár sín ; af þekktum greinasöfnum og ritgerðum má nefna Alþýðubókina og Skáldatíma . Mikið hefur verið skrifað um verk og ævi skáldsins , en hér skal aðeins bent á ítarlega frásögn og greiningu Halldórs Guðmundssonar í bókinni Halldór Laxness – ævisaga ."
---
# TExAS-SQuAD-is
This model is a fine-tuned version of [IceBERT](https://huggingface.co/vesteinn/IceBERT) on the TExAS-SQuAD-is dataset.
It achieves the following results on the evaluation set:
- Exact match: xx.xx%
- F1-score: xx.xx%
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.5353 | 0.12 | 500 | 2.2356 |
| 2.364 | 0.24 | 1000 | 2.0607 |
| 2.2243 | 0.36 | 1500 | 2.0617 |
| 2.1403 | 0.49 | 2000 | 1.9934 |
| 2.1491 | 0.61 | 2500 | 2.0515 |
| 2.0604 | 0.73 | 3000 | 1.9602 |
| 2.0232 | 0.85 | 3500 | 1.8954 |
| 2.0905 | 0.97 | 4000 | 1.9474 |
| 1.9229 | 1.09 | 4500 | 1.9814 |
| 1.9162 | 1.22 | 5000 | 1.9053 |
| 1.8937 | 1.34 | 5500 | 1.9501 |
| 1.9085 | 1.46 | 6000 | 1.8882 |
| 1.8671 | 1.58 | 6500 | 1.8996 |
| 1.8997 | 1.7 | 7000 | 1.8340 |
| 1.8546 | 1.82 | 7500 | 1.8883 |
| 1.8935 | 1.95 | 8000 | 1.8567 |
| 1.7031 | 2.07 | 8500 | 1.9206 |
| 1.7699 | 2.19 | 9000 | 1.8790 |
| 1.7016 | 2.31 | 9500 | 1.8670 |
| 1.7744 | 2.43 | 10000 | 1.8951 |
| 1.7518 | 2.55 | 10500 | 1.9550 |
| 1.7503 | 2.68 | 11000 | 1.9120 |
| 1.7818 | 2.8 | 11500 | 1.8820 |
| 1.6955 | 2.92 | 12000 | 1.8908 |
### Framework versions
- Transformers 4.12.2
- Pytorch 1.8.1+cu101
- Datasets 1.12.1
- Tokenizers 0.10.3
|
saattrupdan/xlmr-base-texas-squad-fr | 3d4eaded28cec9d31aa6beb0d136697ee6c22821 | 2022-03-18T16:56:07.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | saattrupdan | null | saattrupdan/xlmr-base-texas-squad-fr | 1 | null | transformers | 30,232 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlmr-base-texas-squad-fr
results: []
widget:
- text: "Comment obtenir la coagulation?"
context: "La coagulation peut être obtenue soit par action d'une enzyme, la présure, soit par fermentation provoquée par des bactéries lactiques (le lactose est alors transformé en acide lactique), soit très fréquemment par combinaison des deux méthodes précédentes, soit par chauffage associé à une acidification directe (vinaigre…). On procède ensuite à l'égouttage. On obtient alors le caillé et le lactosérum. Le lactosérum peut aussi être utilisé directement : fromage de lactosérum comme le sérac, ou par réincorporation de ses composants."
---
# TExAS-SQuAD-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the TExAS-SQuAD-fr dataset.
It achieves the following results on the evaluation set:
- Exact match: xx.xx%
- F1-score: xx.xx%
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.1478 | 0.23 | 1000 | 1.8543 |
| 1.9827 | 0.46 | 2000 | 1.7643 |
| 1.8427 | 0.69 | 3000 | 1.6789 |
| 1.8372 | 0.92 | 4000 | 1.6137 |
| 1.7318 | 1.15 | 5000 | 1.6093 |
| 1.6603 | 1.38 | 6000 | 1.7157 |
| 1.6334 | 1.61 | 7000 | 1.6302 |
| 1.6716 | 1.84 | 8000 | 1.5845 |
| 1.5192 | 2.06 | 9000 | 1.6690 |
| 1.5174 | 2.29 | 10000 | 1.6669 |
| 1.4611 | 2.52 | 11000 | 1.6301 |
| 1.4648 | 2.75 | 12000 | 1.6009 |
| 1.5052 | 2.98 | 13000 | 1.6133 |
### Framework versions
- Transformers 4.12.2
- Pytorch 1.8.1+cu101
- Datasets 1.12.1
- Tokenizers 0.10.3
|
sachdevkartik/DialoGPT-small-rick | 0a14ed0f58b0d6937208a10ecc37edec3a42893e | 2021-10-20T20:14:09.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | sachdevkartik | null | sachdevkartik/DialoGPT-small-rick | 1 | null | transformers | 30,233 | ---
tags:
- conversational
---
# Rick and Morty DialoGPT Model |
saibo/random-albert-base-v2 | 4347f4131d6a2fdbc5f085dca70dd64077b550bb | 2021-07-18T18:33:22.000Z | [
"pytorch",
"tf",
"albert",
"feature-extraction",
"transformers"
] | feature-extraction | false | saibo | null | saibo/random-albert-base-v2 | 1 | null | transformers | 30,234 | # random-albert-base-v2
We introduce random-albert-base-v2, an unpretrained version of the ALBERT model. Its weights are randomly initialized, which is particularly useful when training a language model from scratch or benchmarking the effect of pretraining.
Note that the tokenizer of random-albert-base-v2 is the same as that of albert-base-v2: producing a random tokenizer is non-trivial, and it is less meaningful than randomizing the weights.
A debatable advantage of pulling random-albert-base-v2 from HuggingFace is that it avoids having to fix a random seed to reproduce the same initialization each time.
The code to obtain such a random model:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

def get_blank_model_from_hf(model_name="bert-base-cased"):
    # Load the pretrained model, then overwrite its backbone weights with a fresh initialization
    model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=5)
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model.base_model.init_weights()
    model_name = "random-" + model_name
    base_model = model.base_model
    return base_model, tokenizer, model_name
```
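A quick usage sketch for the helper above (output shown as a comment):
```python
# Randomly re-initialized ALBERT backbone, paired with the standard albert-base-v2 tokenizer
base_model, tokenizer, name = get_blank_model_from_hf("albert-base-v2")
print(name)  # random-albert-base-v2
```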
|
saibo/random-bert-base-cased | e883999974f1b35861b8e6c31b61f19c94ec9de2 | 2021-07-08T12:50:14.000Z | [
"pytorch",
"tf",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | saibo | null | saibo/random-bert-base-cased | 1 | null | transformers | 30,235 | Entry not found |
sail/poolformer_m36 | d8dbe79affcac9d5bbe80c702d327c8af09523ec | 2022-04-08T07:49:03.000Z | [
"pytorch",
"poolformer",
"image-classification",
"dataset:imagenet",
"arxiv:2111.11418",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | sail | null | sail/poolformer_m36 | 1 | null | transformers | 30,236 | ---
license: apache-2.0
tags:
- image-classification
- vision
datasets:
- imagenet
---
# PoolFormer (M36 model)
PoolFormer model trained on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu et al. and first released in [this repository](https://github.com/sail-sg/poolformer).
## Model description
PoolFormer is a model that replaces the attention token mixer in transformers with an extremely simple operator: pooling.
Transformers have shown great potential in computer vision tasks. A common belief is that their attention-based token mixer module contributes most to their competence. However, recent works show the attention-based module in transformers can be replaced by spatial MLPs and the resulting models still perform quite well. Based on this observation, we hypothesize that the general architecture of the transformers, instead of the specific token mixer module, is more essential to the model's performance. To verify this, we deliberately replace the attention module in transformers with an embarrassingly simple spatial pooling operator to conduct only the most basic token mixing. Surprisingly, we observe that the derived model, termed PoolFormer, achieves competitive performance on multiple computer vision tasks. For example, on ImageNet-1K, PoolFormer achieves 82.1% top-1 accuracy, surpassing well-tuned vision transformer/MLP-like baselines DeiT-B/ResMLP-B24 by 0.3%/1.1% accuracy with 35%/52% fewer parameters and 48%/60% fewer MACs. The effectiveness of PoolFormer verifies our hypothesis and urges us to initiate the concept of "MetaFormer", a general architecture abstracted from transformers without specifying the token mixer. Based on the extensive experiments, we argue that MetaFormer is the key player in achieving superior results for recent transformer and MLP-like models on vision tasks. This work calls for more future research dedicated to improving MetaFormer instead of focusing on the token mixer modules. Additionally, our proposed PoolFormer could serve as a starting baseline for future MetaFormer architecture design.
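To make the token mixer concrete, here is a minimal PyTorch sketch of the pooling operator, mirroring the structure used in the official repository (treat it as an illustration rather than the exact module):
```python
import torch
import torch.nn as nn

class PoolingTokenMixer(nn.Module):
    """Replaces self-attention: each token is mixed with its spatial neighbors via average pooling.
    Subtracting the input keeps only the mixing residual, since the surrounding block adds a skip connection."""
    def __init__(self, pool_size=3):
        super().__init__()
        self.pool = nn.AvgPool2d(pool_size, stride=1, padding=pool_size // 2, count_include_pad=False)

    def forward(self, x):  # x: (batch, channels, height, width)
        return self.pool(x) - x

mixer = PoolingTokenMixer()
tokens = torch.randn(1, 64, 14, 14)
print(mixer(tokens).shape)  # torch.Size([1, 64, 14, 14])
```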
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=sail/poolformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import PoolFormerFeatureExtractor, PoolFormerForImageClassification
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = PoolFormerFeatureExtractor.from_pretrained('sail/poolformer_m36')
model = PoolFormerForImageClassification.from_pretrained('sail/poolformer_m36')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Currently, both the feature extractor and model support PyTorch.
## Training data
The poolformer model was trained on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/sail-sg/poolformer/blob/main/train.py#L529-L572).
### Pretraining
The model was trained on TPU-v3s. Training resolution is 224. For all hyperparameters (such as batch size and learning rate), please refer to the original paper.
## Evaluation results
| Model | ImageNet top-1 accuracy | # params | URL |
|---------------------------------------|-------------------------|----------|------------------------------------------------------------------|
| PoolFormer-S12 | 77.2 | 12M | https://huggingface.co/sail/poolformer_s12 |
| PoolFormer-S24 | 80.3 | 21M | https://huggingface.co/sail/poolformer_s24 |
| PoolFormer-S36 | 81.4 | 31M | https://huggingface.co/sail/poolformer_s36 |
| **PoolFormer-M36** | **82.1** | **56M** | **https://huggingface.co/sail/poolformer_m36** |
| PoolFormer-M48 | 82.5 | 73M | https://huggingface.co/sail/poolformer_m48 |
### BibTeX entry and citation info
```bibtex
@article{yu2021metaformer,
title={MetaFormer is Actually What You Need for Vision},
author={Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng},
journal={arXiv preprint arXiv:2111.11418},
year={2021}
}
``` |
sakai026/Mizuhara | 8d3d6eca7f9641efad53eb48cfd4aa67365b1143 | 2022-02-08T16:56:04.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | sakai026 | null | sakai026/Mizuhara | 1 | null | transformers | 30,237 | ---
tags:
- conversational
---
# Mizuhara Chizuru bot |
sakharok/lapka | 5d24dd5e4c2173d668b4d5775a34aad723fac09d | 2021-11-16T11:12:36.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | sakharok | null | sakharok/lapka | 1 | null | transformers | 30,238 | Entry not found |
sam213/DialoGPT-small-harrypotter | 73ec39dfbc1c07d3369ddd195fe1c53afc2625ca | 2021-11-25T13:11:54.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | sam213 | null | sam213/DialoGPT-small-harrypotter | 1 | null | transformers | 30,239 | ---
tags:
- conversational
---
# Harry Potter DialoGPT model |
samantharhay/wav2vec2-base-myst-demo-colab | c3f26ddf868463479c07deaaab79bb7796e8c1d1 | 2021-11-22T18:15:21.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | samantharhay | null | samantharhay/wav2vec2-base-myst-demo-colab | 1 | null | transformers | 30,240 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
name: wav2vec2-base-myst-demo-colab
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-myst-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.3125
- eval_wer: 0.3139
- eval_runtime: 57.3226
- eval_samples_per_second: 9.996
- eval_steps_per_second: 1.256
- epoch: 18.68
- step: 17000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
sammy786/wav2vec2-xlsr-Basaa | 8326b66e9bf276f93f862f575e4bd70b3b7be395 | 2022-03-24T11:54:12.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"bas",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | sammy786 | null | sammy786/wav2vec2-xlsr-Basaa | 1 | null | transformers | 30,241 | ---
language:
- bas
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- bas
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: sammy786/wav2vec2-xlsr-basaa
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: bas
metrics:
- name: Test WER
type: wer
value: 41.23
- name: Test CER
type: cer
value: 13.54
---
# sammy786/wav2vec2-xlsr-basaa
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - bas dataset.
It achieves the following results on the evaluation set (10 percent of the train split merged with the other and dev splits):
- Loss: 21.39
- Wer: 30.99
## Model description
"facebook/wav2vec2-xls-r-1b" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data:
Common Voice Basaa train.tsv, dev.tsv and other.tsv
## Training procedure
To create the training dataset, all available splits were concatenated and a 90-10 train-validation split was applied.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000045637994662983496
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 70
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|------|---------------|-----------------|----------|
| 200 | 6.734100 | 1.605006 | 0.980456 |
| 400 | 1.011200 | 0.364686 | 0.442997 |
| 600 | 0.709300 | 0.300204 | 0.377850 |
| 800 | 0.469800 | 0.315612 | 0.405537 |
| 1000 | 0.464700 | 0.352494 | 0.372964 |
| 1200 | 0.421900 | 0.342533 | 0.368078 |
| 1400 | 0.401900 | 0.351398 | 0.343648 |
| 1600 | 0.429800 | 0.350570 | 0.348534 |
| 1800 | 0.352600 | 0.356601 | 0.358306 |
| 2000 | 0.387200 | 0.355814 | 0.356678 |
| 2200 | 0.362400 | 0.345573 | 0.355049 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
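#### Usage
A minimal inference sketch (added for convenience; not part of the original card). `sample.wav` is a placeholder for any 16 kHz mono Basaa recording:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="sammy786/wav2vec2-xlsr-basaa")
print(asr("sample.wav")["text"])
```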
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id sammy786/wav2vec2-xlsr-basaa --dataset mozilla-foundation/common_voice_8_0 --config bas --split test
``` |
sammy786/wav2vec2-xlsr-estonian | 5632b09e7541f463292eebcaced783a4c7ebc643 | 2022-03-24T11:56:08.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"et",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | sammy786 | null | sammy786/wav2vec2-xlsr-estonian | 1 | null | transformers | 30,242 | ---
language:
- et
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- et
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: sammy786/wav2vec2-xlsr-estonian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: et
metrics:
- name: Test WER
type: wer
value: 23.61
- name: Test CER
type: cer
value: 4.6
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: et
metrics:
- name: Test WER
type: wer
value: 61.83
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: et
metrics:
- name: Test WER
type: wer
value: 67.43
---
# sammy786/wav2vec2-xlsr-estonian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - et dataset.
It achieves the following results on the evaluation set (10 percent of the train split merged with the other and dev splits):
- Loss: 17.94
- Wer: 30.38
## Model description
"facebook/wav2vec2-xls-r-1b" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data:
Common Voice Estonian train.tsv, dev.tsv and other.tsv
## Training procedure
To create the training dataset, all available splits were concatenated and a 90-10 train-validation split was applied.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000045637994662983496
- train_batch_size: 8
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|------|---------------|-----------------|----------|
| 200 | 3.729100 | 1.096018 | 0.959867 |
| 400 | 0.996900 | 0.310228 | 0.443600 |
| 600 | 0.762900 | 0.210873 | 0.346117 |
| 800 | 0.621400 | 0.200381 | 0.331513 |
| 1000 | 0.408000 | 0.196382 | 0.322014 |
| 1200 | 0.320200 | 0.176281 | 0.312515 |
| 1400 | 0.315300 | 0.179433 | 0.303847 |
| 1600 | 0.445800 | 0.420985 | 0.315839 |
| 1800 | 0.644600 | 0.433833 | 0.354904 |
| 2000 | 0.550900 | 0.327117 | 0.336500 |
| 2200 | 0.498600 | 0.289830 | 0.325457 |
| 2400 | 0.488300 | 0.294309 | 0.314177 |
| 2600 | 0.491700 | 0.311175 | 0.318689 |
| 2800 | 0.508500 | 0.314744 | 0.320470 |
| 3000 | 0.499900 | 0.314834 | 0.320589 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
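#### Usage
A lower-level inference sketch (added; not from the original card), assuming `sample.wav` is a placeholder path to a mono recording:
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("sammy786/wav2vec2-xlsr-estonian")
model = Wav2Vec2ForCTC.from_pretrained("sammy786/wav2vec2-xlsr-estonian")

speech, sr = torchaudio.load("sample.wav")
speech = torchaudio.functional.resample(speech, sr, 16_000).squeeze()
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```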
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id sammy786/wav2vec2-xlsr-estonian --dataset mozilla-foundation/common_voice_8_0 --config et --split test
``` |
sammy786/wav2vec2-xlsr-kyrgyz | 1a6f3fd9bed2e342576e0d2f4ffe4e1d7b9a0843 | 2022-03-24T11:58:41.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ky",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | sammy786 | null | sammy786/wav2vec2-xlsr-kyrgyz | 1 | null | transformers | 30,243 | ---
language:
- ky
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- ky
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: sammy786/wav2vec2-xlsr-kyrgyz
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: ky
metrics:
- name: Test WER
type: wer
value: 25.24
- name: Test CER
type: cer
value: 6.25
---
# sammy786/wav2vec2-xlsr-kyrgyz
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - ky dataset.
It achieves the following results on the evaluation set (10 percent of the train split merged with the other and dev splits):
- Loss: 43.06
- Wer: 39.19
## Model description
"facebook/wav2vec2-xls-r-1b" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data:
Common Voice Kyrgyz train.tsv, dev.tsv and other.tsv
## Training procedure
To create the training dataset, all available splits were concatenated and a 90-10 train-validation split was applied.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000045637994662983496
- train_batch_size: 8
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|------|---------------|-----------------|----------|
| 200 | 5.357800 | 2.700367 | 1.000000 |
| 400 | 1.513600 | 0.642542 | 0.598820 |
| 600 | 0.961900 | 0.530665 | 0.502739 |
| 800 | 0.776000 | 0.507709 | 0.462705 |
| 1000 | 0.646100 | 0.453115 | 0.444164 |
| 1200 | 0.581200 | 0.454797 | 0.438264 |
| 1400 | 0.437900 | 0.459389 | 0.426464 |
| 1600 | 0.348600 | 0.401247 | 0.416351 |
| 1800 | 0.312800 | 0.436135 | 0.409608 |
| 2000 | 0.294100 | 0.440911 | 0.398651 |
| 2200 | 0.281400 | 0.432729 | 0.394016 |
| 2400 | 0.258400 | 0.429860 | 0.393595 |
| 2600 | 0.263700 | 0.432689 | 0.395280 |
| 2800 | 0.256900 | 0.430672 | 0.391909 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id sammy786/wav2vec2-xlsr-kyrgyz --dataset mozilla-foundation/common_voice_8_0 --config ky --split test
``` |
sana-ngu/HaT5 | 040a358618e63a31b4d1683d847354850c152394 | 2022-05-20T16:53:35.000Z | [
"pytorch",
"t5",
"text2text-generation",
"arxiv:2202.05690",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sana-ngu | null | sana-ngu/HaT5 | 1 | null | transformers | 30,244 | ### HaT5 (T5-base)
This is a fine-tuned T5 (base) model for hate speech detection. It is intended to be used as a classification model for labelling tweets (0 - HOF, i.e. hate/offensive; 1 - NOT). The task prefix used for the T5 model is 'classification: '.
More information about the original pre-trained model can be found [here](https://huggingface.co/t5-base)
Classification examples:
|Prediction|Tweet|
|-----|--------|
|0 |Why the fuck I got over 1000 views on my story 😂😂 nothing new over here |
|1 |first of all there is no vaccine to cure , whthr it is capsules, tablets, or injections, they just support to fight with d virus. I do not support people taking any kind of home remedies n making fun of an ayurvedic medicine..😐 |
# More Details
For more details about the datasets and eval results, see [our paper for this work here](https://arxiv.org/abs/2202.05690)
The paper was accepted at the International Joint Conference on Neural Networks (IJCNN) 2022.
# How to use
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer
import torch
model = T5ForConditionalGeneration.from_pretrained("sana-ngu/HaT5")
tokenizer = T5Tokenizer.from_pretrained("t5-base")
tokenizer.pad_token = tokenizer.eos_token
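# Note: per the card, fine-tuning used the task prefix 'classification: ', so prepending it to the input text may be required for best results.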
input_ids = tokenizer("Old lions in the wild lay down and die with dignity when they can't hunt anymore. If a government is having 'teething problems' handling aid supplies one full year into a pandemic, maybe it should take a cue and get the fuck out of the way? ", padding=True, truncation=True, return_tensors='pt').input_ids
outputs = model.generate(input_ids)
pred = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(pred)
```
|
santhoshkolloju/t5_qg_model_with_answer2 | 08976aaa3aaddd5c2203bb8ff3874b173a49d924 | 2021-06-23T14:08:52.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | santhoshkolloju | null | santhoshkolloju/t5_qg_model_with_answer2 | 1 | null | transformers | 30,245 | Entry not found |
santhoshkolloju/t5_qg_multi2 | 612f9d9bd7c970af8daec596ff5ab1140b6df6e0 | 2020-07-05T11:13:54.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | santhoshkolloju | null | santhoshkolloju/t5_qg_multi2 | 1 | null | transformers | 30,246 | Entry not found |
saraks/cuad-distil-agreement_date-08-25 | 92b7b3ea9fdbf178bbace3d02b1c7a93bacc4612 | 2021-08-25T10:36:00.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | saraks | null | saraks/cuad-distil-agreement_date-08-25 | 1 | null | transformers | 30,247 | Entry not found |
saraks/cuad-distil-agreement_date-08-31-v1 | f866a149f696347c430770f542b37e66edb8a7c8 | 2021-08-31T07:17:51.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | saraks | null | saraks/cuad-distil-agreement_date-08-31-v1 | 1 | null | transformers | 30,248 | Entry not found |
saraks/cuad-distil-effective_date-08-31-v1 | 45ed4e371f034918bb5787017cb011bad5ee73b1 | 2021-08-31T06:55:45.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | saraks | null | saraks/cuad-distil-effective_date-08-31-v1 | 1 | null | transformers | 30,249 | Entry not found |
saraks/cuad-distil-multi_fields-08-29-v1 | 403caf349b9b2e6230850eb90b40e43f7d70bd01 | 2021-08-29T05:09:46.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | saraks | null | saraks/cuad-distil-multi_fields-08-29-v1 | 1 | 2 | transformers | 30,250 | Entry not found |
saraks/cuad-distil-parties-dates-law-08-18-id-question2 | bab00b8b33373d3f7e4f0d6e13ff077471df4582 | 2021-08-18T17:50:29.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | saraks | null | saraks/cuad-distil-parties-dates-law-08-18-id-question2 | 1 | null | transformers | 30,251 | Entry not found |
saraks/cuad-distil-parties-dates-law-08-18 | 1b70e1425df24077ad0bc99e6ddf0e73bef7a478 | 2021-08-18T15:11:47.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | saraks | null | saraks/cuad-distil-parties-dates-law-08-18 | 1 | null | transformers | 30,252 | Entry not found |
sardinaerum/mt5 | 8c51e6ffd1c617a726645c7cda7a54f326550cfd | 2022-02-10T09:24:30.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sardinaerum | null | sardinaerum/mt5 | 1 | null | transformers | 30,253 | Entry not found |
sbiswal/odia-bert-classifier | 518f78f3643441d69c886a0f2e41ed9e8fe98916 | 2021-05-20T05:06:25.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | sbiswal | null | sbiswal/odia-bert-classifier | 1 | null | transformers | 30,254 | Entry not found |
seantyh/CxLM | fe516648310f2f5a7b8766b4a133d6d7b6cc7665 | 2022-01-08T11:23:01.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | seantyh | null | seantyh/CxLM | 1 | null | transformers | 30,255 | Entry not found |
sebastiaan/sentence-BERT-combined | 0a3beb11dcc3546788410da91efd7e8420d24e92 | 2021-12-17T12:56:43.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | sebastiaan | null | sebastiaan/sentence-BERT-combined | 1 | null | transformers | 30,256 | Entry not found |
seccily/wav2vec-lt-lite | c66faefa3b04ad2f016410567c8178544f531eaf | 2021-04-06T05:40:27.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"lt",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | seccily | null | seccily/wav2vec-lt-lite | 1 | null | transformers | 30,257 | ---
language: lt
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Lithuanian by Seçilay KUTAL
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice lt
type: common_voice
args: lt
metrics:
- name: Test WER
type: wer
---
# wav2vec-lt-lite
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "lt", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("seccily/wav2vec-lt-lite")
model = Wav2Vec2ForCTC.from_pretrained("seccily/wav2vec-lt-lite")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# The original card stops here; the lines below complete the standard XLSR inference recipe (an added sketch).
def speech_file_to_array_fn(batch):
    batch["speech"] = resampler(torchaudio.load(batch["path"])[0]).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
print("Prediction:", processor.batch_decode(logits.argmax(dim=-1)))
```
Test Result (WER): 59.47 |
seduerr/fuser | 737620f6c3834fb7afb51841256df21b6329f712 | 2021-06-02T14:56:42.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | seduerr | null | seduerr/fuser | 1 | null | transformers | 30,258 | Entry not found |
seduerr/pai_ei | 591151505615a113f297e5f7735dbbb06d49c69c | 2021-06-22T08:45:38.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | seduerr | null | seduerr/pai_ei | 1 | null | transformers | 30,259 | Entry not found |
seduerr/pai_infi | 49d45ac27ee3f9413f3333b2adbbb85c295f5d91 | 2021-05-23T12:49:38.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | seduerr | null | seduerr/pai_infi | 1 | null | transformers | 30,260 | Entry not found |
seduerr/pai_splitter_short | 1951495dd41c2c4187407b7a764444b712a66d0c | 2021-05-09T20:32:03.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | seduerr | null | seduerr/pai_splitter_short | 1 | null | transformers | 30,261 | Entry not found |
seduerr/pai_subject | b482f8fdf597430e547fa3f0bee8f0b0cd3914f2 | 2021-06-09T10:20:17.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | seduerr | null | seduerr/pai_subject | 1 | null | transformers | 30,262 | Entry not found |
sergiyvl/just_first_try_to_my_diplom_onBert_10epoch | 6abb7982e2a3379f293da4bd140732a483c8a19a | 2021-05-20T05:38:45.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | sergiyvl | null | sergiyvl/just_first_try_to_my_diplom_onBert_10epoch | 1 | null | transformers | 30,263 | Entry not found |
sergiyvl/just_first_try_to_my_diplom_onBert_minea_2epoch | 4d359c5bb98eb8aef440a99c695a8210714f8d04 | 2021-05-20T05:39:53.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | sergiyvl | null | sergiyvl/just_first_try_to_my_diplom_onBert_minea_2epoch | 1 | null | transformers | 30,264 | Entry not found |
sergiyvl/model_65000_20ep | dd49dbbf6df9dbfa996b6c61f0d7250e6702ef84 | 2021-05-20T05:41:04.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | sergiyvl | null | sergiyvl/model_65000_20ep | 1 | null | transformers | 30,265 | Entry not found |
severinsimmler/bert-adapted-german-press | 0a1e53d77e98f2d4e46360b5ed3d791ac90e11d6 | 2021-05-20T05:44:48.000Z | [
"pytorch",
"jax",
"bert",
"transformers"
] | null | false | severinsimmler | null | severinsimmler/bert-adapted-german-press | 1 | null | transformers | 30,266 | Entry not found |
seyfullah/dummy-model | a322a025a1d2cbb656dca1986d147530eecce732 | 2021-07-12T16:59:02.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | seyfullah | null | seyfullah/dummy-model | 1 | null | transformers | 30,267 | Entry not found |
seyonec/BPE_SELFIES_PubChem_shard00_120k | 2a41924a40dd940465008bf81e965c208a9a0f96 | 2021-05-20T20:44:11.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | seyonec | null | seyonec/BPE_SELFIES_PubChem_shard00_120k | 1 | null | transformers | 30,268 | Entry not found |
seyonec/ChemBERTA_PubChem1M_shard00_115k | 490f098ed4077a874625aef90bacb3494fb3d4d6 | 2021-05-20T20:51:44.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | seyonec | null | seyonec/ChemBERTA_PubChem1M_shard00_115k | 1 | null | transformers | 30,269 | Entry not found |
seyonec/PubChem10M_SMILES_BPE_120k | 0604ff11516ea03ea790ba36c6baef8846b20de5 | 2021-05-20T20:58:35.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | seyonec | null | seyonec/PubChem10M_SMILES_BPE_120k | 1 | null | transformers | 30,270 | Entry not found |
seyonec/PubChem10M_SMILES_BPE_180k | d753fc53e3b5ae376199f900485bdac78f2a402e | 2021-05-20T20:59:23.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | seyonec | null | seyonec/PubChem10M_SMILES_BPE_180k | 1 | null | transformers | 30,271 | Entry not found |
seyonec/PubChem10M_SMILES_BPE_240k | a5ca04e0b1cee48f89eb282586c6435b6df7fb81 | 2021-05-20T21:00:08.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | seyonec | null | seyonec/PubChem10M_SMILES_BPE_240k | 1 | null | transformers | 30,272 | Entry not found |
seyonec/PubChem10M_SMILES_BPE_60k | 067d49c4b502ebcefc6205601d5992ea90a8f705 | 2021-05-20T21:04:12.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | seyonec | null | seyonec/PubChem10M_SMILES_BPE_60k | 1 | null | transformers | 30,273 | Entry not found |
seyonec/SMILES_tokenized_PubChem_shard00_100k | 271fa63286458fadfe2466308bd609f14571ed52 | 2021-05-20T21:06:51.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | seyonec | null | seyonec/SMILES_tokenized_PubChem_shard00_100k | 1 | null | transformers | 30,274 | Entry not found |
seyonec/SMILES_tokenized_PubChem_shard00_150k | 70ede323955e42f3745e732f53a2a0c567942f8a | 2021-05-20T21:07:44.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | seyonec | null | seyonec/SMILES_tokenized_PubChem_shard00_150k | 1 | null | transformers | 30,275 | Entry not found |
seyonec/SMILES_tokenized_PubChem_shard00_40k | 24d64e652bcbc361d0b062c99a8b63817c425be5 | 2021-05-20T21:09:40.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | seyonec | null | seyonec/SMILES_tokenized_PubChem_shard00_40k | 1 | null | transformers | 30,276 | Entry not found |
seyonec/checkpoint-50000 | b0f3f3897851c9f08129a003203e8069a2a73ba7 | 2021-05-20T21:12:19.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | seyonec | null | seyonec/checkpoint-50000 | 1 | null | transformers | 30,277 | Entry not found |
shacharm/wav2vec2-large-xls-r-300m-english-colab | 82357556ddbe88c4f64f6d3fb65dbf10f4861b3e | 2022-02-05T11:59:13.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | shacharm | null | shacharm/wav2vec2-large-xls-r-300m-english-colab | 1 | null | transformers | 30,278 | Entry not found |
shahukareem/xls-r-300m-dv | 7a61c488dd8f3c22c41bac962266d7fc00f3ef0c | 2022-03-23T18:34:14.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dv",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | shahukareem | null | shahukareem/xls-r-300m-dv | 1 | null | transformers | 30,279 | ---
language:
- dv
license: apache-2.0
tags:
- automatic-speech-recognition
- dv
- generated_from_trainer
- hf-asr-leaderboard
- model_for_talk
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: XLS-R-300M - Dhivehi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: dv
metrics:
- name: Test WER
type: wer
value: 21.31
- name: Test CER
type: cer
value: 3.82
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-300m-dv
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2855
- Wer: 0.2665
## Model description
More information needed
## Intended uses & limitations
More information needed
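In the absence of documented usage, a minimal inference sketch (added here; not part of the original card). `dhivehi_sample.wav` is a placeholder for a 16 kHz mono recording:
```python
from transformers import pipeline

# chunk_length_s lets the pipeline handle recordings longer than the model window
transcriber = pipeline("automatic-speech-recognition", model="shahukareem/xls-r-300m-dv", chunk_length_s=30)
print(transcriber("dhivehi_sample.wav")["text"])
```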
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.3386 | 0.66 | 400 | 1.1411 | 0.9432 |
| 0.6543 | 1.33 | 800 | 0.5099 | 0.6749 |
| 0.4646 | 1.99 | 1200 | 0.4133 | 0.5968 |
| 0.3748 | 2.65 | 1600 | 0.3534 | 0.5515 |
| 0.3323 | 3.32 | 2000 | 0.3635 | 0.5527 |
| 0.3269 | 3.98 | 2400 | 0.3587 | 0.5423 |
| 0.2984 | 4.64 | 2800 | 0.3340 | 0.5073 |
| 0.2841 | 5.31 | 3200 | 0.3279 | 0.5004 |
| 0.2664 | 5.97 | 3600 | 0.3114 | 0.4845 |
| 0.2397 | 6.63 | 4000 | 0.3174 | 0.4920 |
| 0.2332 | 7.3 | 4400 | 0.3110 | 0.4911 |
| 0.2304 | 7.96 | 4800 | 0.3123 | 0.4785 |
| 0.2134 | 8.62 | 5200 | 0.2984 | 0.4557 |
| 0.2066 | 9.29 | 5600 | 0.3013 | 0.4723 |
| 0.1951 | 9.95 | 6000 | 0.2934 | 0.4487 |
| 0.1806 | 10.61 | 6400 | 0.2802 | 0.4547 |
| 0.1727 | 11.28 | 6800 | 0.2842 | 0.4333 |
| 0.1666 | 11.94 | 7200 | 0.2873 | 0.4272 |
| 0.1562 | 12.6 | 7600 | 0.3042 | 0.4373 |
| 0.1483 | 13.27 | 8000 | 0.3122 | 0.4313 |
| 0.1465 | 13.93 | 8400 | 0.2760 | 0.4226 |
| 0.1335 | 14.59 | 8800 | 0.3112 | 0.4243 |
| 0.1293 | 15.26 | 9200 | 0.3002 | 0.4133 |
| 0.1264 | 15.92 | 9600 | 0.2985 | 0.4145 |
| 0.1179 | 16.58 | 10000 | 0.2925 | 0.4012 |
| 0.1171 | 17.25 | 10400 | 0.3127 | 0.4012 |
| 0.1141 | 17.91 | 10800 | 0.2980 | 0.3908 |
| 0.108 | 18.57 | 11200 | 0.3108 | 0.3951 |
| 0.1045 | 19.24 | 11600 | 0.3269 | 0.3908 |
| 0.1047 | 19.9 | 12000 | 0.2998 | 0.3868 |
| 0.0937 | 20.56 | 12400 | 0.2918 | 0.3875 |
| 0.0949 | 21.23 | 12800 | 0.2906 | 0.3657 |
| 0.0879 | 21.89 | 13200 | 0.2974 | 0.3731 |
| 0.0854 | 22.55 | 13600 | 0.2943 | 0.3711 |
| 0.0851 | 23.22 | 14000 | 0.2919 | 0.3580 |
| 0.0789 | 23.88 | 14400 | 0.2983 | 0.3560 |
| 0.0796 | 24.54 | 14800 | 0.3131 | 0.3544 |
| 0.0761 | 25.21 | 15200 | 0.2996 | 0.3616 |
| 0.0755 | 25.87 | 15600 | 0.2972 | 0.3506 |
| 0.0726 | 26.53 | 16000 | 0.2902 | 0.3474 |
| 0.0707 | 27.2 | 16400 | 0.3083 | 0.3480 |
| 0.0669 | 27.86 | 16800 | 0.3035 | 0.3330 |
| 0.0637 | 28.52 | 17200 | 0.2963 | 0.3370 |
| 0.0596 | 29.19 | 17600 | 0.2830 | 0.3326 |
| 0.0583 | 29.85 | 18000 | 0.2969 | 0.3287 |
| 0.0566 | 30.51 | 18400 | 0.3002 | 0.3480 |
| 0.0574 | 31.18 | 18800 | 0.2916 | 0.3296 |
| 0.0536 | 31.84 | 19200 | 0.2933 | 0.3225 |
| 0.0548 | 32.5 | 19600 | 0.2900 | 0.3179 |
| 0.0506 | 33.17 | 20000 | 0.3073 | 0.3225 |
| 0.0511 | 33.83 | 20400 | 0.2925 | 0.3275 |
| 0.0483 | 34.49 | 20800 | 0.2919 | 0.3245 |
| 0.0456 | 35.16 | 21200 | 0.2859 | 0.3105 |
| 0.0445 | 35.82 | 21600 | 0.2864 | 0.3080 |
| 0.0437 | 36.48 | 22000 | 0.2989 | 0.3084 |
| 0.04 | 37.15 | 22400 | 0.2887 | 0.3060 |
| 0.0406 | 37.81 | 22800 | 0.2870 | 0.3013 |
| 0.0397 | 38.47 | 23200 | 0.2793 | 0.3020 |
| 0.0383 | 39.14 | 23600 | 0.2955 | 0.2943 |
| 0.0345 | 39.8 | 24000 | 0.2813 | 0.2905 |
| 0.0331 | 40.46 | 24400 | 0.2845 | 0.2845 |
| 0.0338 | 41.13 | 24800 | 0.2832 | 0.2925 |
| 0.0333 | 41.79 | 25200 | 0.2889 | 0.2849 |
| 0.0325 | 42.45 | 25600 | 0.2808 | 0.2847 |
| 0.0314 | 43.12 | 26000 | 0.2867 | 0.2801 |
| 0.0288 | 43.78 | 26400 | 0.2865 | 0.2834 |
| 0.0291 | 44.44 | 26800 | 0.2863 | 0.2806 |
| 0.0269 | 45.11 | 27200 | 0.2941 | 0.2736 |
| 0.0275 | 45.77 | 27600 | 0.2897 | 0.2736 |
| 0.0271 | 46.43 | 28000 | 0.2857 | 0.2695 |
| 0.0251 | 47.1 | 28400 | 0.2881 | 0.2702 |
| 0.0243 | 47.76 | 28800 | 0.2901 | 0.2684 |
| 0.0244 | 48.42 | 29200 | 0.2849 | 0.2679 |
| 0.0232 | 49.09 | 29600 | 0.2849 | 0.2677 |
| 0.0224 | 49.75 | 30000 | 0.2855 | 0.2665 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
shaina/covid_qa_distillBert | d892b228dc4efc5854ffae894407ce6758bfe751 | 2022-01-06T15:41:08.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:covid_qa_deepset",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | shaina | null | shaina/covid_qa_distillBert | 1 | null | transformers | 30,280 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- covid_qa_deepset
widget:
- text: "What is COVID-19?"
context: "Coronavirus disease 2019 (COVID-19) is a contagious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The first known case was identified in Wuhan, China, in December 2019.[7] The disease has since spread worldwide, leading to an ongoing pandemic."
- text: "Where was COVID-19 first discovered?"
context: "The first known infections from SARS-CoV-2 were discovered in Wuhan, China. The original source of viral transmission to humans remains unclear, as does whether the virus became pathogenic before or after the spillover event."
- text: "What is Post-COVID syndrome?"
context: "Long COVID, also known as post-COVID-19 syndrome, post-acute sequelae of COVID-19 (PASC), or chronic COVID syndrome (CCS) is a condition characterized by long-term sequelae appearing or persisting after the typical convalescence period of COVID-19. Long COVID can affect nearly every organ system, with sequelae including respiratory system disorders, nervous system and neurocognitive disorders, mental health disorders, metabolic disorders, cardiovascular disorders, gastrointestinal disorders, malaise, fatigue, musculoskeletal pain, and anemia. A wide range of symptoms are commonly reported, including fatigue, headaches, shortness of breath, anosmia (loss of smell), parosmia (distorted smell), muscle weakness, low fever and cognitive dysfunction."
model-index:
- name: CoQUAD_DistilBERT_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# covid_qa_distillBert
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the covid_qa_deepset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0971
## Model description
More information needed
## Intended uses & limitations
More information needed
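In lieu of documented usage, here is a minimal extractive-QA sketch based on the widget examples above (the question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="shaina/covid_qa_distillBert")
result = qa(
    question="Where was COVID-19 first discovered?",
    context="The first known infections from SARS-CoV-2 were discovered in Wuhan, China.",
)
print(result["answer"], result["score"])
```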
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.2537 | 1.0 | 3880 | 0.1871 |
| 0.2005 | 2.0 | 7760 | 0.1257 |
| 0.1395 | 3.0 | 11640 | 0.0971 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
shaina/covid_qa_mpnet | ddb1b7263ac1b28fe2bd0bed48b103bc1c97e636 | 2022-02-02T14:33:18.000Z | [
"pytorch",
"tensorboard",
"mpnet",
"question-answering",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | question-answering | false | shaina | null | shaina/covid_qa_mpnet | 1 | null | transformers | 30,281 | ---
tags:
- generated_from_trainer
widget:
- text: "What is COVID-19?"
context: "Coronavirus disease 2019 (COVID-19) is a contagious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The first known case was identified in Wuhan, China, in December 2019.[7] The disease has since spread worldwide, leading to an ongoing pandemic."
- text: "Where was COVID-19 first discovered?"
context: "The first known infections from SARS-CoV-2 were discovered in Wuhan, China. The original source of viral transmission to humans remains unclear, as does whether the virus became pathogenic before or after the spillover event."
- text: "What is Post-COVID syndrome?"
context: "Long COVID, also known as post-COVID-19 syndrome, post-acute sequelae of COVID-19 (PASC), or chronic COVID syndrome (CCS) is a condition characterized by long-term sequelae appearing or persisting after the typical convalescence period of COVID-19. Long COVID can affect nearly every organ system, with sequelae including respiratory system disorders, nervous system and neurocognitive disorders, mental health disorders, metabolic disorders, cardiovascular disorders, gastrointestinal disorders, malaise, fatigue, musculoskeletal pain, and anemia. A wide range of symptoms are commonly reported, including fatigue, headaches, shortness of breath, anosmia (loss of smell), parosmia (distorted smell), muscle weakness, low fever and cognitive dysfunction."
---
# covid_qa_mpnet
This model is a fine-tuned version of [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) on our COVID-19 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1352
## Model description
More information needed
## Intended uses & limitations
More information needed
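For reference, a lower-level extractive-QA sketch (added; the span post-processing is deliberately simplified and the texts are illustrative):
```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("shaina/covid_qa_mpnet")
model = AutoModelForQuestionAnswering.from_pretrained("shaina/covid_qa_mpnet")

question = "What is Post-COVID syndrome?"
context = "Long COVID, also known as post-COVID-19 syndrome, is a condition characterized by long-term sequelae persisting after convalescence from COVID-19."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```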
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2477 | 1.0 | 3895 | 0.1869 |
| 0.1838 | 2.0 | 7790 | 0.1352 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
shashank2123/t5-base-fine-tuned-for-Punctuation-Restoration | 66639f6e58e23592ed8d599d8ccb4a69df581502 | 2021-09-13T14:42:51.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | shashank2123 | null | shashank2123/t5-base-fine-tuned-for-Punctuation-Restoration | 1 | 1 | transformers | 30,282 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-base-fine-tuned-for-Punctuation-Restoration
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-fine-tuned-for-Punctuation-Restoration
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1097
## Model description
More information needed
## Intended uses & limitations
More information needed
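In lieu of documented usage, a hedged sketch follows; the exact input format (including any task prefix) used during fine-tuning is not stated in this card, so this is only a plausible invocation:
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "shashank2123/t5-base-fine-tuned-for-Punctuation-Restoration"
model = T5ForConditionalGeneration.from_pretrained(model_name)
tokenizer = T5Tokenizer.from_pretrained(model_name)

text = "hello how are you i am fine thank you"  # unpunctuated input (illustrative)
input_ids = tokenizer(text, return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```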
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1796 | 1.0 | 1431 | 0.1097 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
shibli/wav2vec2-large-xls-r-300m-pun-colab | d13f57a036ddbef66443c833385ce1ddc162b3e9 | 2022-02-22T18:51:07.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | shibli | null | shibli/wav2vec2-large-xls-r-300m-pun-colab | 1 | null | transformers | 30,283 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-pun-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-pun-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
shields/wav2vec2-base-dementiabank | 300e1183477c829246f5cde34e5156857c4ca8a8 | 2022-02-08T02:53:13.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | shields | null | shields/wav2vec2-base-dementiabank | 1 | null | transformers | 30,284 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-dementiabank
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-dementiabank
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 11.0473
- eval_wer: 1.0
- eval_runtime: 3.3353
- eval_samples_per_second: 2.399
- eval_steps_per_second: 0.3
- epoch: 3.12
- step: 200
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.5
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
shields/wav2vec2-xl-960h-dementiabank | e7d4fabf2abb2408456754a1b38283c229652a4f | 2022-01-21T06:00:54.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | shields | null | shields/wav2vec2-xl-960h-dementiabank | 1 | null | transformers | 30,285 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-xl-960h-dementiabank
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xl-960h-dementiabank
This model is a fine-tuned version of [facebook/wav2vec2-large-960h](https://huggingface.co/facebook/wav2vec2-large-960h) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3483.2146
- Wer: 0.9860
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 13934.5266 | 0.31 | 10 | 71265.4531 | 1.0 |
| 13443.6406 | 0.62 | 20 | 69977.6016 | 1.0 |
| 9336.9562 | 0.94 | 30 | 13763.1484 | 0.9843 |
| 2970.977 | 1.25 | 40 | 17587.7656 | 0.9860 |
| 1916.3354 | 1.56 | 50 | 4328.4521 | 1.0 |
| 1417.5775 | 1.88 | 60 | 4486.8071 | 0.9860 |
| 1841.7689 | 2.19 | 70 | 2988.0303 | 1.0 |
| 1355.0265 | 2.5 | 80 | 2972.6094 | 0.9860 |
| 1359.7979 | 2.81 | 90 | 3483.2146 | 0.9860 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
shimu/bert_base_uncased_finetuning | ea75e5227d3ab70c585d7b8f36824d3956ff1ce4 | 2021-09-08T02:57:36.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | shimu | null | shimu/bert_base_uncased_finetuning | 1 | null | transformers | 30,286 | Entry not found |
shivam/mbart-large-50-finetuned-en-mr | e390f334da0aa48527b18c4dc6123d2fc249c242 | 2021-04-18T10:19:52.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | shivam | null | shivam/mbart-large-50-finetuned-en-mr | 1 | null | transformers | 30,287 | ---
Language Pair Finetuned:
- en-mr
Metrics:
- sacrebleu
- WAT 2021: 16.11
---
# mbart-large-50-finetuned-en-mr
## Model Description
This is the mbart-large-50 model fine-tuned on an En-Mr (English-Marathi) corpus.
## Intended uses and limitations
Mostly useful for English-to-Marathi translation, but the underlying mbart-large-50 model also supports other language pairs.
### How to use
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast
model = MBartForConditionalGeneration.from_pretrained("shivam/mbart-large-50-finetuned-en-mr")
tokenizer = MBart50TokenizerFast.from_pretrained("shivam/mbart-large-50-finetuned-en-mr", src_lang="en_XX", tgt_lang="mr_IN")
english_input_sentence = "The Prime Minister said that cleanliness, or Swachhta, is one of the most important aspects of preventive healthcare."
model_inputs = tokenizer(english_input_sentence, return_tensors="pt")
generated_tokens = model.generate(
**model_inputs,
forced_bos_token_id=tokenizer.lang_code_to_id["mr_IN"]
)
marathi_output_sentence = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(marathi_output_sentence)
#स्वच्छता हा प्रतिबंधात्मक आरोग्य सेवेतील सर्वात महत्त्वाचा पैलू आहे, असे पंतप्रधान म्हणाले.
```
#### Limitations
The model was trained on Google Colab; because training there is time-consuming, it was trained only briefly and for a small number of epochs.
## Eval results
WAT 2021: 16.11 |
shivam/wav2vec2-xls-r-300m-marathi | d09e94a1b4a8d83d38de876e857d3b2e528893fa | 2022-02-07T15:40:53.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | shivam | null | shivam/wav2vec2-xls-r-300m-marathi | 1 | null | transformers | 30,288 | Entry not found |
shivangi/distilgpt2 | b4f81c40ef99f1cec62044be8ca7fc98ad560211 | 2021-05-23T12:52:44.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | shivangi | null | shivangi/distilgpt2 | 1 | null | transformers | 30,289 | Entry not found |
shivkumarganesh/distilbert-base-uncased-finetuned-squad | 7e1a7d2641c0184c5221f8be1691840ec8413c3f | 2021-11-05T07:25:27.000Z | [
"pytorch",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | shivkumarganesh | null | shivkumarganesh/distilbert-base-uncased-finetuned-squad | 1 | null | transformers | 30,290 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2414
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 20
- eval_batch_size: 20
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3036 | 1.0 | 4427 | 1.2414 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
shiyue/wav2vec2-common_voice-tr-demo | 93e63e22dd7eb500978fe1f4dc71a1f99b9e0175 | 2021-10-05T01:04:19.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:common_voice",
"transformers",
"common_voice",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | shiyue | null | shiyue/wav2vec2-common_voice-tr-demo | 1 | null | transformers | 30,291 | ---
language:
- tr
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-common_voice-tr-demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-common_voice-tr-demo
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - TR dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
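For reference, a sketch of how the hyperparameters above map onto `transformers.TrainingArguments` (the `output_dir` is a placeholder; this is an illustration, not the original training script):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-common_voice-tr-demo",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,
    warmup_steps=500,
    num_train_epochs=15.0,
    fp16=True,  # Native AMP mixed-precision training
)
```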
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.7.1+cu110
- Datasets 1.12.1
- Tokenizers 0.10.3
|
shonuff/DialoGPT-medium-konosuba | eeca8d035c55a40d2b5871e83c53b63ec7d451a7 | 2021-08-28T00:56:32.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | shonuff | null | shonuff/DialoGPT-medium-konosuba | 1 | null | transformers | 30,292 | ---
tags:
- conversational
---
# Konosuba DialoGPT Model |
shoubhik/Wav2Vec2_XLSR_Bengali_10500_it | fb6715c14822dce81eebd872fe15e349513d95b8 | 2022-01-27T12:19:07.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | shoubhik | null | shoubhik/Wav2Vec2_XLSR_Bengali_10500_it | 1 | null | transformers | 30,293 | Entry not found |
shoubhik/wav2vec2-xls-r-300m-hindi | ed0ae52c905c272821a5b8b33de7d9d49b9af3fa | 2022-02-04T17:49:43.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | shoubhik | null | shoubhik/wav2vec2-xls-r-300m-hindi | 1 | null | transformers | 30,294 | Entry not found |
shpotes/xls-r-et | 63823ebec8cd7d1f3b94dfb96944892c6e002e9f | 2022-03-24T11:54:15.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"et",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | shpotes | null | shpotes/xls-r-et | 1 | null | transformers | 30,295 | ---
language:
- et
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
- robust-speech-event
- et
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: ''
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: et
metrics:
- name: Test WER
type: wer
value: 0.34753420299077314
- name: Test CER
type: cer
value: 0.07542956089330906
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: et
metrics:
- name: Test WER
type: wer
value: 47.17
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: et
metrics:
- name: Test WER
type: wer
value: 54.72
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - ET dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4835
- Wer: 0.3475
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 72
- eval_batch_size: 72
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 144
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3825 | 12.5 | 500 | 0.4022 | 0.5059 |
| 0.1592 | 25.0 | 1000 | 0.4585 | 0.4456 |
| 0.1215 | 37.5 | 1500 | 0.4550 | 0.4164 |
| 0.0972 | 50.0 | 2000 | 0.4725 | 0.4088 |
| 0.0731 | 62.5 | 2500 | 0.4568 | 0.3824 |
| 0.0527 | 75.0 | 3000 | 0.4712 | 0.3653 |
| 0.0428 | 87.5 | 3500 | 0.4813 | 0.3520 |
| 0.0383 | 100.0 | 4000 | 0.4835 | 0.3475 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
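For context, the WER/CER figures above can be recomputed with the `datasets` metrics once predictions are collected (an added sketch; the strings below are placeholders):
```python
from datasets import load_metric

wer_metric = load_metric("wer")
cer_metric = load_metric("cer")

predictions = ["tere maailm"]  # placeholder model outputs
references = ["tere maailm"]   # placeholder ground-truth transcripts
print("WER:", wer_metric.compute(predictions=predictions, references=references))
print("CER:", cer_metric.compute(predictions=predictions, references=references))
```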
|
shpotes/xls-r-eus | 63d381ae33eeaf69cf84c52d13f58e48650fae38 | 2022-03-24T11:54:17.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"eu",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"et",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | shpotes | null | shpotes/xls-r-eus | 1 | null | transformers | 30,296 | ---
language:
- eu
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- robust-speech-event
- et
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: xls-r-eus
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: eu
metrics:
- name: Test WER
type: wer
value: 0.17871523648578164
- name: Test CER
type: cer
value: 0.032624506085144
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - EU dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2278
- Wer: 0.1787
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 72
- eval_batch_size: 72
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 144
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.2548 | 4.24 | 500 | 0.2470 | 0.3663 |
| 0.1435 | 8.47 | 1000 | 0.2000 | 0.2791 |
| 0.1158 | 12.71 | 1500 | 0.2030 | 0.2652 |
| 0.1094 | 16.95 | 2000 | 0.2096 | 0.2605 |
| 0.1004 | 21.19 | 2500 | 0.2150 | 0.2477 |
| 0.0945 | 25.42 | 3000 | 0.2072 | 0.2369 |
| 0.0844 | 29.66 | 3500 | 0.1981 | 0.2328 |
| 0.0877 | 33.89 | 4000 | 0.2041 | 0.2425 |
| 0.0741 | 38.14 | 4500 | 0.2353 | 0.2421 |
| 0.0676 | 42.37 | 5000 | 0.2092 | 0.2213 |
| 0.0623 | 46.61 | 5500 | 0.2217 | 0.2250 |
| 0.0574 | 50.84 | 6000 | 0.2152 | 0.2179 |
| 0.0583 | 55.08 | 6500 | 0.2207 | 0.2186 |
| 0.0488 | 59.32 | 7000 | 0.2225 | 0.2159 |
| 0.0456 | 63.56 | 7500 | 0.2293 | 0.2031 |
| 0.041 | 67.79 | 8000 | 0.2277 | 0.2013 |
| 0.0379 | 72.03 | 8500 | 0.2287 | 0.1991 |
| 0.0381 | 76.27 | 9000 | 0.2233 | 0.1954 |
| 0.0308 | 80.51 | 9500 | 0.2195 | 0.1835 |
| 0.0291 | 84.74 | 10000 | 0.2266 | 0.1825 |
| 0.0266 | 88.98 | 10500 | 0.2285 | 0.1801 |
| 0.0266 | 93.22 | 11000 | 0.2292 | 0.1801 |
| 0.0262 | 97.46 | 11500 | 0.2278 | 0.1788 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
|
shreyasgite/wav2vec2-large-xls-r-300m-dementianet | 9e9abdb32361d84e99d3af97db28f23423471a75 | 2021-12-19T09:11:16.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | null | false | shreyasgite | null | shreyasgite/wav2vec2-large-xls-r-300m-dementianet | 1 | null | transformers | 30,297 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wav2vec2-large-xls-r-300m-dementianet
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-dementianet
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set; an inference sketch follows the list:
- Loss: 1.3430
- Accuracy: 0.4062
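The accuracy metric and `wav2vec2` tag suggest this checkpoint carries an audio classification head; a minimal inference sketch under that assumption (the audio file is hypothetical, and the label names are read from the checkpoint's own config):

```python
import torch
import torchaudio
from transformers import AutoFeatureExtractor, Wav2Vec2ForSequenceClassification

model_id = "shreyasgite/wav2vec2-large-xls-r-300m-dementianet"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = Wav2Vec2ForSequenceClassification.from_pretrained(model_id)

# Hypothetical clip; resample to 16 kHz if the source rate differs.
waveform, sample_rate = torchaudio.load("speech_sample.wav")
waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = feature_extractor(waveform.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# The class with the highest logit wins; id2label maps indices to names.
predicted_class = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_class])
```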
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 22
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3845 | 3.33 | 40 | 1.3556 | 0.3125 |
| 1.3659 | 6.67 | 80 | 1.3602 | 0.3125 |
| 1.3619 | 10.0 | 120 | 1.3569 | 0.3125 |
| 1.3575 | 13.33 | 160 | 1.3509 | 0.3125 |
| 1.3356 | 16.67 | 200 | 1.3599 | 0.3125 |
| 1.3166 | 20.0 | 240 | 1.3430 | 0.4062 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
shreyasgite/wav2vec2-large-xls-r-300m-dm32 | 64602323f168caab94d4a7e81f9567441f13210b | 2022-02-04T14:53:18.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | null | false | shreyasgite | null | shreyasgite/wav2vec2-large-xls-r-300m-dm32 | 1 | null | transformers | 30,298 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wav2vec2-large-xls-r-300m-dm32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-dm32
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unspecified dataset.
It achieves the following results on the evaluation set; a sketch of the metric computation follows the list:
- Loss: 0.5688
- Accuracy: 0.7917
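A sketch of the kind of `compute_metrics` callback that could produce accuracy figures like these, assuming the standard `Trainer` evaluation pattern; the actual training script is not shown here:

```python
import numpy as np

def compute_metrics(eval_pred):
    # Trainer hands over a (logits, labels) pair after each evaluation pass.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    # Plain fraction of correct predictions, matching the Accuracy column.
    return {"accuracy": float((predictions == labels).mean())}
```

The callback would be passed as `Trainer(..., compute_metrics=compute_metrics)`.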
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 22
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 2.41 | 34 | 0.6769 | 0.6458 |
| No log | 4.83 | 68 | 0.6864 | 0.5208 |
| No log | 7.28 | 102 | 0.6596 | 0.6042 |
| 0.7106 | 9.69 | 136 | 0.6208 | 0.6875 |
| 0.7106 | 12.14 | 170 | 0.6152 | 0.6875 |
| 0.7106 | 14.55 | 204 | 0.6167 | 0.6875 |
| 0.6464 | 16.97 | 238 | 0.5782 | 0.7708 |
| 0.6464 | 19.41 | 272 | 0.6011 | 0.7292 |
| 0.6464 | 21.83 | 306 | 0.5688 | 0.7917 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
shreyasgite/wav2vec2-large-xls-r-300m-sanitycheck | b89e51d2251824509f292d4463f5b572eeef3efe | 2022-01-06T05:37:25.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | null | false | shreyasgite | null | shreyasgite/wav2vec2-large-xls-r-300m-sanitycheck | 1 | null | transformers | 30,299 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wav2vec2-large-xls-r-300m-sanitycheck
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-sanitycheck
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0092
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.14 | 8 | 0.8034 | 0.4737 |
| No log | 2.29 | 16 | 0.6803 | 0.5263 |
| No log | 3.43 | 24 | 0.4867 | 1.0 |
| 0.5907 | 4.57 | 32 | 0.1781 | 0.9474 |
| 0.5907 | 5.71 | 40 | 0.2168 | 0.9474 |
| 0.5907 | 6.86 | 48 | 0.2403 | 0.9474 |
| 0.5907 | 8.0 | 56 | 0.0143 | 1.0 |
| 0.0932 | 9.14 | 64 | 0.0124 | 1.0 |
| 0.0932 | 10.29 | 72 | 0.0089 | 1.0 |
| 0.0932 | 11.43 | 80 | 0.0092 | 1.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|