modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
anas-awadalla/splinter-large-few-shot-k-512-finetuned-squad-seed-0 | af181382e7602f86ce00927f6d98c55af7ee0bbf | 2022-05-14T22:18:10.000Z | [
"pytorch",
"splinter",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/splinter-large-few-shot-k-512-finetuned-squad-seed-0 | 1 | null | transformers | 31,900 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: splinter-large-few-shot-k-512-finetuned-squad-seed-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# splinter-large-few-shot-k-512-finetuned-squad-seed-0
This model is a fine-tuned version of [tau/splinter-large-qass](https://huggingface.co/tau/splinter-large-qass) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
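As an editorial illustration (not part of the original card), the sketch below shows how these hyperparameters might map onto `transformers.TrainingArguments`; the output directory name is assumed, and treating `train_batch_size: 12` as a per-device batch size is an assumption.
```python
# Hypothetical reconstruction of the training configuration listed above;
# only the hyperparameters shown in the card are set, everything else is
# left at the transformers defaults.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="splinter-large-few-shot-k-512-finetuned-squad-seed-0",  # assumed name
    learning_rate=3e-5,
    per_device_train_batch_size=12,  # assumes single-device training
    per_device_eval_batch_size=8,
    seed=0,
    adam_beta1=0.9,    # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```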
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
prashanth/mbart-large-cc25-ind_finetun-en-to-hi | d5832c9cd7ffffe53208a630cb1192ea23d996d0 | 2022-05-14T22:51:49.000Z | [
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"dataset:hindi_english_machine_translation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | prashanth | null | prashanth/mbart-large-cc25-ind_finetun-en-to-hi | 1 | null | transformers | 31,901 | ---
tags:
- generated_from_trainer
datasets:
- hindi_english_machine_translation
metrics:
- bleu
model-index:
- name: mbart-large-cc25-ind_finetun-en-to-hi
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: hindi_english_machine_translation
type: hindi_english_machine_translation
args: hi-en
metrics:
- name: Bleu
type: bleu
value: 7.8242
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-large-cc25-ind_finetun-en-to-hi
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on the hindi_english_machine_translation dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8148
- Bleu: 7.8242
- Gen Len: 75.28
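As a usage illustration (an editorial addition, not from the original card), the following sketch runs English-to-Hindi translation with this checkpoint through the standard MBart API; the mBART-25 language codes `en_XX`/`hi_IN` and the example sentence are assumptions.
```python
# Hedged inference sketch for the fine-tuned checkpoint.
from transformers import MBartForConditionalGeneration, MBartTokenizer

model_id = "prashanth/mbart-large-cc25-ind_finetun-en-to-hi"
tokenizer = MBartTokenizer.from_pretrained(model_id, src_lang="en_XX", tgt_lang="hi_IN")
model = MBartForConditionalGeneration.from_pretrained(model_id)

inputs = tokenizer("How are you today?", return_tensors="pt")
generated = model.generate(
    **inputs,
    decoder_start_token_id=tokenizer.lang_code_to_id["hi_IN"],  # force Hindi output
    max_length=128,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```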
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 3.3247 | 1.0 | 620 | 1.8148 | 7.8242 | 75.28 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu102
- Datasets 1.18.0
- Tokenizers 0.12.1
|
anas-awadalla/roberta-large-few-shot-k-512-finetuned-squad-seed-2 | 148b7cb387cc19c27f76ccac006de688e368a554 | 2022-05-14T22:32:52.000Z | [
"pytorch",
"roberta",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/roberta-large-few-shot-k-512-finetuned-squad-seed-2 | 1 | null | transformers | 31,902 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-few-shot-k-512-finetuned-squad-seed-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-few-shot-k-512-finetuned-squad-seed-2
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.
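The card has no usage section; as a hedged sketch (not from the original card), a RoBERTa extractive QA checkpoint like this one should work with the generic question-answering pipeline. The example question and context are made up.
```python
# Hedged usage sketch: extractive QA via the generic pipeline.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="anas-awadalla/roberta-large-few-shot-k-512-finetuned-squad-seed-2",
)
result = qa(
    question="Who wrote Hamlet?",
    context="Hamlet is a tragedy written by William Shakespeare around 1600.",
)
print(result["answer"], result["score"])
```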
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
anas-awadalla/splinter-large-few-shot-k-512-finetuned-squad-seed-2 | 4670b2f69569769310458a6083072a9ea14f8dd3 | 2022-05-14T22:32:48.000Z | [
"pytorch",
"splinter",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/splinter-large-few-shot-k-512-finetuned-squad-seed-2 | 1 | null | transformers | 31,903 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: splinter-large-few-shot-k-512-finetuned-squad-seed-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# splinter-large-few-shot-k-512-finetuned-squad-seed-2
This model is a fine-tuned version of [tau/splinter-large-qass](https://huggingface.co/tau/splinter-large-qass) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
anas-awadalla/roberta-large-few-shot-k-512-finetuned-squad-seed-4 | 1bf395a709e1ea7d1c76d52cf3aa8515d84ac5cf | 2022-05-14T22:47:13.000Z | [
"pytorch",
"roberta",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/roberta-large-few-shot-k-512-finetuned-squad-seed-4 | 1 | null | transformers | 31,904 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-few-shot-k-512-finetuned-squad-seed-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-few-shot-k-512-finetuned-squad-seed-4
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
anas-awadalla/roberta-large-few-shot-k-1024-finetuned-squad-seed-0 | a6d4edd4a4ccd97f788678212cd32219dfe65f03 | 2022-05-14T23:09:42.000Z | [
"pytorch",
"roberta",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/roberta-large-few-shot-k-1024-finetuned-squad-seed-0 | 1 | null | transformers | 31,905 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-few-shot-k-1024-finetuned-squad-seed-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-few-shot-k-1024-finetuned-squad-seed-0
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
anas-awadalla/splinter-large-few-shot-k-1024-finetuned-squad-seed-0 | 1ccebc624ef5aa3ebdc8b775945ae0f3173650a4 | 2022-05-14T23:09:42.000Z | [
"pytorch",
"splinter",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/splinter-large-few-shot-k-1024-finetuned-squad-seed-0 | 1 | null | transformers | 31,906 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: splinter-large-few-shot-k-1024-finetuned-squad-seed-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# splinter-large-few-shot-k-1024-finetuned-squad-seed-0
This model is a fine-tuned version of [tau/splinter-large-qass](https://huggingface.co/tau/splinter-large-qass) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
anas-awadalla/roberta-large-few-shot-k-1024-finetuned-squad-seed-4 | 371560b2d4445a64d5e6fb8bfe36c50ed812a6fe | 2022-05-14T23:53:15.000Z | [
"pytorch",
"roberta",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/roberta-large-few-shot-k-1024-finetuned-squad-seed-4 | 1 | null | transformers | 31,907 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-few-shot-k-1024-finetuned-squad-seed-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-few-shot-k-1024-finetuned-squad-seed-4
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
anas-awadalla/splinter-large-few-shot-k-1024-finetuned-squad-seed-4 | fd57551e3c12db8cf5d6c4dedd0db863e0864449 | 2022-05-14T23:53:22.000Z | [
"pytorch",
"splinter",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/splinter-large-few-shot-k-1024-finetuned-squad-seed-4 | 1 | null | transformers | 31,908 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: splinter-large-few-shot-k-1024-finetuned-squad-seed-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# splinter-large-few-shot-k-1024-finetuned-squad-seed-4
This model is a fine-tuned version of [tau/splinter-large-qass](https://huggingface.co/tau/splinter-large-qass) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
anas-awadalla/splinter-large-few-shot-k-512-finetuned-squad-seed-4 | 4a8eb1812f06873830676e4e7c5f1079e0e2aea3 | 2022-05-15T00:58:56.000Z | [
"pytorch",
"splinter",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/splinter-large-few-shot-k-512-finetuned-squad-seed-4 | 1 | null | transformers | 31,909 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: splinter-large-few-shot-k-512-finetuned-squad-seed-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# splinter-large-few-shot-k-512-finetuned-squad-seed-4
This model is a fine-tuned version of [tau/splinter-large-qass](https://huggingface.co/tau/splinter-large-qass) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
dianeshan/dummy-model | 36b44284eab00efc0b94103724db63761f5ab255 | 2022-05-15T07:38:37.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | dianeshan | null | dianeshan/dummy-model | 1 | null | transformers | 31,910 | Entry not found |
nandezgarcia/roberta-base-bne-finetuned-recores | 758830aeb2c9684d5517b4a5a77a50b1a61e72f8 | 2022-05-15T10:24:41.000Z | [
"pytorch",
"tensorboard",
"roberta",
"multiple-choice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | multiple-choice | false | nandezgarcia | null | nandezgarcia/roberta-base-bne-finetuned-recores | 1 | null | transformers | 31,911 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-bne-finetuned-recores
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-recores
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1113
- Accuracy: 0.4601
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5294 | 1.0 | 1047 | 1.4094 | 0.4242 |
| 0.6886 | 2.0 | 2094 | 2.1629 | 0.4545 |
| 0.0779 | 3.0 | 3141 | 2.3083 | 0.4545 |
| 0.0103 | 4.0 | 4188 | 3.0327 | 0.4628 |
| 0.0019 | 5.0 | 5235 | 3.1113 | 0.4601 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.1+cu102
- Datasets 2.2.1
- Tokenizers 0.12.1
|
gary109/ai-light-dance_singing_ft_wav2vec2-large-lv60 | 38c72ce479c79b3854f818867bbc658cd31f739d | 2022-05-28T05:25:32.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:ai_light_dance",
"transformers",
"AI_Light_Dance.py",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | gary109 | null | gary109/ai-light-dance_singing_ft_wav2vec2-large-lv60 | 1 | 1 | transformers | 31,912 | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- AI_Light_Dance.py
- generated_from_trainer
datasets:
- ai_light_dance
model-index:
- name: ai-light-dance_singing_ft_wav2vec2-large-lv60
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_singing_ft_wav2vec2-large-lv60
This model is a fine-tuned version of [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) on the AI_LIGHT_DANCE.PY - ONSET-SINGING dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4542
- Wer: 0.2088
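A minimal transcription sketch (an editorial addition, not from the card), assuming the checkpoint follows the standard Wav2Vec2 CTC interface and expects 16 kHz mono audio; the silent placeholder waveform is an assumption to keep the snippet self-contained.
```python
# Hedged transcription sketch using the standard Wav2Vec2 CTC API.
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "gary109/ai-light-dance_singing_ft_wav2vec2-large-lv60"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Placeholder: one second of silence; replace with real 16 kHz mono audio.
audio = np.zeros(16_000, dtype=np.float32)

inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```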
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.7432 | 1.0 | 4422 | 0.8939 | 0.6323 |
| 0.5484 | 2.0 | 8844 | 0.6393 | 0.3557 |
| 0.3919 | 3.0 | 13266 | 0.5315 | 0.2833 |
| 0.421 | 4.0 | 17688 | 0.5234 | 0.2522 |
| 0.3957 | 5.0 | 22110 | 0.5125 | 0.2247 |
| 0.3228 | 6.0 | 26532 | 0.4542 | 0.2088 |
| 0.346 | 7.0 | 30954 | 0.4673 | 0.1997 |
| 0.1637 | 8.0 | 35376 | 0.4583 | 0.1910 |
| 0.1508 | 9.0 | 39798 | 0.4623 | 0.1837 |
| 0.1564 | 10.0 | 44220 | 0.4717 | 0.1835 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.2.dev0
- Tokenizers 0.12.1
|
CEBaB/bert-base-uncased.CEBaB.causalm.ambiance.2-class.exclusive.seed_42 | 7b5db9b349a6389ac3950ef7605744bc7b1975e3 | 2022-05-15T11:24:24.000Z | [
"pytorch",
"bert_causalm",
"transformers"
] | null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.causalm.ambiance.2-class.exclusive.seed_42 | 1 | null | transformers | 31,913 | Entry not found |
anas-awadalla/splinter-base-finetuned-squad | 6925320dee7391ea2546fe8b7bfe005dd01d497b | 2022-05-15T11:49:58.000Z | [
"pytorch",
"splinter",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/splinter-base-finetuned-squad | 1 | null | transformers | 31,914 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: splinter-base-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# splinter-base-finetuned-squad
This model is a fine-tuned version of [tau/splinter-base-qass](https://huggingface.co/tau/splinter-base-qass) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
TejasARathod/DialoGPT-medium-BatmanBot | 2231afd6f69492e2ea8f2fddfb6b22a6f9075a26 | 2022-05-15T12:14:13.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | TejasARathod | null | TejasARathod/DialoGPT-medium-BatmanBot | 1 | null | transformers | 31,915 | ---
tags:
- conversational
---
# Batman DialoGPT Model |
ntcuong777/electra-squad-test | 552e6a06947508f390ac440c47c3e0a2e1fd82d5 | 2022-05-15T11:26:27.000Z | [
"pytorch",
"electra",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | ntcuong777 | null | ntcuong777/electra-squad-test | 1 | null | transformers | 31,916 | Entry not found |
CEBaB/bert-base-uncased.CEBaB.causalm.food.2-class.exclusive.seed_42 | 8ad90ec7c3ea42342eed50675eef944a0dc8c5e9 | 2022-05-15T11:25:38.000Z | [
"pytorch",
"bert_causalm",
"transformers"
] | null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.causalm.food.2-class.exclusive.seed_42 | 1 | null | transformers | 31,917 | Entry not found |
CEBaB/bert-base-uncased.CEBaB.causalm.service.2-class.exclusive.seed_42 | 2657e655c26c1f8cc693c9edf0be4b0903903188 | 2022-05-15T11:26:48.000Z | [
"pytorch",
"bert_causalm",
"transformers"
] | null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.causalm.service.2-class.exclusive.seed_42 | 1 | null | transformers | 31,918 | Entry not found |
CEBaB/bert-base-uncased.CEBaB.causalm.noise.2-class.exclusive.seed_42 | 9e932a58ed96c4f54506538999ad14ac2e407e29 | 2022-05-15T11:27:58.000Z | [
"pytorch",
"bert_causalm",
"transformers"
] | null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.causalm.noise.2-class.exclusive.seed_42 | 1 | null | transformers | 31,919 | Entry not found |
CEBaB/bert-base-uncased.CEBaB.causalm.ambiance.2-class.exclusive.seed_43 | e6558c70b192e96a569bec803b437241261c4220 | 2022-05-15T11:29:08.000Z | [
"pytorch",
"bert_causalm",
"transformers"
] | null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.causalm.ambiance.2-class.exclusive.seed_43 | 1 | null | transformers | 31,920 | Entry not found |
CEBaB/bert-base-uncased.CEBaB.causalm.food.2-class.exclusive.seed_43 | 6fe2e8556aef8ce4a87371c3e6a76bc24889e20d | 2022-05-15T11:30:18.000Z | [
"pytorch",
"bert_causalm",
"transformers"
] | null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.causalm.food.2-class.exclusive.seed_43 | 1 | null | transformers | 31,921 | Entry not found |
CEBaB/bert-base-uncased.CEBaB.causalm.service.2-class.exclusive.seed_43 | 09d30a0091fd93c6650465f82d527a6ddd93b644 | 2022-05-15T11:31:29.000Z | [
"pytorch",
"bert_causalm",
"transformers"
] | null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.causalm.service.2-class.exclusive.seed_43 | 1 | null | transformers | 31,922 | Entry not found |
CEBaB/bert-base-uncased.CEBaB.causalm.noise.2-class.exclusive.seed_43 | 676d3546ce148731341c43081bd4a490bac8f615 | 2022-05-15T11:32:39.000Z | [
"pytorch",
"bert_causalm",
"transformers"
] | null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.causalm.noise.2-class.exclusive.seed_43 | 1 | null | transformers | 31,923 | Entry not found |
CEBaB/bert-base-uncased.CEBaB.causalm.ambiance.2-class.exclusive.seed_44 | 48a961d96b97a2100a742b32cd4feeb88cf02f98 | 2022-05-15T11:33:49.000Z | [
"pytorch",
"bert_causalm",
"transformers"
] | null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.causalm.ambiance.2-class.exclusive.seed_44 | 1 | null | transformers | 31,924 | Entry not found |
CEBaB/bert-base-uncased.CEBaB.causalm.food.2-class.exclusive.seed_44 | 395d85780319c1ed249d0d643e8b04bcfeb74961 | 2022-05-15T11:35:00.000Z | [
"pytorch",
"bert_causalm",
"transformers"
] | null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.causalm.food.2-class.exclusive.seed_44 | 1 | null | transformers | 31,925 | Entry not found |
CEBaB/bert-base-uncased.CEBaB.causalm.service.2-class.exclusive.seed_44 | 204e6f6c09ac7674fcff1cd3da0288de68049cdd | 2022-05-15T11:36:13.000Z | [
"pytorch",
"bert_causalm",
"transformers"
] | null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.causalm.service.2-class.exclusive.seed_44 | 1 | null | transformers | 31,926 | Entry not found |
CEBaB/bert-base-uncased.CEBaB.causalm.noise.2-class.exclusive.seed_44 | fd4b2ac8a1754a01424373c518bc45308e059342 | 2022-05-15T11:37:24.000Z | [
"pytorch",
"bert_causalm",
"transformers"
] | null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.causalm.noise.2-class.exclusive.seed_44 | 1 | null | transformers | 31,927 | Entry not found |
CEBaB/bert-base-uncased.CEBaB.causalm.ambiance.2-class.exclusive.seed_45 | db3e04a20ea74891203c53fe6ba0a4477a40c359 | 2022-05-15T11:38:35.000Z | [
"pytorch",
"bert_causalm",
"transformers"
] | null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.causalm.ambiance.2-class.exclusive.seed_45 | 1 | null | transformers | 31,928 | Entry not found |
CEBaB/bert-base-uncased.CEBaB.causalm.food.2-class.exclusive.seed_45 | 8a94ab2ff8ff1f24d3d7dc4646884a02298fb960 | 2022-05-15T11:39:46.000Z | [
"pytorch",
"bert_causalm",
"transformers"
] | null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.causalm.food.2-class.exclusive.seed_45 | 1 | null | transformers | 31,929 | Entry not found |
CEBaB/bert-base-uncased.CEBaB.causalm.service.2-class.exclusive.seed_45 | 0045c118b4c7220cbb74f7cc56ea19dcee4e7211 | 2022-05-15T11:40:57.000Z | [
"pytorch",
"bert_causalm",
"transformers"
] | null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.causalm.service.2-class.exclusive.seed_45 | 1 | null | transformers | 31,930 | Entry not found |
CEBaB/bert-base-uncased.CEBaB.causalm.noise.2-class.exclusive.seed_45 | efaf9205cd9940ab0956e58254958ee65ac4c9f7 | 2022-05-15T11:42:08.000Z | [
"pytorch",
"bert_causalm",
"transformers"
] | null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.causalm.noise.2-class.exclusive.seed_45 | 1 | null | transformers | 31,931 | Entry not found |
CEBaB/bert-base-uncased.CEBaB.causalm.ambiance.2-class.exclusive.seed_46 | d812156a2237ccbe4ca7cef5886f0c4117e109dc | 2022-05-15T11:43:18.000Z | [
"pytorch",
"bert_causalm",
"transformers"
] | null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.causalm.ambiance.2-class.exclusive.seed_46 | 1 | null | transformers | 31,932 | Entry not found |
CEBaB/bert-base-uncased.CEBaB.causalm.food.2-class.exclusive.seed_46 | d2fdfdba8d789ef7e7202e71014c2868aba81b07 | 2022-05-15T11:44:28.000Z | [
"pytorch",
"bert_causalm",
"transformers"
] | null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.causalm.food.2-class.exclusive.seed_46 | 1 | null | transformers | 31,933 | Entry not found |
CEBaB/bert-base-uncased.CEBaB.causalm.service.2-class.exclusive.seed_46 | b4a98632de3379dced64f7562d8ff87f99737ce9 | 2022-05-15T11:45:57.000Z | [
"pytorch",
"bert_causalm",
"transformers"
] | null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.causalm.service.2-class.exclusive.seed_46 | 1 | null | transformers | 31,934 | Entry not found |
CEBaB/bert-base-uncased.CEBaB.causalm.noise.2-class.exclusive.seed_46 | e4ef23642446a0f66d73b3410a730fca9dac8b7d | 2022-05-15T11:47:07.000Z | [
"pytorch",
"bert_causalm",
"transformers"
] | null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.causalm.noise.2-class.exclusive.seed_46 | 1 | null | transformers | 31,935 | Entry not found |
CEBaB/bert-base-uncased.CEBaB.causalm.factual.2-class.exclusive.seed_42 | 5e067786e9a23bcbf8e46746176db0f46192b6d3 | 2022-05-15T11:55:35.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.causalm.factual.2-class.exclusive.seed_42 | 1 | null | transformers | 31,936 | Entry not found |
CEBaB/bert-base-uncased.CEBaB.causalm.factual.2-class.exclusive.seed_43 | e02835d45dd6d1478fbcc80274d1749a8d2d3a4a | 2022-05-15T11:56:43.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.causalm.factual.2-class.exclusive.seed_43 | 1 | null | transformers | 31,937 | Entry not found |
CEBaB/bert-base-uncased.CEBaB.causalm.factual.2-class.exclusive.seed_44 | 9970533c147eb0e5ebae155f5b874f3415bddf0c | 2022-05-15T11:57:51.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.causalm.factual.2-class.exclusive.seed_44 | 1 | null | transformers | 31,938 | Entry not found |
CEBaB/bert-base-uncased.CEBaB.causalm.factual.2-class.exclusive.seed_45 | d1da796ff4aacf116ec2692f3ca51e225210db62 | 2022-05-15T11:59:00.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.causalm.factual.2-class.exclusive.seed_45 | 1 | null | transformers | 31,939 | Entry not found |
CEBaB/bert-base-uncased.CEBaB.causalm.factual.2-class.exclusive.seed_46 | 68c7eac5216771b9513372b760dfd2ba4efc83f7 | 2022-05-15T12:00:07.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.causalm.factual.2-class.exclusive.seed_46 | 1 | null | transformers | 31,940 | Entry not found |
loubnabnl/codeparrot-small-scale | 71e9dbdba2f42fc3b273b7f2416b56f6e647d234 | 2022-05-15T14:34:14.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | loubnabnl | null | loubnabnl/codeparrot-small-scale | 1 | null | transformers | 31,941 | Entry not found |
PSW/cnndm_0.1percent_maxsimdel_seed42 | b33841679fa43b9d391665bafbe4c2fa58ed9324 | 2022-05-15T14:08:18.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.1percent_maxsimdel_seed42 | 1 | null | transformers | 31,942 | Entry not found |
pietrolesci/bart-base-mnli | 375c993af1f0d0b87c7d51e7e902284d44687c6f | 2022-05-15T14:06:25.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | pietrolesci | null | pietrolesci/bart-base-mnli | 1 | null | transformers | 31,943 | Entry not found |
PSW/cnndm_0.1percent_randomsimdel_seed27 | e82bdc1684e26c5211dec780c9c4f78dcdefcee9 | 2022-05-15T16:19:50.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.1percent_randomsimdel_seed27 | 1 | null | transformers | 31,944 | Entry not found |
nttoanh/t5vi-finetuned-en-to-vi | e04a3b4e83460b26816ca4a4bef4157184fc3623 | 2022-05-15T22:20:38.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:mt_eng_vietnamese",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | nttoanh | null | nttoanh/t5vi-finetuned-en-to-vi | 1 | null | transformers | 31,945 | ---
tags:
- generated_from_trainer
datasets:
- mt_eng_vietnamese
metrics:
- bleu
model-index:
- name: t5vi-finetuned-en-to-vi
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: mt_eng_vietnamese
type: mt_eng_vietnamese
args: iwslt2015-en-vi
metrics:
- name: Bleu
type: bleu
value: 13.547
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5vi-finetuned-en-to-vi
This model is a fine-tuned version of [imthanhlv/t5vi](https://huggingface.co/imthanhlv/t5vi) on the mt_eng_vietnamese dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3827
- Bleu: 13.547
- Gen Len: 17.3719
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 20
- eval_batch_size: 20
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 1.8026 | 1.0 | 6666 | 1.5907 | 10.9756 | 17.3231 |
| 1.6217 | 2.0 | 13332 | 1.4635 | 12.375 | 17.3444 |
| 1.5087 | 3.0 | 19998 | 1.4131 | 13.1828 | 17.3924 |
| 1.4446 | 4.0 | 26664 | 1.3915 | 13.5217 | 17.3617 |
| 1.4076 | 5.0 | 33330 | 1.3827 | 13.547 | 17.3719 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
PSW/cnndm_0.1percent_minsimins_seed27 | 737f3c062b39c09cd665efd114fe9d27b3b7ba2b | 2022-05-15T19:39:29.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.1percent_minsimins_seed27 | 1 | null | transformers | 31,946 | Entry not found |
CEBaB/t5-base.CEBaB.sa.2-class.inclusive.seed_42 | 2ae50edfb6935d25f1bfad041e24371bf49c0ba2 | 2022-05-15T20:26:08.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | CEBaB | null | CEBaB/t5-base.CEBaB.sa.2-class.inclusive.seed_42 | 1 | null | transformers | 31,947 | Entry not found |
CEBaB/t5-base.CEBaB.sa.5-class.inclusive.seed_42 | d702ca952abb957b1c5a8b8064c9d40545c41d30 | 2022-05-15T20:44:32.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | CEBaB | null | CEBaB/t5-base.CEBaB.sa.5-class.inclusive.seed_42 | 1 | null | transformers | 31,948 | Entry not found |
CEBaB/t5-base.CEBaB.sa.2-class.inclusive.seed_66 | a6d24235aece4099964cd8c5acfdb9fabab4fa34 | 2022-05-15T20:53:54.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | CEBaB | null | CEBaB/t5-base.CEBaB.sa.2-class.inclusive.seed_66 | 1 | null | transformers | 31,949 | Entry not found |
CEBaB/t5-base.CEBaB.sa.3-class.inclusive.seed_66 | 2265dbf64a115c1cb8cba2d5a647dca889ef2aea | 2022-05-15T21:03:12.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | CEBaB | null | CEBaB/t5-base.CEBaB.sa.3-class.inclusive.seed_66 | 1 | null | transformers | 31,950 | Entry not found |
CEBaB/t5-base.CEBaB.sa.5-class.inclusive.seed_66 | 708834a8b24af82f1d130b7b7d3470fcd2cf19e5 | 2022-05-15T21:12:25.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | CEBaB | null | CEBaB/t5-base.CEBaB.sa.5-class.inclusive.seed_66 | 1 | null | transformers | 31,951 | Entry not found |
CEBaB/t5-base.CEBaB.sa.2-class.inclusive.seed_77 | d3eb22c5529ad50bab8103509ddcbc988439076c | 2022-05-15T21:21:36.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | CEBaB | null | CEBaB/t5-base.CEBaB.sa.2-class.inclusive.seed_77 | 1 | null | transformers | 31,952 | Entry not found |
CEBaB/t5-base.CEBaB.sa.3-class.inclusive.seed_77 | 43bf3234fa4d27675e30241f3dae03ee175d8ca1 | 2022-05-15T21:30:55.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | CEBaB | null | CEBaB/t5-base.CEBaB.sa.3-class.inclusive.seed_77 | 1 | null | transformers | 31,953 | Entry not found |
CEBaB/t5-base.CEBaB.sa.5-class.inclusive.seed_77 | 977299f296257ba6ad9179ad24f44d9561c7bf9e | 2022-05-15T21:40:07.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | CEBaB | null | CEBaB/t5-base.CEBaB.sa.5-class.inclusive.seed_77 | 1 | null | transformers | 31,954 | Entry not found |
CEBaB/t5-base.CEBaB.sa.2-class.inclusive.seed_88 | 9000c58df1ecf1bb7b129544d62b4f5513b9f7c8 | 2022-05-15T21:49:48.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | CEBaB | null | CEBaB/t5-base.CEBaB.sa.2-class.inclusive.seed_88 | 1 | null | transformers | 31,955 | Entry not found |
CEBaB/t5-base.CEBaB.sa.3-class.inclusive.seed_88 | 247ee3071f656e415f5dcfe553cbae572697c510 | 2022-05-15T21:59:01.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | CEBaB | null | CEBaB/t5-base.CEBaB.sa.3-class.inclusive.seed_88 | 1 | null | transformers | 31,956 | Entry not found |
CEBaB/t5-base.CEBaB.sa.2-class.inclusive.seed_99 | 823ec377ec5d396c68e2a44fb0f529682f8042fd | 2022-05-15T22:19:54.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | CEBaB | null | CEBaB/t5-base.CEBaB.sa.2-class.inclusive.seed_99 | 1 | null | transformers | 31,957 | Entry not found |
CEBaB/t5-base.CEBaB.sa.3-class.inclusive.seed_99 | 50cbbc1ee1058470531824ef68b8c67777befe3c | 2022-05-15T22:29:49.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | CEBaB | null | CEBaB/t5-base.CEBaB.sa.3-class.inclusive.seed_99 | 1 | null | transformers | 31,958 | Entry not found |
CEBaB/t5-base.CEBaB.sa.5-class.inclusive.seed_99 | f1f48018688f2eedff3545300782d3a929a8167b | 2022-05-15T22:39:05.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | CEBaB | null | CEBaB/t5-base.CEBaB.sa.5-class.inclusive.seed_99 | 1 | null | transformers | 31,959 | Entry not found |
CEBaB/t5-base.CEBaB.sa.2-class.exclusive.seed_42 | a29e2783f3d3837a9d2c05b9f91b89f8ea12efbb | 2022-05-15T22:48:20.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | CEBaB | null | CEBaB/t5-base.CEBaB.sa.2-class.exclusive.seed_42 | 1 | null | transformers | 31,960 | Entry not found |
CEBaB/t5-base.CEBaB.sa.3-class.exclusive.seed_42 | 52ef6210954c64fb6efff19b2814130dcbe907db | 2022-05-15T22:57:37.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | CEBaB | null | CEBaB/t5-base.CEBaB.sa.3-class.exclusive.seed_42 | 1 | null | transformers | 31,961 | Entry not found |
lilitket/20220516-030558 | b95374fd1975321c01f357c36de6b9527f9fa993 | 2022-05-16T00:59:35.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/20220516-030558 | 1 | null | transformers | 31,962 | Entry not found |
CEBaB/t5-base.CEBaB.sa.2-class.exclusive.seed_66 | 44b3fa951d56ab85988f3fba3e8a86002861a6d2 | 2022-05-15T23:16:11.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | CEBaB | null | CEBaB/t5-base.CEBaB.sa.2-class.exclusive.seed_66 | 1 | null | transformers | 31,963 | Entry not found |
CEBaB/t5-base.CEBaB.sa.3-class.exclusive.seed_66 | 9ba847fcdef7ab5a9d587b4ac17f4b1dbecc799c | 2022-05-15T23:25:26.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | CEBaB | null | CEBaB/t5-base.CEBaB.sa.3-class.exclusive.seed_66 | 1 | null | transformers | 31,964 | Entry not found |
CEBaB/t5-base.CEBaB.sa.5-class.exclusive.seed_66 | 02b0947ebffefbbeea58a3a5186f576b8da0c0e7 | 2022-05-15T23:34:50.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | CEBaB | null | CEBaB/t5-base.CEBaB.sa.5-class.exclusive.seed_66 | 1 | null | transformers | 31,965 | Entry not found |
CEBaB/t5-base.CEBaB.sa.2-class.exclusive.seed_77 | aacb2c02559881dc9f712d202c294a79d488f0a4 | 2022-05-15T23:44:08.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | CEBaB | null | CEBaB/t5-base.CEBaB.sa.2-class.exclusive.seed_77 | 1 | null | transformers | 31,966 | Entry not found |
CEBaB/t5-base.CEBaB.sa.3-class.exclusive.seed_77 | f06fb35d7335e22eb3bc4ff37711c189ed4a2139 | 2022-05-15T23:53:19.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | CEBaB | null | CEBaB/t5-base.CEBaB.sa.3-class.exclusive.seed_77 | 1 | null | transformers | 31,967 | Entry not found |
PSW/cnndm_0.1percent_maxsimins_seed42 | 2c688701611d69d7c5cbfeef57d6ca6dcdadfcc3 | 2022-05-16T00:05:54.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.1percent_maxsimins_seed42 | 1 | null | transformers | 31,968 | Entry not found |
CEBaB/t5-base.CEBaB.sa.5-class.exclusive.seed_77 | c096ecbf3af7b5ad7d3a21e8cac0c98a94c3abec | 2022-05-16T00:02:45.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | CEBaB | null | CEBaB/t5-base.CEBaB.sa.5-class.exclusive.seed_77 | 1 | null | transformers | 31,969 | Entry not found |
CEBaB/t5-base.CEBaB.sa.5-class.exclusive.seed_99 | 4a0ac98a32a3b58ad3d32f3c5c511acb7a5b93e0 | 2022-05-16T00:58:58.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | CEBaB | null | CEBaB/t5-base.CEBaB.sa.5-class.exclusive.seed_99 | 1 | null | transformers | 31,970 | Entry not found |
CEBaB/t5-base.CEBaB.absa.inclusive.seed_42 | b200d8a635db052058c0fd0d6b64bb0022257b64 | 2022-05-16T01:24:00.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | CEBaB | null | CEBaB/t5-base.CEBaB.absa.inclusive.seed_42 | 1 | null | transformers | 31,971 | Entry not found |
CEBaB/t5-base.CEBaB.absa.inclusive.seed_66 | 2f16e943dcf18b200db6bdbcfaf8a810b4a34fae | 2022-05-16T01:33:24.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | CEBaB | null | CEBaB/t5-base.CEBaB.absa.inclusive.seed_66 | 1 | null | transformers | 31,972 | Entry not found |
CEBaB/t5-base.CEBaB.absa.inclusive.seed_88 | cc348e8da5dd95b5ca037338f70b71effa45860f | 2022-05-16T01:54:05.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | CEBaB | null | CEBaB/t5-base.CEBaB.absa.inclusive.seed_88 | 1 | null | transformers | 31,973 | Entry not found |
CEBaB/t5-base.CEBaB.absa.inclusive.seed_99 | 38cf3a5511c8cbffb16c9367da07cc172a95d03d | 2022-05-16T02:03:36.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | CEBaB | null | CEBaB/t5-base.CEBaB.absa.inclusive.seed_99 | 1 | null | transformers | 31,974 | Entry not found |
CEBaB/t5-base.CEBaB.absa.exclusive.seed_42 | fc43b3372799000fa9983f76f3f4add64d425787 | 2022-05-16T02:12:52.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | CEBaB | null | CEBaB/t5-base.CEBaB.absa.exclusive.seed_42 | 1 | null | transformers | 31,975 | Entry not found |
CEBaB/t5-base.CEBaB.absa.exclusive.seed_66 | 717eb2c9e797c31195eec11b5e0584401f663a92 | 2022-05-16T02:22:11.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | CEBaB | null | CEBaB/t5-base.CEBaB.absa.exclusive.seed_66 | 1 | null | transformers | 31,976 | Entry not found |
CEBaB/t5-base.CEBaB.absa.exclusive.seed_77 | ff00b0f997c89f7f91fea19770176ec00d7895a5 | 2022-05-16T02:31:25.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | CEBaB | null | CEBaB/t5-base.CEBaB.absa.exclusive.seed_77 | 1 | null | transformers | 31,977 | Entry not found |
CEBaB/t5-base.CEBaB.absa.exclusive.seed_88 | b31d930ba10fb0ba75a3ba93314f9f50e6ed12b0 | 2022-05-16T02:40:43.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | CEBaB | null | CEBaB/t5-base.CEBaB.absa.exclusive.seed_88 | 1 | null | transformers | 31,978 | Entry not found |
CEBaB/t5-base.CEBaB.absa.exclusive.seed_99 | fd1402796c27b769a72c417f13a022c985adfb6c | 2022-05-16T02:50:01.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | CEBaB | null | CEBaB/t5-base.CEBaB.absa.exclusive.seed_99 | 1 | null | transformers | 31,979 | Entry not found |
LDD/MLM | 0e039513bc20fa07beb2b030292e57c2a9707e0e | 2022-05-16T05:07:44.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | LDD | null | LDD/MLM | 1 | null | transformers | 31,980 | Entry not found |
PSW/cnndm_0.1percent_minmaxswap_seed27 | fc7660d0ae7dd01644b67fcf4f210dc4e2edd3aa | 2022-05-16T05:35:39.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.1percent_minmaxswap_seed27 | 1 | null | transformers | 31,981 | Entry not found |
Yotta/XpCoDir2 | a8b846d72fa3c8cd4dac03cfeaff7069c0303506 | 2022-05-16T08:42:56.000Z | [
"pytorch",
"bert",
"feature-extraction",
"dataset:XpCo",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | feature-extraction | false | Yotta | null | Yotta/XpCoDir2 | 1 | null | transformers | 31,982 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- XpCo
model-index:
- name: XpCoDir2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XpCoDir2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the XpCoDataset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0
- Datasets 2.0.0
- Tokenizers 0.10.3
|
mriggs/wikisource_lemmatized_epoch2 | 4208cebf465610afd50fc4ce01197d2f3f196fa3 | 2022-05-16T08:15:38.000Z | [
"pytorch",
"flaubert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | mriggs | null | mriggs/wikisource_lemmatized_epoch2 | 1 | null | transformers | 31,983 | Entry not found |
subhasisj/vi-kd-XLM-minilmv2-32 | 92be53966ef09ef7637fea83ae29aad0dd90127f | 2022-05-16T13:12:35.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | subhasisj | null | subhasisj/vi-kd-XLM-minilmv2-32 | 1 | null | transformers | 31,984 | Entry not found |
SreyanG-NVIDIA/distilgpt2-finetuned-wikitext2 | 94932c5d8de6fdd80b9887b2430ccfb943121f39 | 2022-05-16T11:06:40.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | SreyanG-NVIDIA | null | SreyanG-NVIDIA/distilgpt2-finetuned-wikitext2 | 1 | null | transformers | 31,985 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6408
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7592 | 1.0 | 2334 | 3.6646 |
| 3.6519 | 2.0 | 4668 | 3.6454 |
| 3.601 | 3.0 | 7002 | 3.6408 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
anes-saidi/aragpt2-base-finetuned-wikitext2 | b1f3ee43bdb3f046c539acc6837113e748fd7ed7 | 2022-05-16T11:14:18.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
] | text-generation | false | anes-saidi | null | anes-saidi/aragpt2-base-finetuned-wikitext2 | 1 | null | transformers | 31,986 | ---
tags:
- generated_from_trainer
model-index:
- name: aragpt2-base-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aragpt2-base-finetuned-wikitext2
This model is a fine-tuned version of [aubmindlab/aragpt2-base](https://huggingface.co/aubmindlab/aragpt2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.0307
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 387 | 5.1841 |
| 5.9664 | 2.0 | 774 | 5.0627 |
| 5.4166 | 3.0 | 1161 | 5.0307 |
### Framework versions
- Transformers 4.11.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.10.3
|
SreyanG-NVIDIA/gpt2-wikitext2 | e3cbd490285222074c92a5d30c32e510eb54d1a4 | 2022-05-16T11:44:23.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | SreyanG-NVIDIA | null | SreyanG-NVIDIA/gpt2-wikitext2 | 1 | null | transformers | 31,987 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.1085
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.5573 | 1.0 | 2249 | 6.4633 |
| 6.1893 | 2.0 | 4498 | 6.1993 |
| 6.0153 | 3.0 | 6747 | 6.1085 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
lilitket/20220516-152835 | 3a1234d33fe76aef8f43f18b89ab0b0338556eb2 | 2022-05-16T15:10:40.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/20220516-152835 | 1 | null | transformers | 31,988 | Entry not found |
Varick/dialo-jarvis | 02f5e35f3580cd8f24f72f7ecd91ba7ff0240b2d | 2022-05-16T13:58:23.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Varick | null | Varick/dialo-jarvis | 1 | null | transformers | 31,989 | ---
tags:
- conversational
---
# JARVIS DialoGPT Model |
NoYo25/BiodivBERT | fc8b57104a6fc66ac8081e5796ac723e8b4c33cb | 2022-05-16T13:47:38.000Z | [
"pytorch",
"bert",
"fill-mask",
"en",
"transformers",
"bert-base-cased",
"biodiversity",
"license:cc-by-nc-4.0",
"autotrain_compatible"
] | fill-mask | false | NoYo25 | null | NoYo25/BiodivBERT | 1 | null | transformers | 31,990 | ---
language:
- en
thumbnail: "https://www.fusion.uni-jena.de/fusionmedia/fusionpictures/fusion-service/fusion-transp.png?height=383&width=680"
tags:
- bert-base-cased
- biodiversity
license: cc-by-nc-4.0
---
# BiodivBERT
## Model description
* BiodivBERT is a domain-specific, BERT-based cased model for the biodiversity literature.
* It uses the tokenizer from the BERT base cased model.
* BiodivBERT is pre-trained on abstracts and full text from biodiversity literature.
* BiodivBERT is fine-tuned on two downstream tasks, Named Entity Recognition and Relation Extraction, in the biodiversity domain.
* Please visit our [GitHub Repo](https://github.com/fusion-jena/BiodivBERT) for more details.
## How to use
* You can use BiodivBERT via the Hugging Face `transformers` library as follows:
1. Masked Language Model
```python
>>> from transformers import AutoTokenizer, AutoModelForMaskedLM
>>> tokenizer = AutoTokenizer.from_pretrained("NoYo25/BiodivBERT")
>>> model = AutoModelForMaskedLM.from_pretrained("NoYo25/BiodivBERT")
```
2. Token Classification - Named Entity Recognition
```python
>>> from transformers import AutoTokenizer, AutoModelForTokenClassification
>>> tokenizer = AutoTokenizer.from_pretrained("NoYo25/BiodivBERT")
>>> model = AutoModelForTokenClassification.from_pretrained("NoYo25/BiodivBERT")
```
3. Sequence Classification - Relation Extraction
```python
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification
>>> tokenizer = AutoTokenizer.from_pretrained("NoYo25/BiodivBERT")
>>> model = AutoModelForSequenceClassification.from_pretrained("NoYo25/BiodivBERT")
```
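For a quick end-to-end check, the snippet below (an editorial addition, not from the original card) runs the generic fill-mask pipeline over the same checkpoint; the example sentence is made up.
```python
# Hedged usage sketch: the generic fill-mask pipeline with BiodivBERT.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="NoYo25/BiodivBERT")
# [MASK] is the mask token for BERT-style cased models.
for prediction in unmasker("Habitat loss is a major driver of [MASK] decline."):
    print(prediction["token_str"], prediction["score"])
```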
## Training data
* BiodivBERT is pre-trained on abstracts and full text from biodiversity domain-related publications.
* We used both Elsevier and Springer APIs to crawl such data.
* We covered publications from 1990 to 2020.
## Evaluation results
BiodivBERT outperformed ``BERT_base_cased``, ``biobert_v1.1``, and a baseline ``BiLSTM`` approach on the downstream tasks.
## License
license: cc-by-nc-4.0
|
Kontawat/test-model | 755357a39f33bc415e5cc3b1c9c8143965555d26 | 2022-05-16T13:46:28.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Kontawat | null | Kontawat/test-model | 1 | null | transformers | 31,991 | Entry not found |
bartelds/wav2vec2-dutch-large-ft-cgn-3hrs | 32d86c3e581eb33f0989914db2c9a08395c2c7d0 | 2022-05-16T14:58:27.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"nl",
"transformers",
"speech"
] | automatic-speech-recognition | false | bartelds | null | bartelds/wav2vec2-dutch-large-ft-cgn-3hrs | 1 | null | transformers | 31,992 | ---
language: nl
tags:
- speech
---
# Wav2Vec2-Dutch-Large-ft-CGN-3hrs
A Dutch Wav2Vec2 model. This model was created by fine-tuning the [`GroNLP/wav2vec2-dutch-large`](https://huggingface.co/GroNLP/wav2vec2-dutch-large) model on 3 hours of Dutch speech from [Het Corpus Gesproken Nederlands](https://taalmaterialen.ivdnt.org/download/tstc-corpus-gesproken-nederlands/). |
Robinsd/HarryBot | 468b3ed2cf6f3affe233fbd88ce6c1d454eabd3d | 2022-05-16T14:44:57.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Robinsd | null | Robinsd/HarryBot | 1 | null | transformers | 31,993 | ---
tags:
- conversational
---
# HarryPotter |
bartelds/wav2vec2-large-ft-cgn-3hrs | 0cb7f2855081416aced753357cca60025b4b906b | 2022-05-16T14:59:59.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"nl",
"transformers",
"speech"
] | automatic-speech-recognition | false | bartelds | null | bartelds/wav2vec2-large-ft-cgn-3hrs | 1 | null | transformers | 31,994 | ---
language: nl
tags:
- speech
---
# Wav2Vec2-Large-ft-CGN-3hrs
An English Wav2Vec2 model fine-tuned on Dutch. This model was created by fine-tuning the [`facebook/wav2vec2-large`](https://huggingface.co/facebook/wav2vec2-large) model on 3 hours of Dutch speech from [Het Corpus Gesproken Nederlands](https://taalmaterialen.ivdnt.org/download/tstc-corpus-gesproken-nederlands/). |
huawei-noah/AutoTinyBERT-S1 | 4abedd192b5fcf1367436089c344c6b3f7335436 | 2022-05-16T14:47:57.000Z | [
"pytorch",
"transformers",
"license:other"
] | null | false | huawei-noah | null | huawei-noah/AutoTinyBERT-S1 | 1 | null | transformers | 31,995 | ---
license: other
---
Pre-trained language models (PLMs) have achieved great success in natural language processing. Most PLMs follow BERT's default architecture hyper-parameter settings (e.g., the hidden dimension is a quarter of the intermediate dimension in feed-forward sub-networks). In this paper, we adopt one-shot Neural Architecture Search (NAS) to automatically search for architecture hyper-parameters for efficient pre-trained language models (at least 6x faster than BERT-base).
AutoTinyBERT provides a model zoo that can meet different latency requirements. |
huawei-noah/AutoTinyBERT-S3 | 30ed323b98f18f53457098f171886c5a405a19c6 | 2022-05-16T14:56:13.000Z | [
"pytorch",
"transformers",
"license:other"
] | null | false | huawei-noah | null | huawei-noah/AutoTinyBERT-S3 | 1 | null | transformers | 31,996 | ---
license: other
---
Pre-trained language models (PLMs) have achieved great success in natural language processing. Most PLMs follow BERT's default architecture hyper-parameter settings (e.g., the hidden dimension is a quarter of the intermediate dimension in feed-forward sub-networks). In this paper, we adopt one-shot Neural Architecture Search (NAS) to automatically search for architecture hyper-parameters for efficient pre-trained language models (at least 6x faster than BERT-base).
AutoTinyBERT provides a model zoo that can meet different latency requirements. |
PSW/cnndm_0.1percent_randomswap_seed27 | e07a9d4fe05041f00acc2ef5418ce7c8600dc219 | 2022-05-16T15:38:12.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.1percent_randomswap_seed27 | 1 | null | transformers | 31,997 | Entry not found |
eglesaks/xlm-roberta-base-finetuned-est | 403a0d75147d6b679df89aa7c72d6004e7f10434 | 2022-05-16T18:49:53.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | eglesaks | null | eglesaks/xlm-roberta-base-finetuned-est | 1 | null | transformers | 31,998 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base-finetuned-est
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-est
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6781
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 52 | 4.2576 |
| No log | 2.0 | 104 | 3.8075 |
| No log | 3.0 | 156 | 3.6781 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
elisabethvonoswald/wav2vec2-large-xls-r-300m | 00c32c8adffb91579d771ebde921504fb0e442fe | 2022-05-25T14:09:55.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | elisabethvonoswald | null | elisabethvonoswald/wav2vec2-large-xls-r-300m | 1 | null | transformers | 31,999 | Entry not found |