modelId (string, 4–112) | sha (string, 40) | lastModified (string, 24) | tags (sequence) | pipeline_tag (29 classes) | private (bool, 1 class) | author (string, 2–38, nullable) | config (null) | id (string, 4–112) | downloads (float64, 0–36.8M, nullable) | likes (float64, 0–712, nullable) | library_name (17 classes) | __index_level_0__ (int64, 0–38.5k) | readme (string, 0–186k) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Mjollnir1996/dpr-question_encoder-bert-base-multilingual_mod | 59f15e6f2917c0c0e3eb88436f42b27304a1448c | 2022-06-27T11:10:20.000Z | [
"pytorch",
"dpr",
"feature-extraction",
"transformers",
"license:apache-2.0"
] | feature-extraction | false | Mjollnir1996 | null | Mjollnir1996/dpr-question_encoder-bert-base-multilingual_mod | 1 | null | transformers | 33,100 | ---
license: apache-2.0
---
|
oceanpty/panx-xlmr-base | bccc0098d384315346368fdb89912f835151ff42 | 2022-06-27T13:10:33.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | oceanpty | null | oceanpty/panx-xlmr-base | 1 | null | transformers | 33,101 | Entry not found |
cookpad/mt5-base-indonesia-recipe-query-generation_v3 | e7328ec68eb864e661419c9981c1b9b1f1c1d270 | 2022-06-27T12:17:22.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | cookpad | null | cookpad/mt5-base-indonesia-recipe-query-generation_v3 | 1 | null | transformers | 33,102 | Entry not found |
Rahulrr/opus-mt-en-ro-finetuned-en-to-ro | 744f6b18f3c88efa99dc5ba9eccc3686cadbf5d8 | 2022-06-27T13:37:38.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Rahulrr | null | Rahulrr/opus-mt-en-ro-finetuned-en-to-ro | 1 | null | transformers | 33,103 | Entry not found |
gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v5 | 45ad6f6cf903f31cfb73e178e721dd99230d439f | 2022-06-28T11:49:44.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"gary109/AI_Light_Dance",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | gary109 | null | gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v5 | 1 | null | transformers | 33,104 | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- gary109/AI_Light_Dance
- generated_from_trainer
model-index:
- name: ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v5
This model is a fine-tuned version of [gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v4](https://huggingface.co/gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v4) on the GARY109/AI_LIGHT_DANCE - ONSET-STEPMANIA2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0163
- Wer: 0.6622
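The card itself contains no usage snippet; a minimal sketch of transcribing audio with this checkpoint through the `transformers` ASR pipeline follows (the file path is illustrative, and 16 kHz mono input is assumed, as is usual for wav2vec2 XLSR models):
```python
from transformers import pipeline

# Hypothetical audio file; wav2vec2 XLSR checkpoints expect 16 kHz mono input.
asr = pipeline(
    "automatic-speech-recognition",
    model="gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v5",
)
print(asr("example.wav")["text"])
```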
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8867 | 1.0 | 376 | 1.0382 | 0.6821 |
| 0.8861 | 2.0 | 752 | 1.0260 | 0.6686 |
| 0.8682 | 3.0 | 1128 | 1.0358 | 0.6604 |
| 0.8662 | 4.0 | 1504 | 1.0234 | 0.6665 |
| 0.8463 | 5.0 | 1880 | 1.0333 | 0.6666 |
| 0.8573 | 6.0 | 2256 | 1.0163 | 0.6622 |
| 0.8628 | 7.0 | 2632 | 1.0209 | 0.6551 |
| 0.8493 | 8.0 | 3008 | 1.0525 | 0.6582 |
| 0.8371 | 9.0 | 3384 | 1.0409 | 0.6515 |
| 0.8229 | 10.0 | 3760 | 1.0597 | 0.6523 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
Abdelmageed95/distilgpt2-finetuned-wikitext2 | 026e2598ea2022b34a8a9f2853f718de4b84b8ca | 2022-06-27T22:58:48.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | Abdelmageed95 | null | Abdelmageed95/distilgpt2-finetuned-wikitext2 | 1 | null | transformers | 33,105 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6421
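For reference, a causal language model's evaluation loss is the mean negative log-likelihood per token, so the loss above corresponds to a perplexity of roughly exp(3.6421) ≈ 38.2 on the held-out split; a one-line check:
```python
import math

eval_loss = 3.6421  # evaluation loss reported above
print(math.exp(eval_loss))  # ~38.2 perplexity
```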
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7602 | 1.0 | 2334 | 3.6669 |
| 3.653 | 2.0 | 4668 | 3.6472 |
| 3.6006 | 3.0 | 7002 | 3.6421 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
gary109/ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53-v2 | 7ff4dbe7ae20841ab31ba4b9453ab6ee5c70c481 | 2022-06-28T14:35:39.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | gary109 | null | gary109/ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53-v2 | 1 | null | transformers | 33,106 | Entry not found |
mmdjiji/bert-chinese-idioms | 793f09944ef164b638a17bcdddd218135bd23801 | 2022-06-28T14:12:31.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"license:gpl-3.0",
"autotrain_compatible"
] | fill-mask | false | mmdjiji | null | mmdjiji/bert-chinese-idioms | 1 | null | transformers | 33,107 | ---
license: gpl-3.0
---
For details, see [github:mmdjiji/bert-chinese-idioms](https://github.com/mmdjiji/bert-chinese-idioms). |
Mindstorm314/AI-Camp-JS | d84f2d84fec964e30c53da5982620e5b69912c3a | 2022-06-28T03:02:54.000Z | [
"pytorch",
"gpt2",
"transformers"
] | null | false | Mindstorm314 | null | Mindstorm314/AI-Camp-JS | 1 | null | transformers | 33,108 | Entry not found |
Monisha/opus-mt-en-de-finetuned-en-to-de | a114dc5fc5e05a53c08641f7e08f539b78ad6d43 | 2022-07-02T14:42:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Monisha | null | Monisha/opus-mt-en-de-finetuned-en-to-de | 1 | null | transformers | 33,109 | Entry not found |
gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v6 | 74f0750e11e35899922bf43b493b25b2fc7e6b29 | 2022-06-29T12:06:41.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"gary109/AI_Light_Dance",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | gary109 | null | gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v6 | 1 | null | transformers | 33,110 | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- gary109/AI_Light_Dance
- generated_from_trainer
model-index:
- name: ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v6
This model is a fine-tuned version of [gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v5](https://huggingface.co/gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v5) on the GARY109/AI_LIGHT_DANCE - ONSET-STEPMANIA2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0063
- Wer: 0.6580
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8572 | 1.0 | 376 | 1.0508 | 0.6601 |
| 0.8671 | 2.0 | 752 | 1.0755 | 0.6581 |
| 0.8578 | 3.0 | 1128 | 1.0152 | 0.6787 |
| 0.8552 | 4.0 | 1504 | 1.0537 | 0.6557 |
| 0.8354 | 5.0 | 1880 | 1.0386 | 0.6606 |
| 0.8543 | 6.0 | 2256 | 1.0063 | 0.6580 |
| 0.8556 | 7.0 | 2632 | 1.0487 | 0.6499 |
| 0.8356 | 8.0 | 3008 | 1.0407 | 0.6549 |
| 0.8227 | 9.0 | 3384 | 1.0382 | 0.6506 |
| 0.8148 | 10.0 | 3760 | 1.0440 | 0.6500 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
russellc/wav2vec2-large-xls-r-300m-tr | 418a627bc7de6385c74ad24f1d40780d929ffaa1 | 2022-06-30T11:56:48.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"tr-TR",
"dataset:common_voice, common_voice_6_1_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | russellc | null | russellc/wav2vec2-large-xls-r-300m-tr | 1 | 1 | transformers | 33,111 | ---
license: apache-2.0
tags:
- generated_from_trainer
- hf-asr-leaderboard
datasets:
- common_voice, common_voice_6_1_0
model-index:
- name: wav2vec2-large-xls-r-300m-tr
results: []
language:
- tr-TR
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-tr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2841
- Wer: 0.2904
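The card stops at the metrics; a sketch of explicit inference with this checkpoint (rather than the pipeline helper) is shown below — the audio path is hypothetical and resampling to 16 kHz is assumed:
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "russellc/wav2vec2-large-xls-r-300m-tr"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Hypothetical Turkish audio clip, resampled to the 16 kHz rate the model expects.
speech, _ = librosa.load("ornek.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```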
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 7
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 14
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.0805 | 4.03 | 1000 | 3.0333 | 1.0 |
| 1.5733 | 8.06 | 2000 | 0.5545 | 0.5080 |
| 0.6238 | 12.1 | 3000 | 0.3861 | 0.3977 |
| 0.4535 | 16.13 | 4000 | 0.3253 | 0.3408 |
| 0.3682 | 20.16 | 5000 | 0.3042 | 0.3177 |
| 0.3302 | 24.19 | 6000 | 0.2950 | 0.3015 |
| 0.2985 | 28.23 | 7000 | 0.2841 | 0.2904 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
wandgibaut/opus-mt-en-de-finetuned-en-to-de | 7a8e91d6a71279f3eff5298a38e1ab4e149b8621 | 2022-06-28T14:56:40.000Z | [
"pytorch",
"marian",
"text2text-generation",
"dataset:wmt16",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | wandgibaut | null | wandgibaut/opus-mt-en-de-finetuned-en-to-de | 1 | null | transformers | 33,112 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: opus-mt-en-de-finetuned-en-to-de
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: de-en
metrics:
- name: Bleu
type: bleu
value: 29.4312
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-de-finetuned-en-to-de
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-de](https://huggingface.co/Helsinki-NLP/opus-mt-en-de) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4083
- Bleu: 29.4312
- Gen Len: 24.746
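No usage example is included in the card; a minimal sketch with the `transformers` translation pipeline follows (the input sentence is only an illustration):
```python
from transformers import pipeline

translator = pipeline(
    "translation", model="wandgibaut/opus-mt-en-de-finetuned-en-to-de"
)
print(translator("The weather is nice today.")[0]["translation_text"])
```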
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|
| 1.978 | 1.0 | 568611 | 1.4083 | 29.4312 | 24.746 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
gary109/ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53-5gram-v3 | e87ffebbae0815299d79afacb750a150390d7949 | 2022-06-29T01:22:31.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"gary109/AI_Light_Dance",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | gary109 | null | gary109/ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53-5gram-v3 | 1 | null | transformers | 33,113 | ---
tags:
- automatic-speech-recognition
- gary109/AI_Light_Dance
- generated_from_trainer
model-index:
- name: ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53-5gram-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53-5gram-v3
This model is a fine-tuned version of [gary109/ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53-v2](https://huggingface.co/gary109/ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53-v2) on the GARY109/AI_LIGHT_DANCE - ONSET-SINGING2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5265
- Wer: 0.2256
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2546 | 1.0 | 280 | 0.6004 | 0.2796 |
| 0.2325 | 2.0 | 560 | 0.6337 | 0.2729 |
| 0.2185 | 3.0 | 840 | 0.5546 | 0.2299 |
| 0.1988 | 4.0 | 1120 | 0.5265 | 0.2256 |
| 0.1755 | 5.0 | 1400 | 0.5577 | 0.2212 |
| 0.1474 | 6.0 | 1680 | 0.6353 | 0.2241 |
| 0.1498 | 7.0 | 1960 | 0.5758 | 0.2086 |
| 0.1252 | 8.0 | 2240 | 0.5738 | 0.2052 |
| 0.1174 | 9.0 | 2520 | 0.5994 | 0.2048 |
| 0.1035 | 10.0 | 2800 | 0.5988 | 0.2038 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
Konbai/DialoGPT-small-akagi2 | 5f131a1f8008be6fedcfc6bce7054c1c3a931c44 | 2022-06-28T21:11:28.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Konbai | null | Konbai/DialoGPT-small-akagi2 | 1 | null | transformers | 33,114 | ---
tags:
- conversational
---
# Azur Lane DialoGPT Model |
alanwang8/dummy-model1 | 3f85ecf94a5ec018d8f57e494f4617528de617d7 | 2022-06-28T18:40:20.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | alanwang8 | null | alanwang8/dummy-model1 | 1 | null | transformers | 33,115 | Entry not found |
gexai/marvin-optimized-base | 2e9bd6eb0b49a66c6eefafa69156c1bff97c0c73 | 2022-06-28T23:56:11.000Z | [
"pytorch",
"onnx",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | gexai | null | gexai/marvin-optimized-base | 1 | null | transformers | 33,116 | Entry not found |
prodm93/bert-rp-testmodel | 06cc6b2fe302d9a30b408d8bf94b7b0324906cae | 2022-06-29T05:43:34.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | prodm93 | null | prodm93/bert-rp-testmodel | 1 | null | transformers | 33,117 | Entry not found |
YuanWellspring/wav2vec2-nsc-final_2-google-colab | 63fc26ec7683d2661817e716f95d78123ea73f5c | 2022-06-29T03:09:33.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | YuanWellspring | null | YuanWellspring/wav2vec2-nsc-final_2-google-colab | 1 | null | transformers | 33,118 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-nsc-final_2-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-nsc-final_2-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
ryo0634/bert-base-log_linear-encoder-en-0 | a771d54e15543ddc24cba7db546ffa88d699006c | 2022-06-29T03:41:55.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | ryo0634 | null | ryo0634/bert-base-log_linear-encoder-en-0 | 1 | null | transformers | 33,119 | Entry not found |
ryo0634/bert-base-log_linear-dependency-encoder-en-0 | 76cde8e8ec4aa1c4e5a279f0952c7f4133ac7ca2 | 2022-06-29T03:43:11.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | ryo0634 | null | ryo0634/bert-base-log_linear-dependency-encoder-en-0 | 1 | null | transformers | 33,120 | Entry not found |
Nancyzzz/wav2vec2-base-timit-demo-google-colab | eb85ca757119ec3e09578676f38fff21e74ecea3 | 2022-06-29T11:15:59.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Nancyzzz | null | Nancyzzz/wav2vec2-base-timit-demo-google-colab | 1 | null | transformers | 33,121 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5253
- Wer: 0.3406
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.4884 | 1.0 | 500 | 1.6139 | 1.0293 |
| 0.8373 | 2.01 | 1000 | 0.5286 | 0.5266 |
| 0.4394 | 3.01 | 1500 | 0.4933 | 0.4678 |
| 0.2974 | 4.02 | 2000 | 0.4159 | 0.4268 |
| 0.2268 | 5.02 | 2500 | 0.4288 | 0.4074 |
| 0.1901 | 6.02 | 3000 | 0.4407 | 0.3852 |
| 0.1627 | 7.03 | 3500 | 0.4599 | 0.3849 |
| 0.1397 | 8.03 | 4000 | 0.4330 | 0.3803 |
| 0.1342 | 9.04 | 4500 | 0.4661 | 0.3785 |
| 0.1165 | 10.04 | 5000 | 0.4518 | 0.3745 |
| 0.1 | 11.04 | 5500 | 0.4714 | 0.3899 |
| 0.0881 | 12.05 | 6000 | 0.4985 | 0.3848 |
| 0.0794 | 13.05 | 6500 | 0.5074 | 0.3672 |
| 0.0707 | 14.06 | 7000 | 0.5692 | 0.3681 |
| 0.0669 | 15.06 | 7500 | 0.4722 | 0.3814 |
| 0.0589 | 16.06 | 8000 | 0.5738 | 0.3784 |
| 0.0562 | 17.07 | 8500 | 0.5183 | 0.3696 |
| 0.0578 | 18.07 | 9000 | 0.5473 | 0.3841 |
| 0.0473 | 19.08 | 9500 | 0.4918 | 0.3655 |
| 0.0411 | 20.08 | 10000 | 0.5258 | 0.3517 |
| 0.0419 | 21.08 | 10500 | 0.5256 | 0.3501 |
| 0.0348 | 22.09 | 11000 | 0.5511 | 0.3597 |
| 0.0328 | 23.09 | 11500 | 0.5054 | 0.3560 |
| 0.0314 | 24.1 | 12000 | 0.5327 | 0.3537 |
| 0.0296 | 25.1 | 12500 | 0.5142 | 0.3446 |
| 0.0251 | 26.1 | 13000 | 0.5155 | 0.3411 |
| 0.0249 | 27.11 | 13500 | 0.5344 | 0.3414 |
| 0.0225 | 28.11 | 14000 | 0.5193 | 0.3408 |
| 0.0226 | 29.12 | 14500 | 0.5253 | 0.3406 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
ashutoshyadav4/distilbert-base-uncased-finetuned-squad | 8c0c74f4de74c24696ab553662f40bf87d7aba96 | 2022-06-29T10:36:49.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | ashutoshyadav4 | null | ashutoshyadav4/distilbert-base-uncased-finetuned-squad | 1 | null | transformers | 33,122 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
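The card lists no evaluation numbers or usage; a minimal extractive question-answering sketch with this checkpoint is shown below (question and context are made up for illustration):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="ashutoshyadav4/distilbert-base-uncased-finetuned-squad",
)
result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="distilbert-base-uncased was fine-tuned on the SQuAD dataset for three epochs.",
)
print(result["answer"], result["score"])
```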
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
roshnir/xlmr-finetuned-mlqa-dev-es-zh-hi | 924e2db0bc45dc8d6e22230831b68f17da3fedd6 | 2022-06-29T11:00:37.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | roshnir | null | roshnir/xlmr-finetuned-mlqa-dev-es-zh-hi | 1 | null | transformers | 33,123 | Entry not found |
roshnir/mBert-finetuned-mlqa-dev-es-zh-hi | 4c78e96633364928c52916ed2d939055d3527986 | 2022-06-29T11:39:23.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | roshnir | null | roshnir/mBert-finetuned-mlqa-dev-es-zh-hi | 1 | null | transformers | 33,124 | Entry not found |
ones/wav2vec2-base-timit-demo-google-colab | 7103a7a2785a25554d5ac03f04fc5785edaeb0de | 2022-06-30T20:46:39.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ones | null | ones/wav2vec2-base-timit-demo-google-colab | 1 | null | transformers | 33,125 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5112
- Wer: 0.9988
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5557 | 1.0 | 500 | 1.6786 | 1.0 |
| 0.8407 | 2.01 | 1000 | 0.5356 | 0.9988 |
| 0.4297 | 3.01 | 1500 | 0.4431 | 0.9988 |
| 0.2989 | 4.02 | 2000 | 0.4191 | 0.9988 |
| 0.2338 | 5.02 | 2500 | 0.4251 | 0.9988 |
| 0.1993 | 6.02 | 3000 | 0.4618 | 0.9988 |
| 0.1585 | 7.03 | 3500 | 0.4577 | 0.9988 |
| 0.1386 | 8.03 | 4000 | 0.4099 | 0.9982 |
| 0.1234 | 9.04 | 4500 | 0.4945 | 0.9988 |
| 0.1162 | 10.04 | 5000 | 0.4597 | 0.9988 |
| 0.1008 | 11.04 | 5500 | 0.4563 | 0.9988 |
| 0.0894 | 12.05 | 6000 | 0.5157 | 0.9988 |
| 0.083 | 13.05 | 6500 | 0.5027 | 0.9988 |
| 0.0735 | 14.06 | 7000 | 0.4905 | 0.9994 |
| 0.0686 | 15.06 | 7500 | 0.4552 | 0.9988 |
| 0.0632 | 16.06 | 8000 | 0.5522 | 0.9988 |
| 0.061 | 17.07 | 8500 | 0.4874 | 0.9988 |
| 0.0626 | 18.07 | 9000 | 0.5243 | 0.9988 |
| 0.0475 | 19.08 | 9500 | 0.4798 | 0.9988 |
| 0.0447 | 20.08 | 10000 | 0.5250 | 0.9988 |
| 0.0432 | 21.08 | 10500 | 0.5195 | 0.9988 |
| 0.0358 | 22.09 | 11000 | 0.5008 | 0.9988 |
| 0.0319 | 23.09 | 11500 | 0.5376 | 0.9988 |
| 0.0334 | 24.1 | 12000 | 0.5149 | 0.9988 |
| 0.0269 | 25.1 | 12500 | 0.4911 | 0.9988 |
| 0.0275 | 26.1 | 13000 | 0.4907 | 0.9988 |
| 0.027 | 27.11 | 13500 | 0.4992 | 0.9988 |
| 0.0239 | 28.11 | 14000 | 0.5021 | 0.9988 |
| 0.0233 | 29.12 | 14500 | 0.5112 | 0.9988 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
jimypbr/cifar10_outputs | 0f4baa399fae84642beac569a1490163d5eafa42 | 2022-06-29T14:48:46.000Z | [
"pytorch",
"tensorboard",
"vit",
"dataset:cifar10",
"transformers",
"image-classification",
"vision",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | jimypbr | null | jimypbr/cifar10_outputs | 1 | null | transformers | 33,126 | ---
license: apache-2.0
tags:
- image-classification
- vision
- generated_from_trainer
datasets:
- cifar10
metrics:
- accuracy
model-index:
- name: cifar10_outputs
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: cifar10
type: cifar10
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.991421568627451
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cifar10_outputs
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the cifar10 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0806
- Accuracy: 0.9914
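A minimal inference sketch (not part of the original card; the image path is hypothetical, and the label set is the ten CIFAR-10 classes):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="jimypbr/cifar10_outputs")
# Hypothetical local image; a path, URL, or PIL.Image all work with this pipeline.
for pred in classifier("cat.png"):
    print(pred["label"], round(pred["score"], 4))
```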
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 17
- eval_batch_size: 17
- seed: 1337
- distributed_type: IPU
- gradient_accumulation_steps: 128
- total_train_batch_size: 8704
- total_eval_batch_size: 272
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.25
- num_epochs: 100.0
- training precision: Mixed Precision
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cpu
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
elhamagk/distilbert-base-uncased-finetuned-imdb-accelerate | 64f3d9d83ac0f3e4a4bf2e2d9f2ecf3baccbcc1c | 2022-06-29T15:18:07.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | elhamagk | null | elhamagk/distilbert-base-uncased-finetuned-imdb-accelerate | 1 | null | transformers | 33,127 | Entry not found |
freedomking/prompt-uie-medical-base | 3ebb5ae870713af7595ac1941d21379af2c87f78 | 2022-06-29T16:47:06.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | freedomking | null | freedomking/prompt-uie-medical-base | 1 | null | transformers | 33,128 | ## Introduction
Universal Information Extraction
More details:
https://github.com/PaddlePaddle/PaddleNLP/tree/develop/model_zoo/uie
|
gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v7 | d4547d8e82ba5166c265508a58ba477a214196b2 | 2022-07-03T06:03:56.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"gary109/AI_Light_Dance",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | gary109 | null | gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v7 | 1 | null | transformers | 33,129 | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- gary109/AI_Light_Dance
- generated_from_trainer
model-index:
- name: ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v7
This model is a fine-tuned version of [gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v6](https://huggingface.co/gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v6) on the GARY109/AI_LIGHT_DANCE - ONSET-STEPMANIA2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0424
- Wer: 0.6512
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:------:|:---------------:|:------:|
| 0.9303 | 1.0 | 12031 | 1.1160 | 0.6712 |
| 0.8181 | 2.0 | 24062 | 1.0601 | 0.6608 |
| 0.7861 | 3.0 | 36093 | 1.0478 | 0.6520 |
| 0.767 | 4.0 | 48124 | 1.0617 | 0.6526 |
| 0.797 | 5.0 | 60155 | 1.0424 | 0.6512 |
| 0.834 | 6.0 | 72186 | 1.0519 | 0.6542 |
| 0.7915 | 7.0 | 84217 | 1.0508 | 0.6494 |
| 0.8106 | 8.0 | 96248 | 1.0753 | 0.6449 |
| 0.7512 | 9.0 | 108279 | 1.1223 | 0.6592 |
| 0.777 | 10.0 | 120310 | 1.1201 | 0.6535 |
| 0.7631 | 11.0 | 132341 | 1.0780 | 0.6512 |
| 0.7465 | 12.0 | 144372 | 1.0822 | 0.6499 |
| 0.826 | 13.0 | 156403 | 1.0706 | 0.6445 |
| 0.7552 | 14.0 | 168434 | 1.0862 | 0.6449 |
| 0.8279 | 15.0 | 180465 | 1.1162 | 0.6461 |
| 0.7769 | 16.0 | 192496 | 1.1023 | 0.6420 |
| 0.7918 | 17.0 | 204527 | 1.1085 | 0.6456 |
| 0.6941 | 18.0 | 216558 | 1.1139 | 0.6417 |
| 0.7379 | 19.0 | 228589 | 1.1126 | 0.6410 |
| 0.7467 | 20.0 | 240620 | 1.1102 | 0.6369 |
| 0.8045 | 21.0 | 252651 | 1.1191 | 0.6376 |
| 0.7059 | 22.0 | 264682 | 1.1285 | 0.6381 |
| 0.7008 | 23.0 | 276713 | 1.1328 | 0.6377 |
| 0.7816 | 24.0 | 288744 | 1.1326 | 0.6366 |
| 0.7426 | 25.0 | 300775 | 1.1420 | 0.6362 |
| 0.7226 | 26.0 | 312806 | 1.1326 | 0.6350 |
| 0.665 | 27.0 | 324837 | 1.1419 | 0.6346 |
| 0.7184 | 28.0 | 336868 | 1.1480 | 0.6346 |
| 0.77 | 29.0 | 348899 | 1.1476 | 0.6343 |
| 0.727 | 30.0 | 360930 | 1.1494 | 0.6348 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
jdang/distilbert-base-uncased-finetuned-imdb | 829683aeeb45f90fd74fc18041dcb728ec12847d | 2022-06-30T01:56:51.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | jdang | null | jdang/distilbert-base-uncased-finetuned-imdb | 1 | null | transformers | 33,130 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
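The card gives only the loss; a minimal masked-language-modelling sketch with this checkpoint follows (the example sentence is illustrative, and DistilBERT uses the `[MASK]` token):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="jdang/distilbert-base-uncased-finetuned-imdb")
for pred in fill("This movie was absolutely [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```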
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4897 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
jdang/distilbert-base-uncased-finetuned-imdb-accelerate | e803773eed5dfec854f48abfd5c3b5156fd4277c | 2022-06-30T02:10:46.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | jdang | null | jdang/distilbert-base-uncased-finetuned-imdb-accelerate | 1 | null | transformers | 33,131 | Entry not found |
omunkhuush/dlub-2022-mlm-full | 11bd1018028ff7fa884e03a11148776b463f50df | 2022-06-30T04:07:53.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | omunkhuush | null | omunkhuush/dlub-2022-mlm-full | 1 | null | transformers | 33,132 | Entry not found |
ganzorig/dlub-2022-mlm-full | a4be9fed09c8ecb11f22e92e10d8e08dedde633f | 2022-06-30T05:31:39.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | ganzorig | null | ganzorig/dlub-2022-mlm-full | 1 | null | transformers | 33,133 | Entry not found |
sumitrsch/muril_base_multiconer22_bn | 64f89dd42ef514c9418946a3cf7952616643253c | 2022-07-06T12:33:20.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | token-classification | false | sumitrsch | null | sumitrsch/muril_base_multiconer22_bn | 1 | 2 | transformers | 33,134 | ---
license: afl-3.0
---
Put this model path into the `best_model_path` variable in the first cell of the Colab notebook below to test the SemEval MultiCoNER task on the Bangla track.
https://colab.research.google.com/drive/1P9827acdS7i6eZTi4B0cOms5qLREqvUO |
pannaga/wav2vec2-base-timit-demo-google-colab | a5da1b53b86f412c6e015e1a1708f1641eb4fa5c | 2022-07-20T12:20:01.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | pannaga | null | pannaga/wav2vec2-base-timit-demo-google-colab | 1 | null | transformers | 33,135 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5480
- Wer: 0.3437
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5237 | 1.0 | 500 | 1.7277 | 0.9752 |
| 0.8339 | 2.01 | 1000 | 0.5413 | 0.5316 |
| 0.4277 | 3.01 | 1500 | 0.4732 | 0.4754 |
| 0.2907 | 4.02 | 2000 | 0.4571 | 0.4476 |
| 0.2254 | 5.02 | 2500 | 0.4611 | 0.4105 |
| 0.1911 | 6.02 | 3000 | 0.4448 | 0.4072 |
| 0.1595 | 7.03 | 3500 | 0.4517 | 0.3843 |
| 0.1377 | 8.03 | 4000 | 0.4551 | 0.3881 |
| 0.1197 | 9.04 | 4500 | 0.4853 | 0.3772 |
| 0.1049 | 10.04 | 5000 | 0.4617 | 0.3707 |
| 0.097 | 11.04 | 5500 | 0.4633 | 0.3622 |
| 0.0872 | 12.05 | 6000 | 0.4635 | 0.3690 |
| 0.0797 | 13.05 | 6500 | 0.5196 | 0.3749 |
| 0.0731 | 14.06 | 7000 | 0.5029 | 0.3639 |
| 0.0667 | 15.06 | 7500 | 0.5053 | 0.3614 |
| 0.0618 | 16.06 | 8000 | 0.5627 | 0.3638 |
| 0.0562 | 17.07 | 8500 | 0.5484 | 0.3577 |
| 0.0567 | 18.07 | 9000 | 0.5163 | 0.3560 |
| 0.0452 | 19.08 | 9500 | 0.5012 | 0.3538 |
| 0.044 | 20.08 | 10000 | 0.4931 | 0.3534 |
| 0.0424 | 21.08 | 10500 | 0.5147 | 0.3519 |
| 0.0356 | 22.09 | 11000 | 0.5540 | 0.3521 |
| 0.0322 | 23.09 | 11500 | 0.5565 | 0.3509 |
| 0.0333 | 24.1 | 12000 | 0.5315 | 0.3428 |
| 0.0281 | 25.1 | 12500 | 0.5284 | 0.3425 |
| 0.0261 | 26.1 | 13000 | 0.5101 | 0.3446 |
| 0.0256 | 27.11 | 13500 | 0.5432 | 0.3415 |
| 0.0229 | 28.11 | 14000 | 0.5484 | 0.3446 |
| 0.0212 | 29.12 | 14500 | 0.5480 | 0.3437 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
haesun/xlm-roberta-base-finetuned-panx-de | a8e11ff8aad8d43baf829b1e0396ed33d0bf0c70 | 2022-07-05T00:00:02.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | haesun | null | haesun/xlm-roberta-base-finetuned-panx-de | 1 | null | transformers | 33,136 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8611443210930829
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1405
- F1: 0.8611
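No usage snippet is provided in the card; a minimal named-entity-recognition sketch with this checkpoint is shown below (the German sentence is an illustration only):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="haesun/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel wohnte in Berlin."))
```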
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2542 | 1.0 | 787 | 0.1788 | 0.8083 |
| 0.1307 | 2.0 | 1574 | 0.1371 | 0.8488 |
| 0.0784 | 3.0 | 2361 | 0.1405 | 0.8611 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
huggingtweets/codyko-thenoelmiller | d667802323388ffc528e75a72bec14d83b2ef4b3 | 2022-06-30T17:40:32.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/codyko-thenoelmiller | 1 | null | transformers | 33,137 | ---
language: en
thumbnail: http://www.huggingtweets.com/codyko-thenoelmiller/1656610826736/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1438687954285707265/aEtAZlbY_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1438687880101212170/nNi2oamd_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">codyko & Noel Miller</div>
<div style="text-align: center; font-size: 14px;">@codyko-thenoelmiller</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from codyko & Noel Miller.
| Data | codyko | Noel Miller |
| --- | --- | --- |
| Tweets downloaded | 3184 | 3215 |
| Retweets | 604 | 316 |
| Short tweets | 762 | 712 |
| Tweets kept | 1818 | 2187 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2gyf1npk/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @codyko-thenoelmiller's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/31mulsnt) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/31mulsnt/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/codyko-thenoelmiller')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
tmoodley/rare-puppers | 5ab324c6658247344dff036ea9af34925199aa94 | 2022-06-30T19:11:33.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | tmoodley | null | tmoodley/rare-puppers | 1 | null | transformers | 33,138 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
# rare-puppers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### corgi

#### samoyed

#### shiba inu
 |
luffycodes/t5_base_v52 | 898eac801dcc041c1c1cf35e36a6a23cce0950b7 | 2022-06-30T20:18:43.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | luffycodes | null | luffycodes/t5_base_v52 | 1 | null | transformers | 33,139 | Entry not found |
huggingtweets/enusec-lewisnwatson | fd9fe601e53567f1fc22e6665a79c6e56d971be7 | 2022-06-30T20:44:40.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/enusec-lewisnwatson | 1 | null | transformers | 33,140 | ---
language: en
thumbnail: http://www.huggingtweets.com/enusec-lewisnwatson/1656621875256/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1433787116471869441/tk0vXZJb_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1509825675821301790/FCFan5I-_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Edinburgh Napier University Security Society & Lewis N Watson 🇺🇦</div>
<div style="text-align: center; font-size: 14px;">@enusec-lewisnwatson</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Edinburgh Napier University Security Society & Lewis N Watson 🇺🇦.
| Data | Edinburgh Napier University Security Society | Lewis N Watson 🇺🇦 |
| --- | --- | --- |
| Tweets downloaded | 1716 | 1711 |
| Retweets | 554 | 797 |
| Short tweets | 93 | 211 |
| Tweets kept | 1069 | 703 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/32zvb9ky/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @enusec-lewisnwatson's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2a516nqq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2a516nqq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/enusec-lewisnwatson')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
luffycodes/t5_base_v1 | 18d58ba7e09d602cb0fcc8195de2041fed00fdfe | 2022-06-30T21:14:01.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | luffycodes | null | luffycodes/t5_base_v1 | 1 | null | transformers | 33,141 | Entry not found |
prodm93/bert-rp-sent-testmodel-grp | a07a1762d7406440aef0b5ddd597c410931c90e5 | 2022-07-01T03:40:30.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | prodm93 | null | prodm93/bert-rp-sent-testmodel-grp | 1 | null | transformers | 33,142 | Entry not found |
shimdx/wav2vec2-base-demo-sagemaker | 996e4715cf4665447d7fcc2354b20c1ef177d0f0 | 2022-07-02T01:03:17.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | shimdx | null | shimdx/wav2vec2-base-demo-sagemaker | 1 | null | transformers | 33,143 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-demo-sagemaker
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-demo-sagemaker
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4713
- Wer: 0.3381
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4274 | 4.0 | 500 | 1.2279 | 0.8902 |
| 0.5778 | 8.0 | 1000 | 0.4838 | 0.4488 |
| 0.2244 | 12.0 | 1500 | 0.4813 | 0.3793 |
| 0.1299 | 16.0 | 2000 | 0.4878 | 0.3714 |
| 0.0871 | 20.0 | 2500 | 0.4796 | 0.3539 |
| 0.0635 | 24.0 | 3000 | 0.4554 | 0.3427 |
| 0.0495 | 28.0 | 3500 | 0.4713 | 0.3381 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0
- Datasets 1.14.0
- Tokenizers 0.10.3
|
scaccomatto/autotrain-120-0-1067937173 | 2c2144fd9b4cc35efb5e8b72e7724c69c0ec9698 | 2022-07-01T09:09:50.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"en",
"dataset:scaccomatto/autotrain-data-120-0",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | scaccomatto | null | scaccomatto/autotrain-120-0-1067937173 | 1 | null | transformers | 33,144 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- scaccomatto/autotrain-data-120-0
co2_eq_emissions: 0.08625442844190523
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 1067937173
- CO2 Emissions (in grams): 0.08625442844190523
## Validation Metrics
- Loss: 0.502437174320221
- Rouge1: 83.7457
- Rouge2: 81.1714
- RougeL: 83.2649
- RougeLsum: 83.3018
- Gen Len: 78.7059
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/scaccomatto/autotrain-120-0-1067937173
``` |
huggingtweets/tacticalmaid-the_ironsheik | 41e8eae20c89e8b4d7efd6adf10de5a410a0dc11 | 2022-07-01T09:41:33.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/tacticalmaid-the_ironsheik | 1 | null | transformers | 33,145 | ---
language: en
thumbnail: http://www.huggingtweets.com/tacticalmaid-the_ironsheik/1656668488177/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1320863459953750016/NlmHwu3b_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1498996796093509632/Z7VwFzOJ_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">The Iron Sheik & Maid POLadin 🎪 💙💛</div>
<div style="text-align: center; font-size: 14px;">@tacticalmaid-the_ironsheik</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from The Iron Sheik & Maid POLadin 🎪 💙💛.
| Data | The Iron Sheik | Maid POLadin 🎪 💙💛 |
| --- | --- | --- |
| Tweets downloaded | 3249 | 3225 |
| Retweets | 287 | 2083 |
| Short tweets | 253 | 291 |
| Tweets kept | 2709 | 851 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/27tu2deb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tacticalmaid-the_ironsheik's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/34aavvcw) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/34aavvcw/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/tacticalmaid-the_ironsheik')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/the_ironsheik | 991cd3e685f15698e9be3c64e48c09ed88a7fbdc | 2022-07-01T10:13:34.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/the_ironsheik | 1 | null | transformers | 33,146 | ---
language: en
thumbnail: http://www.huggingtweets.com/the_ironsheik/1656670410014/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1320863459953750016/NlmHwu3b_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">The Iron Sheik</div>
<div style="text-align: center; font-size: 14px;">@the_ironsheik</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from The Iron Sheik.
| Data | The Iron Sheik |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 287 |
| Short tweets | 253 |
| Tweets kept | 2709 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2ti6ikrg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @the_ironsheik's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2segcek8) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2segcek8/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/the_ironsheik')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
raedinkhaled/deit-base-mri | 2c21709e3e09a34012bcd60c43f09c67a83b9a89 | 2022-07-02T00:09:31.000Z | [
"pytorch",
"tensorboard",
"deit",
"image-classification",
"dataset:imagefolder",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | raedinkhaled | null | raedinkhaled/deit-base-mri | 1 | null | transformers | 33,147 | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: deit-base-mri
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: mriDataSet
type: imagefolder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9900709219858156
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deit-base-mri
This model is a fine-tuned version of [facebook/deit-base-distilled-patch16-224](https://huggingface.co/facebook/deit-base-distilled-patch16-224) on the mriDataSet dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0657
- Accuracy: 0.9901
## Model description
More information needed
## Intended uses & limitations
More information needed
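As a minimal illustration of use (the image path is a placeholder; any common image format readable by PIL should work):
```python
from transformers import pipeline

# DeiT classifier fine-tuned on the mriDataSet images.
classifier = pipeline("image-classification", model="raedinkhaled/deit-base-mri")
print(classifier("path/to/mri_scan.png"))
```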
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0107 | 0.8 | 500 | 0.0782 | 0.9887 |
| 0.0065 | 1.6 | 1000 | 0.0657 | 0.9901 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
gaunernst/bert-L2-H512-uncased | fad174cf14c2b881a0942ea1486ab4b79f5fab58 | 2022-07-02T08:11:37.000Z | [
"pytorch",
"bert",
"transformers",
"license:apache-2.0"
] | null | false | gaunernst | null | gaunernst/bert-L2-H512-uncased | 1 | null | transformers | 33,148 | ---
license: apache-2.0
---
|
gaunernst/bert-L2-H768-uncased | b103adc60de61482cb605294fae302c846e54cbb | 2022-07-02T08:13:39.000Z | [
"pytorch",
"bert",
"transformers",
"license:apache-2.0"
] | null | false | gaunernst | null | gaunernst/bert-L2-H768-uncased | 1 | null | transformers | 33,149 | ---
license: apache-2.0
---
|
gaunernst/bert-L4-H128-uncased | 01db5bd3387202329294939a31a9ff61d766de46 | 2022-07-02T08:16:46.000Z | [
"pytorch",
"bert",
"transformers",
"license:apache-2.0"
] | null | false | gaunernst | null | gaunernst/bert-L4-H128-uncased | 1 | null | transformers | 33,150 | ---
license: apache-2.0
---
|
gaunernst/bert-L4-H768-uncased | 357fd40f63591226e74f2ba8c8be25aa01445ad6 | 2022-07-02T08:17:33.000Z | [
"pytorch",
"bert",
"transformers",
"license:apache-2.0"
] | null | false | gaunernst | null | gaunernst/bert-L4-H768-uncased | 1 | null | transformers | 33,151 | ---
license: apache-2.0
---
|
gaunernst/bert-L6-H256-uncased | e8311a3dd339d61879567c16516bd4d9329b3e3b | 2022-07-02T08:22:27.000Z | [
"pytorch",
"bert",
"transformers",
"license:apache-2.0"
] | null | false | gaunernst | null | gaunernst/bert-L6-H256-uncased | 1 | null | transformers | 33,152 | ---
license: apache-2.0
---
|
gaunernst/bert-L6-H512-uncased | c7bdae9c0700380b8a3681c970c3e94911f58f8f | 2022-07-02T08:23:32.000Z | [
"pytorch",
"bert",
"transformers",
"license:apache-2.0"
] | null | false | gaunernst | null | gaunernst/bert-L6-H512-uncased | 1 | null | transformers | 33,153 | ---
license: apache-2.0
---
|
gaunernst/bert-L8-H128-uncased | 887356755faf6b547e9bae46a5e77419900faddd | 2022-07-02T08:32:44.000Z | [
"pytorch",
"bert",
"transformers",
"license:apache-2.0"
] | null | false | gaunernst | null | gaunernst/bert-L8-H128-uncased | 1 | null | transformers | 33,154 | ---
license: apache-2.0
---
|
gaunernst/bert-L8-H256-uncased | a6a5c5a61ebf3ff689adf5a01d25718953563683 | 2022-07-02T08:33:36.000Z | [
"pytorch",
"bert",
"transformers",
"license:apache-2.0"
] | null | false | gaunernst | null | gaunernst/bert-L8-H256-uncased | 1 | null | transformers | 33,155 | ---
license: apache-2.0
---
|
gaunernst/bert-L8-H768-uncased | ea8f515ca31bb6fd0409b04993caf1b6c67b2b96 | 2022-07-02T08:35:18.000Z | [
"pytorch",
"bert",
"transformers",
"license:apache-2.0"
] | null | false | gaunernst | null | gaunernst/bert-L8-H768-uncased | 1 | null | transformers | 33,156 | ---
license: apache-2.0
---
|
gaunernst/bert-L10-H512-uncased | fcfdfa712878e8451d3c71e89a28dc0b7e0f7d12 | 2022-07-02T08:44:01.000Z | [
"pytorch",
"bert",
"transformers",
"license:apache-2.0"
] | null | false | gaunernst | null | gaunernst/bert-L10-H512-uncased | 1 | null | transformers | 33,157 | ---
license: apache-2.0
---
|
gaunernst/bert-L12-H128-uncased | af7b2f8bb8e44b080da89b062736b5ba8e3c8530 | 2022-07-02T08:53:24.000Z | [
"pytorch",
"bert",
"transformers",
"license:apache-2.0"
] | null | false | gaunernst | null | gaunernst/bert-L12-H128-uncased | 1 | null | transformers | 33,158 | ---
license: apache-2.0
---
|
solve/wav2vec2-base-timit-demo-sol | 8001864be79f556340f81266e50bae8c76b12d0e | 2022-07-16T19:27:06.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | solve | null | solve/wav2vec2-base-timit-demo-sol | 1 | null | transformers | 33,159 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-sol
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-sol
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3922
- Wer: 0.2862
## Model description
More information needed
## Intended uses & limitations
More information needed
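As a minimal illustration of use (the audio path is a placeholder; 16 kHz mono speech is assumed, matching the wav2vec2 base checkpoint):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="solve/wav2vec2-base-timit-demo-sol")
print(asr("path/to/speech.wav")["text"])
```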
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.6222 | 6.85 | 500 | 1.5843 | 0.9627 |
| 0.509 | 13.7 | 1000 | 0.4149 | 0.3417 |
| 0.1221 | 20.55 | 1500 | 0.3692 | 0.2992 |
| 0.0618 | 27.4 | 2000 | 0.3922 | 0.2862 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.12.1
|
zoha/wav2vec2-xlsr-persian-50p | e780f1e94a4b181fc88fef263281996c491cd60f | 2022-07-03T01:24:54.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | zoha | null | zoha/wav2vec2-xlsr-persian-50p | 1 | null | transformers | 33,160 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-xlsr-persian-50p
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-persian-50p
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6846
- Wer: 0.4339
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
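These settings map roughly onto `transformers.TrainingArguments` as sketched below; the exact training script is not part of this card, so `output_dir` and the surrounding `Trainer` setup are placeholders:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-xlsr-persian-50p",
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # effective train batch size of 16
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=30,
    fp16=True,                       # "Native AMP" mixed precision
    seed=42,                         # Adam betas/epsilon are the library defaults listed above
)
```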
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.05 | 250 | 3.2104 | 1.0 |
| 3.2437 | 2.11 | 500 | 2.9131 | 1.0 |
| 3.2437 | 3.16 | 750 | 1.0335 | 0.7303 |
| 1.4382 | 4.22 | 1000 | 0.8335 | 0.6155 |
| 1.4382 | 5.27 | 1250 | 0.7640 | 0.5904 |
| 0.6923 | 6.33 | 1500 | 0.6923 | 0.5468 |
| 0.6923 | 7.38 | 1750 | 0.6627 | 0.5238 |
| 0.5137 | 8.44 | 2000 | 0.6606 | 0.5112 |
| 0.5137 | 9.49 | 2250 | 0.6600 | 0.5125 |
| 0.4258 | 10.55 | 2500 | 0.6337 | 0.4939 |
| 0.4258 | 11.6 | 2750 | 0.6454 | 0.4851 |
| 0.362 | 12.66 | 3000 | 0.6481 | 0.4793 |
| 0.362 | 13.71 | 3250 | 0.6487 | 0.4801 |
| 0.3179 | 14.77 | 3500 | 0.6602 | 0.4668 |
| 0.3179 | 15.82 | 3750 | 0.6757 | 0.4683 |
| 0.2861 | 16.88 | 4000 | 0.6544 | 0.4591 |
| 0.2861 | 17.93 | 4250 | 0.6659 | 0.4634 |
| 0.2529 | 18.99 | 4500 | 0.6311 | 0.4556 |
| 0.2529 | 20.04 | 4750 | 0.6574 | 0.4525 |
| 0.235 | 21.1 | 5000 | 0.7019 | 0.4462 |
| 0.235 | 22.15 | 5250 | 0.6783 | 0.4426 |
| 0.2203 | 23.21 | 5500 | 0.6789 | 0.4361 |
| 0.2203 | 24.26 | 5750 | 0.6779 | 0.4336 |
| 0.2014 | 25.32 | 6000 | 0.6805 | 0.4406 |
| 0.2014 | 26.37 | 6250 | 0.6918 | 0.4407 |
| 0.1957 | 27.43 | 6500 | 0.6919 | 0.4360 |
| 0.1957 | 28.48 | 6750 | 0.6795 | 0.4332 |
| 0.1837 | 29.53 | 7000 | 0.6846 | 0.4339 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
tner/roberta-large-tweetner-selflabel2021 | 2c8f2b236c29ffe1f580b49d17c07775414df317 | 2022-07-02T19:14:44.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/roberta-large-tweetner-selflabel2021 | 1 | null | transformers | 33,161 | Entry not found |
gciaffoni/wav2vec2-large-xls-r-300m-it-colab6-with-LM-Ref | 164b607fdd02c99c0f6cf12b30dcca870c9fb1a7 | 2022-07-03T01:31:23.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | gciaffoni | null | gciaffoni/wav2vec2-large-xls-r-300m-it-colab6-with-LM-Ref | 1 | null | transformers | 33,162 | ---
license: apache-2.0
---
|
pablocosta/bert-tweet-br-large | 54e2a7d886bed1528158d9583add8b664454a688 | 2022-07-03T13:40:01.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"license:gpl-3.0",
"autotrain_compatible"
] | fill-mask | false | pablocosta | null | pablocosta/bert-tweet-br-large | 1 | 1 | transformers | 33,163 | ---
license: gpl-3.0
---
|
tner/roberta-base-tweetner-2020-2021-continuous | 7bce2519554c53bf04e50f4a222eb8c578020a2e | 2022-07-11T22:28:02.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/roberta-base-tweetner-2020-2021-continuous | 1 | null | transformers | 33,164 | Entry not found |
ryo0634/luke-base-full-20181220 | a684b74b35df93055cdc2c5351d35929d1d52f32 | 2022-07-03T16:17:32.000Z | [
"pytorch",
"luke",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | ryo0634 | null | ryo0634/luke-base-full-20181220 | 1 | null | transformers | 33,165 | Entry not found |
BBarbarestani/RoBERTa_HateXplain_Target_Span_Detection_UQS_Threshold_50 | 4d6416fb420691d1374222324f8eecc6304766de | 2022-07-03T17:28:03.000Z | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | BBarbarestani | null | BBarbarestani/RoBERTa_HateXplain_Target_Span_Detection_UQS_Threshold_50 | 1 | null | transformers | 33,166 | Entry not found |
BBarbarestani/RoBERTa_HateXplain_Target_Span_Detection_UQS_Threshold_60 | 1a971ce0fbf60f5a0065c15493abb65dcabb9514 | 2022-07-03T18:58:43.000Z | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | BBarbarestani | null | BBarbarestani/RoBERTa_HateXplain_Target_Span_Detection_UQS_Threshold_60 | 1 | null | transformers | 33,167 | Entry not found |
xliu128/xlm-roberta-base-finetuned-panx-de | b4fdfeb5942fa395fce58ff164a55c7e1df31ca0 | 2022-07-03T19:50:43.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | xliu128 | null | xliu128/xlm-roberta-base-finetuned-panx-de | 1 | null | transformers | 33,168 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8627004891366169
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1363
- F1: 0.8627
## Model description
More information needed
## Intended uses & limitations
More information needed
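As a minimal illustration of use (the German sentence is a placeholder):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="xliu128/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```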
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2539 | 1.0 | 525 | 0.1697 | 0.8179 |
| 0.1317 | 2.0 | 1050 | 0.1327 | 0.8516 |
| 0.0819 | 3.0 | 1575 | 0.1363 | 0.8627 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
josh-oo/german-easy-backtranslation | 06ec725b29ab67bd1215e52918c6e9b8888fc27f | 2022-07-03T20:09:05.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | josh-oo | null | josh-oo/german-easy-backtranslation | 1 | null | transformers | 33,169 | Entry not found |
markrogersjr/codeparrot-ds | 46085cea49a7f1cb98e797b7c181af8366823985 | 2022-07-03T21:58:46.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | markrogersjr | null | markrogersjr/codeparrot-ds | 1 | null | transformers | 33,170 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
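Assuming the checkpoint follows the usual CodeParrot recipe (GPT-2 trained on Python source), a minimal generation sketch (prompt and generation settings are placeholders):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="markrogersjr/codeparrot-ds")
print(generator("def fibonacci(n):", max_new_tokens=40)[0]["generated_text"])
```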
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
seoyoung/bart-base-samsum | 792780accf0f026f4446fcdc696da9e45f663bef | 2022-07-03T23:42:36.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | seoyoung | null | seoyoung/bart-base-samsum | 1 | null | transformers | 33,171 | Entry not found |
seoyoung/bart_r3f_sample | d420f03fced7217e2d13654a76b45e0406a52d8b | 2022-07-04T00:15:01.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | seoyoung | null | seoyoung/bart_r3f_sample | 1 | null | transformers | 33,172 | Entry not found |
seoyoung/BART_BaseModel2 | 80ad5bad5a28e0cfc252952df0f9f4fb79ecdccf | 2022-07-04T00:44:43.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | seoyoung | null | seoyoung/BART_BaseModel2 | 1 | null | transformers | 33,173 | Entry not found |
yslee/wav2vec2-xlsr-libritts-notebook | aebb36ab4b645f074cedf3928df7b188c7324114 | 2022-07-04T05:23:56.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | yslee | null | yslee/wav2vec2-xlsr-libritts-notebook | 1 | null | transformers | 33,174 | Entry not found |
huggingtweets/mattysino | 603fc31f2aa175e7d70abc236924482292fe470e | 2022-07-04T00:53:14.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/mattysino | 1 | null | transformers | 33,175 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1542286826819534849/KuQaXl___400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Matthew Graham</div>
<div style="text-align: center; font-size: 14px;">@mattysino</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Matthew Graham.
| Data | Matthew Graham |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 155 |
| Short tweets | 980 |
| Tweets kept | 2115 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1bb84l50/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mattysino's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3nj8ejqx) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3nj8ejqx/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mattysino')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
haesun/xlm-roberta-base-finetuned-panx-de-fr | c6c118b0016200a443b0d08b1e238756eda5f069 | 2022-07-05T00:26:38.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | haesun | null | haesun/xlm-roberta-base-finetuned-panx-de-fr | 1 | null | transformers | 33,176 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1724
- F1: 0.8624
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2837 | 1.0 | 1073 | 0.1858 | 0.8229 |
| 0.1446 | 2.0 | 2146 | 0.1651 | 0.8467 |
| 0.0917 | 3.0 | 3219 | 0.1724 | 0.8624 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
haesun/xlm-roberta-base-finetuned-panx-fr | 61f4c060c1244b852d8a99bb1251d257a97d7df0 | 2022-07-05T00:43:44.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | haesun | null | haesun/xlm-roberta-base-finetuned-panx-fr | 1 | null | transformers | 33,177 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.9324554986588638
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1031
- F1: 0.9325
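An entity-level F1 of this kind is typically computed with `seqeval` over IOB-tagged predictions; the exact metric implementation is not stated in this card, and the label sequences below are placeholders:
```python
import evaluate

seqeval = evaluate.load("seqeval")
predictions = [["B-PER", "I-PER", "O", "B-ORG"]]
references = [["B-PER", "I-PER", "O", "B-ORG"]]
results = seqeval.compute(predictions=predictions, references=references)
print(results["overall_f1"])
```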
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5618 | 1.0 | 287 | 0.2482 | 0.8121 |
| 0.2582 | 2.0 | 574 | 0.1368 | 0.9068 |
| 0.1653 | 3.0 | 861 | 0.1031 | 0.9325 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Renukswamy/roberta-base-squad2-finetuned-squad | 756b21930541369d45ff5c301b11af9314268b07 | 2022-07-09T14:24:58.000Z | [
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"transformers",
"generated_from_trainer",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | Renukswamy | null | Renukswamy/roberta-base-squad2-finetuned-squad | 1 | null | transformers | 33,178 | ---
license: cc-by-4.0
tags:
- generated_from_trainer
model-index:
- name: roberta-base-squad2-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-squad2-finetuned-squad
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4446
## Model description
More information needed
## Intended uses & limitations
More information needed
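As a minimal illustration of use (the question and context are placeholders):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Renukswamy/roberta-base-squad2-finetuned-squad")
print(qa(question="Who wrote the report?", context="The report was written by the audit team in 2021."))
```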
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.2691 | 1.0 | 6795 | 0.2947 |
| 0.1761 | 2.0 | 13590 | 0.3582 |
| 0.0953 | 3.0 | 20385 | 0.4446 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
camilag/bert-finetuned-squad-accelerate-3 | 14c0e928d9f84926e119132d02d9fc6f7bee2430 | 2022-07-09T18:05:07.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | camilag | null | camilag/bert-finetuned-squad-accelerate-3 | 1 | null | transformers | 33,179 | Entry not found |
tner/roberta-base-tweetner-2020-2021-concat | 94e344e68d3db9f1b2a577e4713485498d1b2959 | 2022-07-11T22:36:13.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/roberta-base-tweetner-2020-2021-concat | 1 | null | transformers | 33,180 | Entry not found |
Siyong/MT | 74c738e82895374497361078a4e95023e4520b93 | 2022-07-14T15:59:34.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Siyong | null | Siyong/MT | 1 | null | transformers | 33,181 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec-base-Millad_TIMIT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec-base-Millad_TIMIT
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3772
- Wer: 0.6859
- Cer: 0.3217
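Word and character error rates like those above can be computed with the `evaluate` library (the original run may have used `datasets.load_metric` instead); the transcripts below are placeholders:
```python
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")
references = ["she had your dark suit in greasy wash water all year"]
predictions = ["she had your dark suit and greasy wash water all year"]
print(wer_metric.compute(references=references, predictions=predictions))
print(cer_metric.compute(references=references, predictions=predictions))
```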
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5000
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| No log | 2.36 | 2000 | 2.6233 | 1.0130 | 0.6241 |
| No log | 4.73 | 4000 | 2.2206 | 0.9535 | 0.5032 |
| No log | 7.09 | 6000 | 2.3036 | 0.9368 | 0.5063 |
| 1.235 | 9.46 | 8000 | 1.9932 | 0.9275 | 0.5032 |
| 1.235 | 11.82 | 10000 | 2.0207 | 0.8922 | 0.4498 |
| 1.235 | 14.18 | 12000 | 1.6171 | 0.7993 | 0.3976 |
| 1.235 | 16.55 | 14000 | 1.6729 | 0.8309 | 0.4209 |
| 0.2779 | 18.91 | 16000 | 1.7043 | 0.8141 | 0.4340 |
| 0.2779 | 21.28 | 18000 | 1.7426 | 0.7658 | 0.3960 |
| 0.2779 | 23.64 | 20000 | 1.5230 | 0.7361 | 0.3830 |
| 0.2779 | 26.0 | 22000 | 1.4286 | 0.7658 | 0.3794 |
| 0.1929 | 28.37 | 24000 | 1.4450 | 0.7379 | 0.3644 |
| 0.1929 | 30.73 | 26000 | 1.5922 | 0.7491 | 0.3826 |
| 0.1929 | 33.1 | 28000 | 1.4443 | 0.7454 | 0.3617 |
| 0.1929 | 35.46 | 30000 | 1.5450 | 0.7268 | 0.3621 |
| 0.1394 | 37.83 | 32000 | 1.9268 | 0.7491 | 0.3763 |
| 0.1394 | 40.19 | 34000 | 1.7094 | 0.7342 | 0.3783 |
| 0.1394 | 42.55 | 36000 | 1.4024 | 0.7082 | 0.3494 |
| 0.1394 | 44.92 | 38000 | 1.4467 | 0.6840 | 0.3395 |
| 0.104 | 47.28 | 40000 | 1.4145 | 0.6933 | 0.3407 |
| 0.104 | 49.65 | 42000 | 1.3901 | 0.6970 | 0.3403 |
| 0.104 | 52.01 | 44000 | 1.3589 | 0.6636 | 0.3348 |
| 0.104 | 54.37 | 46000 | 1.3716 | 0.6952 | 0.3340 |
| 0.0781 | 56.74 | 48000 | 1.4025 | 0.6896 | 0.3312 |
| 0.0781 | 59.1 | 50000 | 1.3772 | 0.6859 | 0.3217 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
Team-PIXEL/pixel-base-finetuned-parsing-ud-hindi-hdtb | 46ab384e6585d9c19025514c4cc508a6f15698ec | 2022-07-13T15:08:58.000Z | [
"pytorch",
"pixel",
"transformers"
] | null | false | Team-PIXEL | null | Team-PIXEL/pixel-base-finetuned-parsing-ud-hindi-hdtb | 1 | null | transformers | 33,182 | Entry not found |
Team-PIXEL/pixel-base-finetuned-parsing-ud-tamil-ttb | 11af9376ea96e2c6802f7b3a6636c76149f1a631 | 2022-07-13T15:31:20.000Z | [
"pytorch",
"pixel",
"transformers"
] | null | false | Team-PIXEL | null | Team-PIXEL/pixel-base-finetuned-parsing-ud-tamil-ttb | 1 | null | transformers | 33,183 | Entry not found |
Team-PIXEL/pixel-base-finetuned-parsing-ud-vietnamese-vtb | bb33418ad0fcab79070039021427c08b5173f690 | 2022-07-13T15:38:52.000Z | [
"pytorch",
"pixel",
"transformers"
] | null | false | Team-PIXEL | null | Team-PIXEL/pixel-base-finetuned-parsing-ud-vietnamese-vtb | 1 | null | transformers | 33,184 | Entry not found |
Siyong/MC | 096663fd9e3e802931936294fb74ce42dede500c | 2022-07-14T10:48:35.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Siyong | null | Siyong/MC | 1 | null | transformers | 33,185 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec-base-All
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec-base-All
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0545
- Wer: 0.8861
- Cer: 0.5014
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 120
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:------:|:-----:|:---------------:|:------:|:------:|
| No log | 3.33 | 500 | 4.0654 | 1.0 | 0.9823 |
| No log | 6.67 | 1000 | 3.4532 | 1.0 | 0.9823 |
| No log | 10.0 | 1500 | 3.0707 | 0.9992 | 0.9781 |
| No log | 13.33 | 2000 | 2.7335 | 1.0017 | 0.9027 |
| No log | 16.67 | 2500 | 2.5896 | 1.0690 | 0.7302 |
| No log | 20.0 | 3000 | 2.3315 | 1.0690 | 0.6677 |
| No log | 23.33 | 3500 | 2.2217 | 1.0150 | 0.5966 |
| No log | 26.67 | 4000 | 2.3802 | 1.0549 | 0.5948 |
| No log | 30.0 | 4500 | 2.2208 | 0.9975 | 0.5681 |
| 2.4224 | 33.33 | 5000 | 2.2687 | 0.9800 | 0.5537 |
| 2.4224 | 36.67 | 5500 | 2.3169 | 0.9476 | 0.5493 |
| 2.4224 | 40.0 | 6000 | 2.5196 | 0.9900 | 0.5509 |
| 2.4224 | 43.33 | 6500 | 2.4816 | 0.9501 | 0.5272 |
| 2.4224 | 46.67 | 7000 | 2.4894 | 0.9485 | 0.5276 |
| 2.4224 | 50.0 | 7500 | 2.4555 | 0.9418 | 0.5305 |
| 2.4224 | 53.33 | 8000 | 2.7326 | 0.9559 | 0.5255 |
| 2.4224 | 56.67 | 8500 | 2.5514 | 0.9227 | 0.5209 |
| 2.4224 | 60.0 | 9000 | 2.9135 | 0.9717 | 0.5455 |
| 2.4224 | 63.33 | 9500 | 3.0465 | 0.8346 | 0.5002 |
| 0.8569 | 66.67 | 10000 | 2.8177 | 0.9302 | 0.5216 |
| 0.8569 | 70.0 | 10500 | 2.9908 | 0.9310 | 0.5128 |
| 0.8569 | 73.33 | 11000 | 3.1752 | 0.9235 | 0.5284 |
| 0.8569 | 76.67 | 11500 | 2.7412 | 0.8886 | 0.5 |
| 0.8569 | 80.0 | 12000 | 2.7362 | 0.9127 | 0.5040 |
| 0.8569 | 83.33 | 12500 | 2.9636 | 0.9152 | 0.5093 |
| 0.8569 | 86.67 | 13000 | 3.0139 | 0.9011 | 0.5097 |
| 0.8569 | 90.0 | 13500 | 2.8325 | 0.8853 | 0.5032 |
| 0.8569 | 93.33 | 14000 | 3.0383 | 0.8845 | 0.5056 |
| 0.8569 | 96.67 | 14500 | 2.7931 | 0.8795 | 0.4965 |
| 0.3881 | 100.0 | 15000 | 2.8972 | 0.8928 | 0.5012 |
| 0.3881 | 103.33 | 15500 | 2.7780 | 0.8736 | 0.4947 |
| 0.3881 | 106.67 | 16000 | 3.1081 | 0.9036 | 0.5109 |
| 0.3881 | 110.0 | 16500 | 3.0078 | 0.8928 | 0.5032 |
| 0.3881 | 113.33 | 17000 | 3.0245 | 0.8886 | 0.5009 |
| 0.3881 | 116.67 | 17500 | 3.0739 | 0.8928 | 0.5065 |
| 0.3881 | 120.0 | 18000 | 3.0545 | 0.8861 | 0.5014 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
affahrizain/xlm-roberta-base-finetuned-panx-de | 0b97aa9bdad3cb40ea7114e3c58995ea549e0e4d | 2022-07-16T06:56:45.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | affahrizain | null | affahrizain/xlm-roberta-base-finetuned-panx-de | 1 | null | transformers | 33,186 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8648740833380706
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1365
- F1: 0.8649
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2553 | 1.0 | 525 | 0.1575 | 0.8279 |
| 0.1284 | 2.0 | 1050 | 0.1386 | 0.8463 |
| 0.0813 | 3.0 | 1575 | 0.1365 | 0.8649 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
PGT/orig-nystromformer-s-artificial-balanced-max500-490000-0 | 75cf330c50171411a5655327ff55fe52cc1d2dbc | 2022-07-15T18:30:25.000Z | [
"pytorch",
"graph_nystromformer",
"text-classification",
"transformers"
] | text-classification | false | PGT | null | PGT/orig-nystromformer-s-artificial-balanced-max500-490000-0 | 1 | null | transformers | 33,187 | Entry not found |
kotter/bert-l18-2207-grad1 | e166cd84a6515789968a53ee00826722864d18b5 | 2022-07-29T16:20:57.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | kotter | null | kotter/bert-l18-2207-grad1 | 1 | null | transformers | 33,188 | Entry not found |
kotter/bert-base-2207-nogroup | 1feb14c2803b83fe6c5c5aeefcd3831f8362da5e | 2022-07-26T17:23:31.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | kotter | null | kotter/bert-base-2207-nogroup | 1 | null | transformers | 33,189 | Entry not found |
donggyukimc/retriever-220626-ict-mono | 9b28bdcc2f967679f7a6c1d4396140e44c57ce3c | 2022-07-20T11:52:05.000Z | [
"pytorch",
"tensorboard",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | donggyukimc | null | donggyukimc/retriever-220626-ict-mono | 1 | null | transformers | 33,190 | Entry not found |
mtreviso/ct5-small-en-wiki-l2r | eb6dda1c6431ebc3c67a8518096ceca9eef40213 | 2022-07-25T13:22:55.000Z | [
"pytorch",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"en",
"dataset:wikipedia",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | mtreviso | null | mtreviso/ct5-small-en-wiki-l2r | 1 | null | transformers | 33,191 | ---
license: afl-3.0
language: en
tags:
- t5
datasets:
- wikipedia
---
# cT5-small left-to-right
GitHub: https://github.com/mtreviso/chunked-t5
This is a variant of [cT5](https://huggingface.co/mtreviso/ct5-small-en-wiki) that was trained with a left-to-right autoregressive decoding mask. As a consequence, it does not support parallel decoding, but it still predicts the end-of-chunk token `</c>` at the end of each chunk. |
shengnan/visualize-v2-pre10w-preseed1 | 8833c4610a655641ad98ef7d85e53ca6515e8578 | 2022-07-18T02:55:57.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | shengnan | null | shengnan/visualize-v2-pre10w-preseed1 | 1 | null | transformers | 33,192 | Entry not found |
PGT/orig-graphnystromformer-artificial-balanced-max500-105000-0 | d4d6611e5e389ccef2ba05d1e020561154588d46 | 2022-07-18T11:05:15.000Z | [
"pytorch",
"graph_nystromformer",
"text-classification",
"transformers"
] | text-classification | false | PGT | null | PGT/orig-graphnystromformer-artificial-balanced-max500-105000-0 | 1 | null | transformers | 33,193 | Entry not found |
PGT/orig-nystromformer-l-artificial-balanced-max500-105000-0 | f38e04317ad9877c20fa6df7893a4e55a205a82f | 2022-07-18T21:30:41.000Z | [
"pytorch",
"graph_nystromformer",
"text-classification",
"transformers"
] | text-classification | false | PGT | null | PGT/orig-nystromformer-l-artificial-balanced-max500-105000-0 | 1 | null | transformers | 33,194 | Entry not found |
f00d/Multilingual-MiniLM-L12-H384-MLM-finetuned-wikipedia_bn_custom | 7b2099fce10f9106f65339aacefa9e2d345746b0 | 2022-07-21T12:28:37.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | f00d | null | f00d/Multilingual-MiniLM-L12-H384-MLM-finetuned-wikipedia_bn_custom | 1 | null | transformers | 33,195 | Entry not found |
maesneako/ES_corlec | ff69ed6bb499332c761ee9e6ad69454c6cb2f4eb | 2022-07-28T11:10:09.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | maesneako | null | maesneako/ES_corlec | 1 | null | transformers | 33,196 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: ES_corlec
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ES_corlec
This model is a fine-tuned version of [DeepESP/gpt2-spanish](https://huggingface.co/DeepESP/gpt2-spanish) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.1+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
csmartins8/xlm-roberta-base-finetuned-panx-de | 6c5a0b9dacc9ba0810da04ee6371fdba4ab3bb4b | 2022-07-29T01:51:43.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | csmartins8 | null | csmartins8/xlm-roberta-base-finetuned-panx-de | 1 | null | transformers | 33,197 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: train
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8631507160718345
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1374
- F1: 0.8632
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2583 | 1.0 | 525 | 0.1594 | 0.8198 |
| 0.125 | 2.0 | 1050 | 0.1390 | 0.8483 |
| 0.08 | 3.0 | 1575 | 0.1374 | 0.8632 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
enoriega/rule_learning_1mm_many_negatives_spanpred_mse_attention | 1068775631eba336f6bf852c68d9668d03a5a8e2 | 2022-07-22T20:29:54.000Z | [
"pytorch",
"tensorboard",
"bert",
"dataset:enoriega/odinsynth_dataset",
"transformers",
"generated_from_trainer",
"model-index"
] | null | false | enoriega | null | enoriega/rule_learning_1mm_many_negatives_spanpred_mse_attention | 1 | null | transformers | 33,198 | ---
tags:
- generated_from_trainer
datasets:
- enoriega/odinsynth_dataset
model-index:
- name: rule_learning_1mm_many_negatives_spanpred_avf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rule_learning_1mm_many_negatives_spanpred_avf
This model is a fine-tuned version of [enoriega/rule_softmatching](https://huggingface.co/enoriega/rule_softmatching) on the enoriega/odinsynth_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0731
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2000
- total_train_batch_size: 8000
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1215 | 0.16 | 20 | 0.1191 |
| 0.1091 | 0.32 | 40 | 0.1079 |
| 0.0993 | 0.48 | 60 | 0.0993 |
| 0.0938 | 0.64 | 80 | 0.0952 |
| 0.085 | 0.8 | 100 | 0.0858 |
| 0.0837 | 0.96 | 120 | 0.0842 |
| 0.0811 | 1.12 | 140 | 0.0827 |
| 0.0799 | 1.28 | 160 | 0.0809 |
| 0.078 | 1.44 | 180 | 0.0786 |
| 0.0792 | 1.6 | 200 | 0.0781 |
| 0.0797 | 1.76 | 220 | 0.0765 |
| 0.0775 | 1.92 | 240 | 0.0758 |
| 0.0735 | 2.08 | 260 | 0.0748 |
| 0.0704 | 2.24 | 280 | 0.0744 |
| 0.0744 | 2.4 | 300 | 0.0737 |
| 0.0752 | 2.56 | 320 | 0.0733 |
| 0.075 | 2.72 | 340 | 0.0738 |
| 0.0701 | 2.88 | 360 | 0.0732 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
mhaegeman/wav2vec2-large-xls-r-300m-dutch-V2 | 2c94ee4cf3ef301e93a4a22e60aab01d90aad81d | 2022-07-26T11:03:34.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | mhaegeman | null | mhaegeman/wav2vec2-large-xls-r-300m-dutch-V2 | 1 | null | transformers | 33,199 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-dutch-V2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-dutch-V2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.4262
- eval_wer: 0.3052
- eval_runtime: 8417.9087
- eval_samples_per_second: 0.678
- eval_steps_per_second: 0.085
- epoch: 5.33
- step: 2400
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|