modelId (string) | sha (string) | lastModified (string) | tags (sequence) | pipeline_tag (string) | private (bool) | author (string) | config (null) | id (string) | downloads (float64) | likes (float64) | library_name (string) | __index_level_0__ (int64) | readme (string) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
evolvingstuff/gpt2-wikitext2 | bec9b2c214931645046578cf01ce9902828a6ac6 | 2022-05-16T21:25:11.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | evolvingstuff | null | evolvingstuff/gpt2-wikitext2 | 1 | null | transformers | 32,000 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.1128
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.5628 | 1.0 | 2249 | 6.4705 |
| 6.1956 | 2.0 | 4498 | 6.2012 |
| 6.021 | 3.0 | 6747 | 6.1128 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
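For reference, the hyperparameters above map onto `transformers.TrainingArguments` roughly as in the sketch below; `output_dir` and `evaluation_strategy` are assumptions, while the remaining values follow the list (Adam betas and epsilon are the library defaults).
```python
from transformers import TrainingArguments

# Sketch of the configuration listed above; output_dir and evaluation_strategy
# are assumptions, the rest mirrors the card (Adam betas/epsilon are defaults).
training_args = TrainingArguments(
    output_dir="gpt2-wikitext2",   # assumed
    evaluation_strategy="epoch",   # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```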
|
crystina-z/mdpr-tied-mmarco-ar | 53c083ca3d6b433810cdc430d2baf78ee064ae71 | 2022-05-16T22:58:09.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | crystina-z | null | crystina-z/mdpr-tied-mmarco-ar | 1 | null | transformers | 32,001 | Entry not found |
crystina-z/mdpr-tied-mmarco-id | 696d54d545aec5e6e83cfcc3d85afaf55df46389 | 2022-05-16T22:59:15.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | crystina-z | null | crystina-z/mdpr-tied-mmarco-id | 1 | null | transformers | 32,002 | Entry not found |
crystina-z/mdpr-tied-mmarco-ja | d9cfce197d9b09bea55ba40ab07cb9238817317b | 2022-05-16T22:54:52.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | crystina-z | null | crystina-z/mdpr-tied-mmarco-ja | 1 | null | transformers | 32,003 | Entry not found |
PSW/cnndm_0.5percent_maxsimdel_seed42 | d9fb785f664077f1254b9d194ace6519938172e9 | 2022-05-16T23:52:53.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.5percent_maxsimdel_seed42 | 1 | null | transformers | 32,004 | Entry not found |
atgarcia/wav2vec2-base-timit-demo-google-colab | 7f1969c317e45aa93456342a6fd54b7b884ad88b | 2022-05-21T07:26:10.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | atgarcia | null | atgarcia/wav2vec2-base-timit-demo-google-colab | 1 | null | transformers | 32,005 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5255
- Wer: 0.3330
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5942 | 1.0 | 500 | 2.3849 | 1.0011 |
| 0.9765 | 2.01 | 1000 | 0.5907 | 0.5202 |
| 0.4424 | 3.01 | 1500 | 0.4547 | 0.4661 |
| 0.3008 | 4.02 | 2000 | 0.4194 | 0.4228 |
| 0.2316 | 5.02 | 2500 | 0.3933 | 0.4099 |
| 0.1921 | 6.02 | 3000 | 0.4532 | 0.3965 |
| 0.1561 | 7.03 | 3500 | 0.4315 | 0.3777 |
| 0.1378 | 8.03 | 4000 | 0.4463 | 0.3847 |
| 0.1222 | 9.04 | 4500 | 0.4402 | 0.3784 |
| 0.1076 | 10.04 | 5000 | 0.4253 | 0.3735 |
| 0.0924 | 11.04 | 5500 | 0.4844 | 0.3732 |
| 0.0866 | 12.05 | 6000 | 0.4758 | 0.3646 |
| 0.086 | 13.05 | 6500 | 0.6395 | 0.4594 |
| 0.0763 | 14.06 | 7000 | 0.4951 | 0.3647 |
| 0.0684 | 15.06 | 7500 | 0.4870 | 0.3577 |
| 0.0616 | 16.06 | 8000 | 0.5442 | 0.3591 |
| 0.0594 | 17.07 | 8500 | 0.5305 | 0.3606 |
| 0.0613 | 18.07 | 9000 | 0.5434 | 0.3546 |
| 0.0473 | 19.08 | 9500 | 0.4818 | 0.3532 |
| 0.0463 | 20.08 | 10000 | 0.5086 | 0.3514 |
| 0.042 | 21.08 | 10500 | 0.5017 | 0.3484 |
| 0.0365 | 22.09 | 11000 | 0.5129 | 0.3536 |
| 0.0336 | 23.09 | 11500 | 0.5411 | 0.3433 |
| 0.0325 | 24.1 | 12000 | 0.5307 | 0.3424 |
| 0.0282 | 25.1 | 12500 | 0.5261 | 0.3404 |
| 0.0245 | 26.1 | 13000 | 0.5306 | 0.3388 |
| 0.0257 | 27.11 | 13500 | 0.5242 | 0.3369 |
| 0.0234 | 28.11 | 14000 | 0.5216 | 0.3359 |
| 0.0221 | 29.12 | 14500 | 0.5255 | 0.3330 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
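A minimal inference sketch with the automatic-speech-recognition pipeline is shown below; the audio path is a placeholder and 16 kHz mono input is assumed.
```python
from transformers import pipeline

# "speech.wav" is a placeholder; 16 kHz mono audio is assumed.
asr = pipeline(
    "automatic-speech-recognition",
    model="atgarcia/wav2vec2-base-timit-demo-google-colab",
)
print(asr("speech.wav")["text"])
```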
|
LinkTheSinger/DialoGPT-small-Kannav4 | 9bf4ec815e54f52a389f319e3ec7f0251a3bfa8a | 2022-05-17T03:24:55.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | LinkTheSinger | null | LinkTheSinger/DialoGPT-small-Kannav4 | 1 | null | transformers | 32,006 | ---
tags:
- conversational
---
# Kanna Kamui DialoGPT Model |
Datasaur/distilbert-base-uncased-finetuned-conll2003 | 7010f421172cb46ec7deb22093ad8ae3f41b7554 | 2022-07-14T14:18:28.000Z | [
"pytorch",
"distilbert",
"token-classification",
"en",
"dataset:conll2003",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | Datasaur | null | Datasaur/distilbert-base-uncased-finetuned-conll2003 | 1 | null | transformers | 32,007 | ---
language: en
license: apache-2.0
datasets:
- conll2003
--- |
malay-huggingface/wav2vec2-xls-r-1b-mixed | 64f095ea56aa59bf10dcff430a9d46d10c75cf9a | 2022-05-27T12:37:23.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_keras_callback",
"model-index"
] | automatic-speech-recognition | false | malay-huggingface | null | malay-huggingface/wav2vec2-xls-r-1b-mixed | 1 | null | transformers | 32,008 | ---
tags:
- generated_from_keras_callback
model-index:
- name: wav2vec2-xls-r-1b-mixed
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-1b-mixed
Fine-tuned https://huggingface.co/facebook/wav2vec2-xls-r-1b on https://github.com/huseinzol05/malaya-speech/tree/master/data/mixed-stt
This model was fine-tuned on 3 languages:
1. Malay
2. Singlish
3. Mandarin
**This model was trained on a single Tesla V100 with 32 GB of VRAM, provided by https://keyreply.com/**. |
SreyanG-NVIDIA/wav2vec2-base-demo-colab | 163d5c2d34fe8c0a304419277b3cfe42b551ef92 | 2022-05-25T11:57:42.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | SreyanG-NVIDIA | null | SreyanG-NVIDIA/wav2vec2-base-demo-colab | 1 | null | transformers | 32,009 | Entry not found |
negfir/bert_uncased_L-2_H-128_A-2_wiki103 | 2502a8bfdf709df3ddd680d7c7b5b63535cf0fdb | 2022-05-17T07:33:54.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/bert_uncased_L-2_H-128_A-2_wiki103 | 1 | null | transformers | 32,010 | Entry not found |
subhasisj/es-kd-XLM-minilmv2-32 | dea4381de34ccc7fca024f05751f066ed9aac13a | 2022-05-17T10:48:36.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | subhasisj | null | subhasisj/es-kd-XLM-minilmv2-32 | 1 | null | transformers | 32,011 | ---
tags:
- generated_from_trainer
model-index:
- name: es-kd-XLM-minilmv2-32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# es-kd-XLM-minilmv2-32
This model is a fine-tuned version of [subhasisj/es-TAPT-MLM-MiniLM](https://huggingface.co/subhasisj/es-TAPT-MLM-MiniLM) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
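A minimal usage sketch with the question-answering pipeline is shown below; the Spanish question and context are illustrative only, and it is assumed the repository ships the matching tokenizer.
```python
from transformers import pipeline

# The example question/context are placeholders for real inputs.
qa = pipeline("question-answering", model="subhasisj/es-kd-XLM-minilmv2-32")
result = qa(
    question="¿Dónde vivo?",
    context="Me llamo Ana y vivo en Madrid.",
)
print(result["answer"], result["score"])
```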
|
SomeRandomGuy/tony | 2eabcc8936326b54ac46c4dcf87cc21838b5ab57 | 2022-05-17T10:25:53.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | SomeRandomGuy | null | SomeRandomGuy/tony | 1 | null | transformers | 32,012 | ---
tags:
- conversational
---
# Tony DialoGPT Model |
tau/False_large_pmi_para0_sent1_span2_itTrue_sargmax_rrFalse_7_1024_0.3_best | f82d4a538b9825e26de673f9f1f8259e2fa50c8a | 2022-05-17T11:26:46.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tau | null | tau/False_large_pmi_para0_sent1_span2_itTrue_sargmax_rrFalse_7_1024_0.3_best | 1 | null | transformers | 32,013 | Entry not found |
mertyrgn/xlm-roberta-base-finetuned-panx-de | 16ee73c8e4d7ea0ac6dff95dbf56d91092ac6d5b | 2022-05-18T05:47:03.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | mertyrgn | null | mertyrgn/xlm-roberta-base-finetuned-panx-de | 1 | null | transformers | 32,014 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.861372046683746
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1390
- F1: 0.8614
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2617 | 1.0 | 525 | 0.1550 | 0.8199 |
| 0.1271 | 2.0 | 1050 | 0.1389 | 0.8470 |
| 0.0802 | 3.0 | 1575 | 0.1390 | 0.8614 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.2.1
- Tokenizers 0.12.1
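A minimal usage sketch with the token-classification pipeline is shown below; the example sentence is illustrative only.
```python
from transformers import pipeline

# aggregation_strategy groups word pieces into whole entity spans.
ner = pipeline(
    "token-classification",
    model="mertyrgn/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel wohnt in Berlin."))
```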
|
roshnir/bert-base-multi-uncased-en-hi | 3d51aa1b47644ecba190a2bc1383369e1e1fdf1c | 2022-05-17T16:56:12.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | roshnir | null | roshnir/bert-base-multi-uncased-en-hi | 1 | null | transformers | 32,015 | Entry not found |
tau/False_large_pmi_para0_sent1_span2_itTrue_sargmax_rrFalse_8_1024_0.3_best | f1620a37328bd38ae083c7ff246308b41f5b8716 | 2022-05-17T17:38:58.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tau | null | tau/False_large_pmi_para0_sent1_span2_itTrue_sargmax_rrFalse_8_1024_0.3_best | 1 | null | transformers | 32,016 | Entry not found |
huggingtweets/lulaoficial-ptbrasil | 11e3c4949bb9c21ffc0256a2ad27bd7dce15c7bf | 2022-05-17T18:46:33.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/lulaoficial-ptbrasil | 1 | null | transformers | 32,017 | ---
language: en
thumbnail: http://www.huggingtweets.com/lulaoficial-ptbrasil/1652813188143/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1410721079383969795/28HNul1J_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1518543225933512705/T4r0T3SE_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">PT Brasil & Lula</div>
<div style="text-align: center; font-size: 14px;">@lulaoficial-ptbrasil</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from PT Brasil & Lula.
| Data | PT Brasil | Lula |
| --- | --- | --- |
| Tweets downloaded | 3250 | 3247 |
| Retweets | 535 | 705 |
| Short tweets | 116 | 191 |
| Tweets kept | 2599 | 2351 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3n5vn7b0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @lulaoficial-ptbrasil's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1dh0f8u4) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1dh0f8u4/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/lulaoficial-ptbrasil')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Tazar/distilgpt2-finetuned-tazar | 661ce87f97d31679a5a61e7045ce05fe4b106d58 | 2022-05-18T09:53:50.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | Tazar | null | Tazar/distilgpt2-finetuned-tazar | 1 | null | transformers | 32,018 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-tazar
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-tazar
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7272
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 330 | 1.7004 |
| 1.3379 | 2.0 | 660 | 1.7295 |
| 1.3379 | 3.0 | 990 | 1.7272 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.2.1
- Tokenizers 0.11.6
|
CEBaB/gpt2.CEBaB.absa.exclusive.seed_42 | decc333ecfbfa92eb4ae8675b4e6831df204451a | 2022-05-17T20:02:34.000Z | [
"pytorch",
"gpt2",
"transformers"
] | null | false | CEBaB | null | CEBaB/gpt2.CEBaB.absa.exclusive.seed_42 | 1 | null | transformers | 32,019 | Entry not found |
CEBaB/gpt2.CEBaB.absa.exclusive.seed_66 | 44f1fc09be4b1d2b97bf6c1fdf163e76fa6327c5 | 2022-05-17T20:14:31.000Z | [
"pytorch",
"gpt2",
"transformers"
] | null | false | CEBaB | null | CEBaB/gpt2.CEBaB.absa.exclusive.seed_66 | 1 | null | transformers | 32,020 | Entry not found |
CEBaB/gpt2.CEBaB.absa.exclusive.seed_77 | 3fffd3508124ba177144acf1adafa1161f3c516d | 2022-05-17T20:26:09.000Z | [
"pytorch",
"gpt2",
"transformers"
] | null | false | CEBaB | null | CEBaB/gpt2.CEBaB.absa.exclusive.seed_77 | 1 | null | transformers | 32,021 | Entry not found |
marcoperez/DialoGPT-small-rickandmorty | cc49445fadce13c9864e63932bd7fee9624328a3 | 2022-05-17T22:01:49.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | marcoperez | null | marcoperez/DialoGPT-small-rickandmorty | 1 | null | transformers | 32,022 | ---
tags:
- conversational
---
# Rick and Morty DialoGPT Model |
gary109/ai-light-dance_singing_ft_wav2vec2-large-lv60-v2 | 0941ec7c441d6ad7fcba18c54a0d50e4001014b6 | 2022-05-28T05:50:54.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"../AI_Light_Dance.py",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | gary109 | null | gary109/ai-light-dance_singing_ft_wav2vec2-large-lv60-v2 | 1 | 1 | transformers | 32,023 | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- ../AI_Light_Dance.py
- generated_from_trainer
model-index:
- name: ai-light-dance_singing_ft_wav2vec2-large-lv60-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_singing_ft_wav2vec2-large-lv60-v2
This model is a fine-tuned version of [gary109/ai-light-dance_singing_ft_wav2vec2-large-lv60](https://huggingface.co/gary109/ai-light-dance_singing_ft_wav2vec2-large-lv60) on the ../AI_LIGHT_DANCE.PY - ONSET-SINGING dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4285
- Wer: 0.1858
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.2775 | 1.0 | 1106 | 0.4372 | 0.2117 |
| 0.2154 | 2.0 | 2212 | 0.4474 | 0.2044 |
| 0.2023 | 3.0 | 3318 | 0.4372 | 0.1920 |
| 0.186 | 4.0 | 4424 | 0.4285 | 0.1858 |
| 0.1856 | 5.0 | 5530 | 0.4589 | 0.1826 |
| 0.1537 | 6.0 | 6636 | 0.4658 | 0.1774 |
| 0.1337 | 7.0 | 7742 | 0.4769 | 0.1744 |
| 0.108 | 8.0 | 8848 | 0.4604 | 0.1724 |
| 0.1593 | 9.0 | 9954 | 0.4731 | 0.1694 |
| 0.0904 | 10.0 | 11060 | 0.4843 | 0.1683 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.2.2.dev0
- Tokenizers 0.12.1
|
miyagawaorj/xlm-roberta-base-finetuned-panx-de | c9f50f4562d944ec21178d3f4cfb45196aa6518c | 2022-06-07T07:03:42.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | miyagawaorj | null | miyagawaorj/xlm-roberta-base-finetuned-panx-de | 1 | null | transformers | 32,024 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8620945214069894
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1372
- F1: 0.8621
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2575 | 1.0 | 525 | 0.1621 | 0.8292 |
| 0.1287 | 2.0 | 1050 | 0.1378 | 0.8526 |
| 0.0831 | 3.0 | 1575 | 0.1372 | 0.8621 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
CEBaB/bert-base-uncased.CEBaB.absa.inclusive.seed_88 | 15df91833c2559eb18f56b4217ff851f6cf9461f | 2022-05-18T00:44:37.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.absa.inclusive.seed_88 | 1 | null | transformers | 32,025 | Entry not found |
EddieChen372/JESTest | 409d207ebd4165b462a9e4c6b74bac1815526d69 | 2022-06-24T13:11:32.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | EddieChen372 | null | EddieChen372/JESTest | 1 | null | transformers | 32,026 | Entry not found |
Rivenatte/summarize_ruby_codet5_base | 1fa24a50069ec5ca96166ade62cce6f1e004dfd4 | 2022-05-19T03:45:18.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Rivenatte | null | Rivenatte/summarize_ruby_codet5_base | 1 | null | transformers | 32,027 | Entry not found |
MeshalAlamr/wav2vec2-xls-r-300m-ar-8 | c7593ce851577874704bf81d389db09dbd486555 | 2022-05-19T09:22:24.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | MeshalAlamr | null | MeshalAlamr/wav2vec2-xls-r-300m-ar-8 | 1 | null | transformers | 32,028 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-ar-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-ar-8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 76.6942
- Wer: 0.2108
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6295.0487 | 4.71 | 400 | 615.8572 | 1.0 |
| 1464.0058 | 9.41 | 800 | 111.7187 | 0.5361 |
| 425.6333 | 14.12 | 1200 | 80.7770 | 0.3446 |
| 280.069 | 18.82 | 1600 | 74.0422 | 0.2980 |
| 213.0118 | 23.53 | 2000 | 78.4876 | 0.2783 |
| 175.6819 | 28.24 | 2400 | 70.4845 | 0.2491 |
| 148.5846 | 32.94 | 2800 | 70.5758 | 0.2443 |
| 131.1029 | 37.65 | 3200 | 75.3770 | 0.2371 |
| 116.7131 | 42.35 | 3600 | 78.7061 | 0.2268 |
| 105.369 | 47.06 | 4000 | 76.4783 | 0.2210 |
| 97.0829 | 51.76 | 4400 | 76.6051 | 0.2153 |
| 90.4009 | 56.47 | 4800 | 76.6942 | 0.2108 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 1.18.4
- Tokenizers 0.11.6
|
negfir/bert_uncased_L-10_H-256_A-4_wiki103 | 8dfe7683f48669dc9d8cbd74d47e719c2f365c68 | 2022-05-18T12:11:55.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/bert_uncased_L-10_H-256_A-4_wiki103 | 1 | null | transformers | 32,029 | Entry not found |
airi/bert-finetuned-protagonist-english | adbbe7cf65611d9ae21356b79c72de2eb2a6cd45 | 2022-05-18T15:28:39.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | airi | null | airi/bert-finetuned-protagonist-english | 1 | null | transformers | 32,030 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-protagonist-english
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-protagonist-english
This model is a fine-tuned version of [Jean-Baptiste/roberta-large-ner-english](https://huggingface.co/Jean-Baptiste/roberta-large-ner-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0630
- Precision: 0.8646
- Recall: 0.8839
- F1: 0.8742
- Accuracy: 0.9876
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 25 | 0.0659 | 0.8860 | 0.9018 | 0.8938 | 0.9862 |
| No log | 2.0 | 50 | 0.0583 | 0.8553 | 0.8705 | 0.8628 | 0.9860 |
| No log | 3.0 | 75 | 0.0593 | 0.8728 | 0.8884 | 0.8805 | 0.9876 |
| No log | 4.0 | 100 | 0.0622 | 0.8559 | 0.875 | 0.8653 | 0.9871 |
| No log | 5.0 | 125 | 0.0630 | 0.8646 | 0.8839 | 0.8742 | 0.9876 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu102
- Datasets 2.2.1
- Tokenizers 0.11.0
|
cromz22/wav2vec2-common_voice-tr-demo-dist | b48c255a4ccbc64b3b3ba5fa5633c623bad8d4fa | 2022-05-18T15:25:11.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:common_voice",
"transformers",
"common_voice",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | cromz22 | null | cromz22/wav2vec2-common_voice-tr-demo-dist | 1 | null | transformers | 32,031 | ---
language:
- tr
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-common_voice-tr-demo-dist
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-common_voice-tr-demo-dist
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3848
- Wer: 0.3242
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 16
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.5279 | 0.46 | 100 | 3.6260 | 1.0 |
| 3.1065 | 0.92 | 200 | 3.0854 | 0.9999 |
| 1.4111 | 1.38 | 300 | 1.3343 | 0.8839 |
| 0.8468 | 1.83 | 400 | 0.6920 | 0.6826 |
| 0.6242 | 2.29 | 500 | 0.6001 | 0.5996 |
| 0.4181 | 2.75 | 600 | 0.5655 | 0.5680 |
| 0.4311 | 3.21 | 700 | 0.4478 | 0.5003 |
| 0.3601 | 3.67 | 800 | 0.4548 | 0.5011 |
| 0.2756 | 4.13 | 900 | 0.4444 | 0.4682 |
| 0.2373 | 4.59 | 1000 | 0.4111 | 0.4432 |
| 0.1831 | 5.05 | 1100 | 0.4178 | 0.4447 |
| 0.2423 | 5.5 | 1200 | 0.3881 | 0.4277 |
| 0.2128 | 5.96 | 1300 | 0.3865 | 0.4018 |
| 0.1256 | 6.42 | 1400 | 0.3818 | 0.4137 |
| 0.1038 | 6.88 | 1500 | 0.3739 | 0.3942 |
| 0.1662 | 7.34 | 1600 | 0.3938 | 0.3929 |
| 0.198 | 7.8 | 1700 | 0.3831 | 0.3837 |
| 0.0728 | 8.26 | 1800 | 0.3910 | 0.3867 |
| 0.123 | 8.72 | 1900 | 0.3722 | 0.3735 |
| 0.0776 | 9.17 | 2000 | 0.3938 | 0.3725 |
| 0.1597 | 9.63 | 2100 | 0.3786 | 0.3697 |
| 0.1124 | 10.09 | 2200 | 0.3947 | 0.3590 |
| 0.0965 | 10.55 | 2300 | 0.3952 | 0.3562 |
| 0.0612 | 11.01 | 2400 | 0.3810 | 0.3476 |
| 0.0764 | 11.47 | 2500 | 0.3734 | 0.3507 |
| 0.0973 | 11.93 | 2600 | 0.3935 | 0.3472 |
| 0.0649 | 12.39 | 2700 | 0.3672 | 0.3413 |
| 0.0542 | 12.84 | 2800 | 0.3732 | 0.3369 |
| 0.087 | 13.3 | 2900 | 0.3833 | 0.3458 |
| 0.0196 | 13.76 | 3000 | 0.3761 | 0.3303 |
| 0.0548 | 14.22 | 3100 | 0.3855 | 0.3274 |
| 0.0577 | 14.68 | 3200 | 0.3893 | 0.3238 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.2.1
- Tokenizers 0.12.1
|
airi/bert-finetuned-protagonist-english-pc | 7c080d74e09815bf74e556b18a73ce4dbf7c6cb2 | 2022-05-18T18:01:04.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | airi | null | airi/bert-finetuned-protagonist-english-pc | 1 | null | transformers | 32,032 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-protagonist-english-pc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-protagonist-english-pc
This model is a fine-tuned version of [Jean-Baptiste/roberta-large-ner-english](https://huggingface.co/Jean-Baptiste/roberta-large-ner-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0351
- Precision: 0.9513
- Recall: 0.9598
- F1: 0.9556
- Accuracy: 0.9919
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 100 | 0.0407 | 0.9254 | 0.9420 | 0.9336 | 0.9908 |
| No log | 2.0 | 200 | 0.0351 | 0.9513 | 0.9598 | 0.9556 | 0.9919 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.10.1+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
PSW/cnndm_0.1percent_baseline_seed27 | 516ac855ba154d695775fd6f1a6fe883e50bd8a7 | 2022-05-18T15:56:00.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.1percent_baseline_seed27 | 1 | null | transformers | 32,033 | Entry not found |
zakria/wav2vec2-base-timit-demo-google-colab | 515b2d64fd17cf6ac42bc7ca296666daff95ca32 | 2022-05-18T20:44:02.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | zakria | null | zakria/wav2vec2-base-timit-demo-google-colab | 1 | null | transformers | 32,034 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5093
- Wer: 0.3413
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5009 | 1.0 | 500 | 1.6207 | 0.9471 |
| 0.8414 | 2.01 | 1000 | 0.5128 | 0.5033 |
| 0.4366 | 3.01 | 1500 | 0.4449 | 0.4450 |
| 0.3015 | 4.02 | 2000 | 0.3835 | 0.4108 |
| 0.2281 | 5.02 | 2500 | 0.3989 | 0.4109 |
| 0.1914 | 6.02 | 3000 | 0.4286 | 0.3982 |
| 0.1555 | 7.03 | 3500 | 0.4547 | 0.3889 |
| 0.1349 | 8.03 | 4000 | 0.3876 | 0.3779 |
| 0.1252 | 9.04 | 4500 | 0.4460 | 0.3810 |
| 0.1066 | 10.04 | 5000 | 0.3905 | 0.3772 |
| 0.0979 | 11.04 | 5500 | 0.4469 | 0.3646 |
| 0.0883 | 12.05 | 6000 | 0.4547 | 0.3612 |
| 0.0801 | 13.05 | 6500 | 0.4741 | 0.3645 |
| 0.0709 | 14.06 | 7000 | 0.4682 | 0.3592 |
| 0.0665 | 15.06 | 7500 | 0.4689 | 0.3647 |
| 0.0579 | 16.06 | 8000 | 0.5330 | 0.3622 |
| 0.0556 | 17.07 | 8500 | 0.4885 | 0.3575 |
| 0.0547 | 18.07 | 9000 | 0.4936 | 0.3543 |
| 0.0462 | 19.08 | 9500 | 0.4928 | 0.3524 |
| 0.0475 | 20.08 | 10000 | 0.5286 | 0.3525 |
| 0.0426 | 21.08 | 10500 | 0.5100 | 0.3550 |
| 0.0364 | 22.09 | 11000 | 0.5372 | 0.3493 |
| 0.0306 | 23.09 | 11500 | 0.5049 | 0.3443 |
| 0.0314 | 24.1 | 12000 | 0.5223 | 0.3519 |
| 0.0261 | 25.1 | 12500 | 0.5380 | 0.3486 |
| 0.0257 | 26.1 | 13000 | 0.5326 | 0.3484 |
| 0.0252 | 27.11 | 13500 | 0.5299 | 0.3446 |
| 0.0226 | 28.11 | 14000 | 0.5174 | 0.3424 |
| 0.0232 | 29.12 | 14500 | 0.5093 | 0.3413 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
Dizzykong/gpt2-medium-chunked-eos | 67d90d69960804c456d86c7de921c49d664d85d0 | 2022-05-18T20:06:34.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
] | text-generation | false | Dizzykong | null | Dizzykong/gpt2-medium-chunked-eos | 1 | null | transformers | 32,035 | ---
tags:
- generated_from_trainer
model-index:
- name: gpt2-medium-chunked-eos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-medium-chunked-eos
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
cromz22/wav2vec2-2-bart-base | 3665b3976e6fdcc370ae428800a64989bff249dd | 2022-05-19T03:52:16.000Z | [
"pytorch",
"speech-encoder-decoder",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | cromz22 | null | cromz22/wav2vec2-2-bart-base | 1 | null | transformers | 32,036 | Entry not found |
imamnurby/rob2rand_chen | cdad439904b250768a09f089873de31b7082e78d | 2022-05-19T05:45:18.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | imamnurby | null | imamnurby/rob2rand_chen | 1 | null | transformers | 32,037 | ---
tags:
- generated_from_trainer
model-index:
- name: rob2rand_chen
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rob2rand_chen
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 50
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.18.0
- Pytorch 1.7.1
- Datasets 2.1.0
- Tokenizers 0.12.1
|
GiordanoB/mT5_multilingual_XLSum-finetuned-summarization | d97a48211a8c1a02f7a857565b8f4ab56c2a471b | 2022-05-19T05:50:45.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | GiordanoB | null | GiordanoB/mT5_multilingual_XLSum-finetuned-summarization | 1 | null | transformers | 32,038 | ---
tags:
- generated_from_trainer
model-index:
- name: mT5_multilingual_XLSum-finetuned-summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mT5_multilingual_XLSum-finetuned-summarization
This model is a fine-tuned version of [csebuetnlp/mT5_multilingual_XLSum](https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
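A minimal usage sketch with the summarization pipeline is shown below; the input text and generation lengths are placeholders.
```python
from transformers import pipeline

# Replace the placeholder string with the article to summarize.
summarizer = pipeline(
    "summarization",
    model="GiordanoB/mT5_multilingual_XLSum-finetuned-summarization",
)
print(summarizer("Texto longo a ser resumido ...", max_length=84, min_length=10))
```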
|
Gunulhona/tbqgmodel_wiki | c2d53073e2c66daa166fdc6617db3afcc9a266dc | 2022-05-19T06:44:16.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Gunulhona | null | Gunulhona/tbqgmodel_wiki | 1 | null | transformers | 32,039 | Entry not found |
dyyyyyyyy/XTREME_squad_XLM-RoBERTa-base | b3cd3e2c87809714fe4f05c8ceb133e3b01728db | 2022-05-19T07:30:52.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | dyyyyyyyy | null | dyyyyyyyy/XTREME_squad_XLM-RoBERTa-base | 1 | null | transformers | 32,040 | Entry not found |
ZQ/Model | 9a0e7388a2ba3e2b877eb5c1722020dabda17f21 | 2022-05-19T08:52:33.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | ZQ | null | ZQ/Model | 1 | null | transformers | 32,041 | Entry not found |
ViktorDo/distilbert-base-uncased-finetuned-powo_mgh_pt | 352bd0e939ee026632886f2b62081db073d1cdec | 2022-05-30T09:41:59.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | ViktorDo | null | ViktorDo/distilbert-base-uncased-finetuned-powo_mgh_pt | 1 | null | transformers | 32,042 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-powo_mgh_pt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-powo_mgh_pt
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0128
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8895 | 1.0 | 119 | 1.2509 |
| 1.2538 | 2.0 | 238 | 1.0763 |
| 1.126 | 3.0 | 357 | 0.9910 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
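A minimal usage sketch with the fill-mask pipeline is shown below; the example sentence is illustrative only, and `[MASK]` is the distilbert mask token.
```python
from transformers import pipeline

# Each prediction carries the proposed token and its score.
unmasker = pipeline(
    "fill-mask",
    model="ViktorDo/distilbert-base-uncased-finetuned-powo_mgh_pt",
)
for prediction in unmasker("Plants of this family are mostly [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```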
|
LaplacesDemon/t5-small-finetuned-xsum | f826c26d2ab600cc8a5a53b67953c6d0d260e8a3 | 2022-05-31T06:26:05.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | LaplacesDemon | null | LaplacesDemon/t5-small-finetuned-xsum | 1 | null | transformers | 32,043 | Entry not found |
PSW/cnndm_0.5percent_baseline_seed42 | 047e86c6c9c3bd10289faa2801ebcf43161cc92e | 2022-05-19T10:37:11.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.5percent_baseline_seed42 | 1 | null | transformers | 32,044 | Entry not found |
dyyyyyyyy/XTREME_squad_BERT-base-multilingual-cased | 555f2386d0900429179df1c76594d440dfc47401 | 2022-05-19T15:35:40.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | dyyyyyyyy | null | dyyyyyyyy/XTREME_squad_BERT-base-multilingual-cased | 1 | null | transformers | 32,045 | Entry not found |
negfir/bert_uncased_L-8_H-128_A-2_wiki103 | cbd25f501045414ee667c7914578f52e264176c1 | 2022-05-19T14:33:45.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/bert_uncased_L-8_H-128_A-2_wiki103 | 1 | null | transformers | 32,046 | Entry not found |
ruselkomp/deep-pavlov-framebank-5epochs | 8e18fb74d24031bdc9e9458ae371e35724689e31 | 2022-05-19T20:26:00.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | ruselkomp | null | ruselkomp/deep-pavlov-framebank-5epochs | 1 | null | transformers | 32,047 | Entry not found |
dyyyyyyyy/XTREME_panx_XLM-RoBERTa-large | b346b36eaac6ad0e9390f957cc81c6c3a00cd5ae | 2022-05-19T15:45:57.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | dyyyyyyyy | null | dyyyyyyyy/XTREME_panx_XLM-RoBERTa-large | 1 | null | transformers | 32,048 | Entry not found |
dyyyyyyyy/XTREME_panx_XLM-RoBERTa-base | 6bcdb7df28afd4f5b314c6a369bbeabf1b0106cb | 2022-05-19T15:45:17.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | dyyyyyyyy | null | dyyyyyyyy/XTREME_panx_XLM-RoBERTa-base | 1 | null | transformers | 32,049 | Entry not found |
dyyyyyyyy/XTREME_panx_BERT-base-multilingual-cased | 6eaf8f7cef777ac973de1f378cb53a2f0899a307 | 2022-05-19T15:43:56.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | dyyyyyyyy | null | dyyyyyyyy/XTREME_panx_BERT-base-multilingual-cased | 1 | null | transformers | 32,050 | Entry not found |
prodm93/t5_sum1_modelchkpnt1 | 53538fe9d4e930d0412d4fb33be4ab9fbb6fba94 | 2022-05-20T03:39:24.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | prodm93 | null | prodm93/t5_sum1_modelchkpnt1 | 1 | null | transformers | 32,051 | Entry not found |
dyyyyyyyy/XTREME_udpos_XLM-RoBERTa-base | cad8310a5add0c3223169081d0fae0723ec57ebc | 2022-05-20T04:49:29.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | dyyyyyyyy | null | dyyyyyyyy/XTREME_udpos_XLM-RoBERTa-base | 1 | null | transformers | 32,052 | Entry not found |
dyyyyyyyy/XTREME_udpos_XLM-RoBERTa-large | 5af1b480b2233e6a1506ab118f8a300ca1356e98 | 2022-05-20T04:50:04.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | dyyyyyyyy | null | dyyyyyyyy/XTREME_udpos_XLM-RoBERTa-large | 1 | null | transformers | 32,053 | Entry not found |
dyyyyyyyy/XTREME_udpos_BERT-base-multilingual-cased | 843327dd7d3a14c38ed9066ec3b0cef2428fbc90 | 2022-05-20T04:48:54.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | dyyyyyyyy | null | dyyyyyyyy/XTREME_udpos_BERT-base-multilingual-cased | 1 | null | transformers | 32,054 | Entry not found |
leonweber/biomuppet_base | 551ebb62bcf2fcd51f04e76b9b13a5ca550417ec | 2022-05-20T06:17:55.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | leonweber | null | leonweber/biomuppet_base | 1 | null | transformers | 32,055 | Entry not found |
PSW/cnndm_5percent_minsimdel_seed1 | 3ea9b29d2e7e61a3a89841bdf748d1513543ce1a | 2022-05-20T06:43:57.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_5percent_minsimdel_seed1 | 1 | null | transformers | 32,056 | Entry not found |
imamnurby/rob2rand_chen_w_prefix | d9f1753622228ca19fe2c73ea220386c51e628b3 | 2022-05-20T08:11:12.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | imamnurby | null | imamnurby/rob2rand_chen_w_prefix | 1 | null | transformers | 32,057 | ---
tags:
- generated_from_trainer
model-index:
- name: rob2rand_chen_w_prefix
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rob2rand_chen_w_prefix
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0686
- eval_bleu: 84.3905
- eval_em: 50.0650
- eval_bleu_em: 67.2278
- eval_runtime: 20.8187
- eval_samples_per_second: 36.938
- eval_steps_per_second: 0.624
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 50
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.18.0
- Pytorch 1.7.1
- Datasets 2.1.0
- Tokenizers 0.12.1
|
PSW/cnndm_5percent_minsimdel_seed42 | 9627e83bb3a16393c2a54d9f8d12f8be7aeed861 | 2022-05-20T10:00:31.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_5percent_minsimdel_seed42 | 1 | null | transformers | 32,058 | Entry not found |
ejembere/opus-mt-en-ro-finetuned-en-to-ro | 106857f28275554bf6f35eefd8bf4f4a5199b256 | 2022-05-20T10:32:09.000Z | [
"pytorch",
"marian",
"text2text-generation",
"dataset:wmt16",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | ejembere | null | ejembere/opus-mt-en-ro-finetuned-en-to-ro | 1 | null | transformers | 32,059 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
model-index:
- name: opus-mt-en-ro-finetuned-en-to-ro
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ro-finetuned-en-to-ro
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the wmt16 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
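A minimal usage sketch with the translation pipeline is shown below; the example sentence is illustrative only.
```python
from transformers import pipeline

translator = pipeline(
    "translation",
    model="ejembere/opus-mt-en-ro-finetuned-en-to-ro",
)
print(translator("The weather is nice today.")[0]["translation_text"])
```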
|
PSW/cnndm_5percent_maxsimdel_seed42 | 902ce3868764991b6e4197ad994cb5e6869291ce | 2022-05-20T12:18:34.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_5percent_maxsimdel_seed42 | 1 | null | transformers | 32,060 | Entry not found |
Santiagot1105/wav2vec2-large-xlsr-es-col-pro-noise | cc7c185299250e7429c02cb4352392ccca80f4c1 | 2022-05-21T15:11:43.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Santiagot1105 | null | Santiagot1105/wav2vec2-large-xlsr-es-col-pro-noise | 1 | null | transformers | 32,061 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-es-col-pro-noise
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-es-col-pro-noise
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-spanish](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-spanish) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0683
- Wer: 0.0601
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.118 | 3.88 | 400 | 0.0591 | 0.0633 |
| 0.0838 | 7.77 | 800 | 0.0935 | 0.0936 |
| 0.0583 | 11.65 | 1200 | 0.0765 | 0.0716 |
| 0.0392 | 15.53 | 1600 | 0.0843 | 0.0738 |
| 0.0346 | 19.42 | 2000 | 0.0763 | 0.0603 |
| 0.0262 | 23.3 | 2400 | 0.0718 | 0.0610 |
| 0.0208 | 27.18 | 2800 | 0.0683 | 0.0601 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.1+cu102
- Datasets 1.13.3
- Tokenizers 0.10.3
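A minimal inference sketch using the low-level wav2vec2 API is shown below; the input is a placeholder array, 16 kHz mono audio is assumed, and the repository is assumed to ship the usual processor files.
```python
import numpy as np
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "Santiagot1105/wav2vec2-large-xlsr-es-col-pro-noise"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Placeholder input: one second of silence; replace with real 16 kHz mono audio.
speech = np.zeros(16_000, dtype=np.float32)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```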
|
PSW/cnndm_5percent_randomsimdel_seed42 | b779ebcd654755fa5410d17c54cf88b87a868ea5 | 2022-05-20T14:37:15.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_5percent_randomsimdel_seed42 | 1 | null | transformers | 32,062 | Entry not found |
HueyNemud/das22-44-camembert_finetuned_pero | bdd207b1d956712dfacd0843240c3d153f4238e6 | 2022-05-20T16:17:33.000Z | [
"pytorch",
"camembert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | HueyNemud | null | HueyNemud/das22-44-camembert_finetuned_pero | 1 | null | transformers | 32,063 | Entry not found |
ruselkomp/deep-pavlov-framebank-5epochs-3 | cd016565966650070c5debac1c69bb7809242e88 | 2022-05-20T23:45:45.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | ruselkomp | null | ruselkomp/deep-pavlov-framebank-5epochs-3 | 1 | null | transformers | 32,064 | ---
tags:
- generated_from_trainer
model-index:
- name: deep-pavlov-framebank-5epochs-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deep-pavlov-framebank-5epochs-3
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4532
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0722 | 1.0 | 2827 | 1.0156 |
| 0.797 | 2.0 | 5654 | 1.0431 |
| 0.587 | 3.0 | 8481 | 1.1751 |
| 0.4144 | 4.0 | 11308 | 1.2978 |
| 0.3173 | 5.0 | 14135 | 1.4532 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.2.dev0
- Tokenizers 0.12.1
|
HueyNemud/das22-42-camembert_finetuned_ref | b570bdb8e0d9d86125508af3a9f1ffc6d43ce3b7 | 2022-05-20T16:25:01.000Z | [
"pytorch",
"camembert",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | HueyNemud | null | HueyNemud/das22-42-camembert_finetuned_ref | 1 | null | transformers | 32,065 | ---
tags:
- generated_from_trainer
model-index:
- name: CamemBERT pretrained on french trade directories from the XIXth century
results: []
---
# CamemBERT trained for NER on French trade directories from the XIXth century [GOLD training set]
This model is part of the material of the paper
> Abadie, N., Carlinet, E., Chazalon, J., Duménieu, B. (2022). A
> Benchmark of Named Entity Recognition Approaches in Historical
> Documents Application to 19th Century French Directories. In: Uchida,
> S., Barney, E., Eglin, V. (eds) Document Analysis Systems. DAS 2022.
> Lecture Notes in Computer Science, vol 13237. Springer, Cham.
> https://doi.org/10.1007/978-3-031-06555-2_30
The source code to train this model is available on the [GitHub repository](https://github.com/soduco/paper-ner-bench-das22) of the paper as a Jupyter notebook in `src/ner/40_experiment_2.ipynb`.
## Model description
This model adapts the model [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) for NER on 6004 manually annotated directory entries referred to as the "reference dataset" in the paper.
Trade directory entries are short and strongly structured texts giving the name, activity and location of a person or business, e.g.:
```
Peynaud, R. de la Vieille Bouclerie, 18. Richard, Joullain et comp., (commission- —Phéâtre Français. naire, (entrepôt), au port de la Rapée-
```
## Intended uses & limitations
This model is intended for reproducibility of the NER evaluation published in the DAS2022 paper.
Several derived models trained for NER on trade directories are available on HuggingFace, each trained on a different dataset:
- [das22-10-camembert_pretrained_finetuned_ref](): trained for NER on ~6000 directory entries manually corrected.
- [das22-10-camembert_pretrained_finetuned_pero](): trained for NER on ~6000 directory entries extracted with PERO-OCR.
- [das22-10-camembert_pretrained_finetuned_tess](): trained for NER on ~6000 directory entries extracted with Tesseract.
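As a rough usage sketch (assumed usage, not part of the original paper material), the model can be loaded with the standard `transformers` token-classification pipeline and applied to a directory entry such as the example above:
```python
from transformers import pipeline

# Aggregation merges sub-word predictions into entity spans.
ner = pipeline(
    "token-classification",
    model="HueyNemud/das22-42-camembert_finetuned_ref",
    aggregation_strategy="simple",
)

entry = "Peynaud, R. de la Vieille Bouclerie, 18."
for entity in ner(entry):
    print(entity["entity_group"], "->", entity["word"], round(entity["score"], 3))
```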
### Training hyperparameters
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
PSW/cnndm_5percent_minsimins_seed42 | d53d36d52334fd10508ead8a7d7ad74891c42abc | 2022-05-20T16:58:28.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_5percent_minsimins_seed42 | 1 | null | transformers | 32,066 | Entry not found |
anas-awadalla/albert-xl-v2-finetuned-squad | 5a07e60af8c3986104c94c6f37d33a7d381a7dc5 | 2022-05-20T23:29:59.000Z | [
"pytorch",
"albert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/albert-xl-v2-finetuned-squad | 1 | null | transformers | 32,067 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: albert-xl-v2-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-xl-v2-finetuned-squad
This model is a fine-tuned version of [albert-xlarge-v2](https://huggingface.co/albert-xlarge-v2) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
PSW/cnndm_5percent_maxsimins_seed42 | 2cc59cdcd564b8878b3307f7b6e5a76396f74201 | 2022-05-20T19:18:38.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_5percent_maxsimins_seed42 | 1 | null | transformers | 32,068 | Entry not found |
rongina/DialoGPT-small-cartman | 15c3be3d767c31118a9d422200e4ce2c519fb0eb | 2022-05-21T03:43:07.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | rongina | null | rongina/DialoGPT-small-cartman | 1 | null | transformers | 32,069 | ---
tags:
- conversational
---
# South Park Dialog |
remotejob/bert2bertv4_v3 | ae29c052717f2483efc557fd9c225880d8ca2ce5 | 2022-06-07T19:39:07.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | remotejob | null | remotejob/bert2bertv4_v3 | 1 | null | transformers | 32,070 | hello
|
PSW/cnndm_5percent_randomsimins_seed42 | 46ba6bf5e1fd06550b4d0349a2edacbecbfa24e0 | 2022-05-20T21:39:28.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_5percent_randomsimins_seed42 | 1 | null | transformers | 32,071 | Entry not found |
Dizzykong/gpt2-medium-final | a5ef674631955af8a18316eaff69aff9833fba96 | 2022-05-21T02:40:27.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
] | text-generation | false | Dizzykong | null | Dizzykong/gpt2-medium-final | 1 | null | transformers | 32,072 | ---
tags:
- generated_from_trainer
model-index:
- name: gpt2-medium-final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-medium-final
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 6
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
PSW/cnndm_5percent_minmaxswap_seed42 | 0072af27a88c1162b4d7651d558b7e2446815a36 | 2022-05-21T00:18:05.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_5percent_minmaxswap_seed42 | 1 | null | transformers | 32,073 | Entry not found |
PSW/cnndm_5percent_min2swap_seed42 | 9425fa7c90b0e34e89c6d4f798c0502e08a76b53 | 2022-05-21T03:00:28.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_5percent_min2swap_seed42 | 1 | null | transformers | 32,074 | Entry not found |
leonweber/electra_small | 1875006beb56130e5b6dc6dc8b17bce7d5a0f979 | 2022-05-21T03:47:45.000Z | [
"pytorch",
"electra",
"feature-extraction",
"transformers"
] | feature-extraction | false | leonweber | null | leonweber/electra_small | 1 | null | transformers | 32,075 | Entry not found |
PSW/cnndm_5percent_max2swap_seed42 | dd3045cc0c8d138484e7aefc93b14fe12fd792a7 | 2022-05-21T05:37:10.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_5percent_max2swap_seed42 | 1 | null | transformers | 32,076 | Entry not found |
PSW/cnndm_5percent_randomswap_seed42 | b537c48c3327419952f97785a874e04e1fbc63aa | 2022-05-21T08:01:52.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_5percent_randomswap_seed42 | 1 | null | transformers | 32,077 | Entry not found |
marksverdhei/pegasus-large-reddit-syac | 34e60498bb8165e75185d5170b3bb38d576b2743 | 2022-07-10T13:18:02.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | marksverdhei | null | marksverdhei/pegasus-large-reddit-syac | 1 | null | transformers | 32,078 | Entry not found |
PSW/cnndm_5percent_baseline_seed42 | e6a1fa4ccbe75e69072391654193e7b2604e0520 | 2022-05-21T09:45:11.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_5percent_baseline_seed42 | 1 | null | transformers | 32,079 | Entry not found |
Splend1dchan/wav2vec2-large-lv60_t5lephone-small_lna_bs64 | eba87e54ba360191e9c262e677e56831e69ceca0 | 2022-05-23T01:57:45.000Z | [
"pytorch",
"speechmix",
"transformers"
] | null | false | Splend1dchan | null | Splend1dchan/wav2vec2-large-lv60_t5lephone-small_lna_bs64 | 1 | null | transformers | 32,080 | Entry not found |
moghis/xlm-roberta-base-finetuned-panx-de-data | a41b4090b814e20d986f15cecb24594ab4627d46 | 2022-05-21T11:08:29.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | moghis | null | moghis/xlm-roberta-base-finetuned-panx-de-data | 1 | null | transformers | 32,081 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
model-index:
- name: xlm-roberta-base-finetuned-panx-de-data
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-data
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1372
- F1 Score: 0.8621
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2575 | 1.0 | 525 | 0.1621 | 0.8292 |
| 0.1287 | 2.0 | 1050 | 0.1378 | 0.8526 |
| 0.0831 | 3.0 | 1575 | 0.1372 | 0.8621 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
north/t5_xl_NCC | c99b489192a64975b35eef4d573dd4c4fa223655 | 2022-06-01T19:42:05.000Z | [
"pytorch",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"no",
"nn",
"sv",
"dk",
"is",
"en",
"dataset:nbailab/NCC",
"dataset:mc4",
"dataset:wikipedia",
"arxiv:2104.09617",
"arxiv:1910.10683",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | north | null | north/t5_xl_NCC | 1 | null | transformers | 32,082 | ---
language:
- no
- nn
- sv
- dk
- is
- en
datasets:
- nbailab/NCC
- mc4
- wikipedia
widget:
- text: <extra_id_0> hver uke samles Regjeringens medlemmer til Statsråd på <extra_id_1>. Dette organet er øverste <extra_id_2> i Norge. For at møtet skal være <extra_id_3>, må over halvparten av regjeringens <extra_id_4> være til stede.
- text: På <extra_id_0> kan man <extra_id_1> en bok, og man kan også <extra_id_2> seg ned og lese den.
license: apache-2.0
---
# North-T5
The North-T5-models are a set of Norwegian sequence-to-sequence-models. It builds upon the flexible [T5](https://github.com/google-research/text-to-text-transfer-transformer) and [T5X](https://github.com/google-research/t5x) and can be used for a variety of NLP tasks ranging from classification to translation.
| |**Small** <br />_60M_|**Base** <br />_220M_|**Large** <br />_770M_|**XL** <br />_3B_|**XXL** <br />_11B_|
|:-----------|:------------:|:------------:|:------------:|:------------:|:------------:|
|North-T5‑NCC|[🤗](https://huggingface.co/north/t5_small_NCC)|[🤗](https://huggingface.co/north/t5_base_NCC)|[🤗](https://huggingface.co/north/t5_large_NCC)|✔|[🤗](https://huggingface.co/north/t5_xxl_NCC)||
|North-T5‑NCC‑lm|[🤗](https://huggingface.co/north/t5_small_NCC_lm)|[🤗](https://huggingface.co/north/t5_base_NCC_lm)|[🤗](https://huggingface.co/north/t5_large_NCC_lm)|[🤗](https://huggingface.co/north/t5_xl_NCC_lm)|[🤗](https://huggingface.co/north/t5_xxl_NCC_lm)||
## T5X Checkpoint
The original T5X checkpoint is also available for this model in the [Google Cloud Bucket](gs://north-t5x/pretrained_models/xl/norwegian_NCC_plus_English_t5x_xl/).
## Performance
A thorough evaluation of the North-T5 models is planned, and I strongly recommend that external researchers make their own evaluations. The main advantage of the T5-models is their flexibility. Traditionally, encoder-only models (like BERT) excel in classification tasks, while seq-2-seq models are easier to train for tasks like translation and Q&A. Despite this, here are the results from using North-T5 on the political classification task explained [here](https://arxiv.org/abs/2104.09617).
|**Model:** | **F1** |
|:-----------|:------------|
|mT5-base|73.2 |
|mBERT-base|78.4 |
|NorBERT-base|78.2 |
|North-T5-small|80.5 |
|nb-bert-base|81.8 |
|North-T5-base|85.3 |
|North-T5-large|86.7 |
|North-T5-xl|88.7 |
|North-T5-xxl|91.8|
These are preliminary results. The [results](https://arxiv.org/abs/2104.09617) from the BERT-models are based on the test-results from the best model after 10 runs with early stopping and a decaying learning rate. The T5-results are the average of five runs on the evaluation set. The small-model was trained for 10.000 steps, while the rest for 5.000 steps. A fixed learning rate was used (no decay), and no early stopping. Neither was the recommended rank classification used. We use a max sequence length of 512. This method simplifies the test setup and gives results that are easy to interpret. However, the results from the T5 model might actually be a bit sub-optimal.
## Sub-versions of North-T5
The following sub-versions are available. More versions will be available shortly.
|**Model** | **Description** |
|:-----------|:-------|
|**North‑T5‑NCC** |This is the main version. It is trained for an additional 500.000 steps from the mT5 checkpoint. The training corpus is based on [the Norwegian Colossal Corpus (NCC)](https://huggingface.co/datasets/NbAiLab/NCC). In addition, data from MC4 and English Wikipedia are added.|
|**North‑T5‑NCC‑lm**|The model is pretrained for an additional 100k steps on the LM objective discussed in the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf). In a way this turns a masked language model into an autoregressive model. It also prepares the model for some tasks. When doing, for instance, translation and NLI, it is well documented that there is a clear benefit in doing a step of unsupervised LM-training before starting the finetuning.|
## Fine-tuned versions
As explained below, the model really needs to be fine-tuned for specific tasks. This procedure is relatively simple, and the models are not very sensitive to the hyper-parameters used. Usually a decent result can be obtained by using a fixed learning rate of 1e-3. Smaller versions of the model typically need to be trained for a longer time. It is easy to train the base-models in a Google Colab.
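As an illustrative sketch of that procedure (my own example, not from the original card), a base-sized variant could be finetuned with the `transformers` Seq2SeqTrainer using the fixed learning rate mentioned above; the two-example dataset and the task prefix are pure placeholders:
```python
from datasets import Dataset
from transformers import (AutoTokenizer, DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments, T5ForConditionalGeneration)

# Placeholder data: pairs of input and target text for the downstream task.
train_ds = Dataset.from_dict({
    "source": ["oversett til nynorsk: Hvordan har du det?",
               "oversett til nynorsk: Jeg leser en bok."],
    "target": ["Korleis har du det?",
               "Eg les ei bok."],
})

tokenizer = AutoTokenizer.from_pretrained("north/t5_base_NCC")
model = T5ForConditionalGeneration.from_pretrained("north/t5_base_NCC")

def preprocess(batch):
    # Tokenize inputs and targets; targets become the labels for the seq2seq loss.
    model_inputs = tokenizer(batch["source"], truncation=True, max_length=512)
    model_inputs["labels"] = tokenizer(batch["target"], truncation=True,
                                       max_length=512)["input_ids"]
    return model_inputs

tokenized = train_ds.map(preprocess, batched=True, remove_columns=train_ds.column_names)

args = Seq2SeqTrainingArguments(
    output_dir="north-t5-finetuned",
    learning_rate=1e-3,             # fixed learning rate, as suggested above
    lr_scheduler_type="constant",   # no decay, no early stopping
    per_device_train_batch_size=8,
    num_train_epochs=3,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```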
Since some people really want to see what the models are capable of, without going through the training procedure, I provide a couple of test models. These models are by no means optimised, and are just for demonstrating how the North-T5 models can be used.
* Nynorsk Translator. Translates any text from Norwegian Bokmål to Norwegian Nynorsk. Please test the [Streamlit-demo](https://huggingface.co/spaces/north/Nynorsk) and the [HuggingFace repo](https://huggingface.co/north/demo-nynorsk-base)
* DeUnCaser. The model adds punctuation, spaces and capitalisation back into the text. The input needs to be in Norwegian but does not have to be divided into sentences or have proper capitalisation of words. You can even remove the spaces from the text, and make the model reconstruct it. It can be tested with the [Streamlit-demo](https://huggingface.co/spaces/north/DeUnCaser) and directly on the [HuggingFace repo](https://huggingface.co/north/demo-deuncaser-base)
## Training details
All models are built using the Flax-based T5X codebase, and all models are initialised with the mT5 pretrained weights. The models are trained using the T5.1.1 training regime, where they are only trained on an unsupervised masking-task. This also means that the models (contrary to the original T5) need to be finetuned to solve specific tasks. This finetuning is, however, usually not very compute intensive, and in most cases it can be performed even with free online training resources.
All the main model versions are trained for 500.000 steps after the mT5 checkpoint (1.000.000 steps). They are trained mainly on a 75GB corpus, consisting of NCC, Common Crawl and some additional high quality English text (Wikipedia). The corpus is roughly 80% Norwegian text. Additional languages are added to retain some of the multilingual capabilities, making the model both more robust to new words/concepts and also more suited as a basis for translation tasks.
While the huge models will almost always give the best results, they are also both more difficult and more expensive to finetune. I strongly recommend starting by finetuning a base-model. The base-models can easily be finetuned on a standard graphics card or a free TPU through Google Colab.
All models were trained on TPUs. The largest XXL model was trained on a TPU v4-64, the XL model on a TPU v4-32, the Large model on a TPU v4-16 and the rest on TPU v4-8. Since it is possible to reduce the batch size during fine-tuning, it is also possible to finetune on slightly smaller hardware. The rule of thumb is that you can go "one step down" when finetuning. The large models still require access to significant hardware, even for finetuning.
## Formats
All models are trained using the Flax-based T5X library. The original checkpoints are available in T5X format and can be used for both finetuning and inference. All models, except the XXL-model, are also converted to Transformers/HuggingFace. In this framework, the models can be loaded for finetuning or inference both in Flax, PyTorch and TensorFlow format.
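As a minimal loading sketch (assuming the standard `transformers` classes; the masked prompt reuses one of the widget examples above), inference with the converted PyTorch weights could look like this:
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("north/t5_xl_NCC")
model = T5ForConditionalGeneration.from_pretrained("north/t5_xl_NCC")

# Span-masking prompt: the model generates text for the <extra_id_*> sentinels.
text = ("På <extra_id_0> kan man <extra_id_1> en bok, "
        "og man kan også <extra_id_2> seg ned og lese den.")
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```
The smaller variants listed in the table above load the same way; only the repository id changes.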
## Future
I will continue to train and release additional models in this set. Which models are added depends on the feedback from the users.
## Thanks
This release would not have been possible without the support and hardware from the [TPU Research Cloud](https://sites.research.google/trc/about/) at Google Research. Both the TPU Research Cloud Team and the T5X Team have provided extremely useful support for getting this running.
Freddy Wetjen at the National Library of Norway has been of tremendous help in generating the original NCC corpus, and has also contributed to generating the collated corpus used for this training. In addition, he has been a discussion partner in the creation of these models.
Also thanks to Stefan Schweter for writing the [script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/convert_t5x_checkpoint_to_flax.py) for converting these models from T5X to HuggingFace and to Javier de la Rosa for writing the dataloader for reading the HuggingFace Datasets in T5X.
## Warranty
Use at your own risk. The models have not yet been thoroughly tested, and may contain both errors and biases.
## Contact/About
These models were trained by Per E Kummervold. Please contact me on [email protected].
|
imamnurby/rob2rand_chen_w_prefix_tc | 4b0f28554100b388a75a442b4d4771989e5c8f6a | 2022-05-21T12:14:38.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | imamnurby | null | imamnurby/rob2rand_chen_w_prefix_tc | 1 | null | transformers | 32,083 | ---
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: rob2rand_chen_w_prefix_tc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rob2rand_chen_w_prefix_tc
This model is a fine-tuned version of [imamnurby/rob2rand_chen_w_prefix](https://huggingface.co/imamnurby/rob2rand_chen_w_prefix) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2749
- Bleu: 83.9120
- Em: 86.2159
- Bleu Em: 85.0639
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Em | Bleu Em |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|
| 0.6922 | 0.71 | 500 | 0.2425 | 68.5819 | 79.7927 | 74.1873 |
| 0.086 | 1.42 | 1000 | 0.2480 | 70.9791 | 79.5855 | 75.2823 |
| 0.0865 | 2.13 | 1500 | 0.2567 | 68.7037 | 78.8256 | 73.7646 |
| 0.0758 | 2.84 | 2000 | 0.2483 | 69.4605 | 80.2418 | 74.8512 |
| 0.0683 | 3.55 | 2500 | 0.2662 | 68.3732 | 78.4456 | 73.4094 |
| 0.0643 | 4.26 | 3000 | 0.2700 | 66.5413 | 78.3765 | 72.4589 |
| 0.0596 | 4.97 | 3500 | 0.2611 | 67.4313 | 78.9637 | 73.1975 |
| 0.0519 | 5.68 | 4000 | 0.2697 | 68.3717 | 79.1019 | 73.7368 |
| 0.0478 | 6.39 | 4500 | 0.2914 | 69.7507 | 77.7202 | 73.7354 |
| 0.0461 | 7.1 | 5000 | 0.2776 | 68.5387 | 79.1019 | 73.8203 |
| 0.04 | 7.81 | 5500 | 0.2975 | 67.6316 | 78.1693 | 72.9004 |
| 0.0373 | 8.52 | 6000 | 0.2922 | 68.0161 | 79.4473 | 73.7317 |
| 0.0345 | 9.23 | 6500 | 0.3032 | 69.4580 | 79.2401 | 74.3490 |
| 0.032 | 9.94 | 7000 | 0.3104 | 67.2595 | 79.0328 | 73.1462 |
| 0.0294 | 10.65 | 7500 | 0.3077 | 65.8142 | 78.4801 | 72.1472 |
| 0.0269 | 11.36 | 8000 | 0.3092 | 70.2072 | 78.8601 | 74.5337 |
| 0.026 | 12.07 | 8500 | 0.3117 | 70.4504 | 79.4473 | 74.9489 |
| 0.0229 | 12.78 | 9000 | 0.3114 | 69.4635 | 79.2401 | 74.3518 |
| 0.0215 | 13.49 | 9500 | 0.3143 | 67.3601 | 79.3092 | 73.3346 |
| 0.0205 | 14.2 | 10000 | 0.3176 | 68.4031 | 78.9983 | 73.7007 |
| 0.0195 | 14.91 | 10500 | 0.3253 | 66.5673 | 78.9637 | 72.7655 |
| 0.0173 | 15.62 | 11000 | 0.3377 | 68.7553 | 78.7219 | 73.7386 |
| 0.0164 | 16.34 | 11500 | 0.3377 | 69.2474 | 79.1364 | 74.1919 |
| 0.0161 | 17.05 | 12000 | 0.3371 | 69.0846 | 79.6200 | 74.3523 |
| 0.0148 | 17.76 | 12500 | 0.3457 | 70.8330 | 79.3782 | 75.1056 |
| 0.0137 | 18.47 | 13000 | 0.3516 | 69.5576 | 79.2401 | 74.3988 |
| 0.0135 | 19.18 | 13500 | 0.3573 | 70.3232 | 79.1364 | 74.7298 |
| 0.0127 | 19.89 | 14000 | 0.3574 | 70.2481 | 79.1019 | 74.6750 |
| 0.0115 | 20.6 | 14500 | 0.3694 | 65.7587 | 78.3765 | 72.0676 |
| 0.0107 | 21.31 | 15000 | 0.3696 | 68.7923 | 78.5838 | 73.6880 |
| 0.0107 | 22.02 | 15500 | 0.3607 | 69.4452 | 78.8256 | 74.1354 |
| 0.0101 | 22.73 | 16000 | 0.3770 | 68.6731 | 78.5492 | 73.6112 |
| 0.0095 | 23.44 | 16500 | 0.3648 | 69.8402 | 79.7237 | 74.7819 |
| 0.0088 | 24.15 | 17000 | 0.3822 | 69.6238 | 79.0328 | 74.3283 |
| 0.0088 | 24.86 | 17500 | 0.3816 | 68.5422 | 79.1364 | 73.8393 |
| 0.0079 | 25.57 | 18000 | 0.3822 | 69.1359 | 79.2401 | 74.1880 |
| 0.0073 | 26.28 | 18500 | 0.3742 | 69.8331 | 79.6891 | 74.7611 |
| 0.007 | 26.99 | 19000 | 0.3849 | 69.5048 | 79.2746 | 74.3897 |
| 0.0072 | 27.7 | 19500 | 0.3881 | 69.6135 | 79.2055 | 74.4095 |
| 0.0059 | 28.41 | 20000 | 0.3922 | 70.2656 | 79.2746 | 74.7701 |
| 0.0069 | 29.12 | 20500 | 0.3936 | 68.2044 | 78.7910 | 73.4977 |
| 0.0059 | 29.83 | 21000 | 0.3983 | 69.6257 | 79.4473 | 74.5365 |
| 0.0055 | 30.54 | 21500 | 0.3973 | 70.4039 | 79.5509 | 74.9774 |
| 0.0057 | 31.25 | 22000 | 0.3960 | 70.3015 | 79.6546 | 74.9780 |
| 0.0056 | 31.96 | 22500 | 0.3945 | 69.9785 | 79.5855 | 74.7820 |
| 0.0049 | 32.67 | 23000 | 0.3947 | 70.1822 | 79.6546 | 74.9184 |
| 0.0049 | 33.38 | 23500 | 0.3957 | 69.1207 | 79.3437 | 74.2322 |
| 0.0048 | 34.09 | 24000 | 0.4097 | 68.8815 | 78.9292 | 73.9053 |
| 0.0043 | 34.8 | 24500 | 0.4039 | 70.0982 | 79.4473 | 74.7727 |
| 0.0044 | 35.51 | 25000 | 0.4080 | 69.3472 | 79.5164 | 74.4318 |
| 0.0042 | 36.22 | 25500 | 0.4066 | 69.0213 | 79.0674 | 74.0443 |
| 0.0038 | 36.93 | 26000 | 0.4128 | 69.1452 | 79.3092 | 74.2272 |
| 0.0037 | 37.64 | 26500 | 0.4134 | 69.2672 | 79.5164 | 74.3918 |
| 0.0034 | 38.35 | 27000 | 0.4161 | 69.7751 | 79.5509 | 74.6630 |
| 0.0038 | 39.06 | 27500 | 0.4037 | 69.4092 | 79.6546 | 74.5319 |
| 0.0031 | 39.77 | 28000 | 0.4041 | 69.3912 | 79.6546 | 74.5229 |
| 0.0032 | 40.48 | 28500 | 0.4185 | 69.1159 | 79.4473 | 74.2816 |
| 0.0031 | 41.19 | 29000 | 0.4245 | 68.6867 | 78.9983 | 73.8425 |
| 0.003 | 41.9 | 29500 | 0.4202 | 69.4091 | 79.3092 | 74.3591 |
| 0.0027 | 42.61 | 30000 | 0.4249 | 68.7400 | 79.0328 | 73.8864 |
| 0.0026 | 43.32 | 30500 | 0.4175 | 69.9729 | 79.8273 | 74.9001 |
| 0.0027 | 44.03 | 31000 | 0.4189 | 69.6688 | 79.5855 | 74.6271 |
| 0.0027 | 44.74 | 31500 | 0.4203 | 69.4071 | 79.5855 | 74.4963 |
| 0.0025 | 45.45 | 32000 | 0.4265 | 69.3197 | 79.1019 | 74.2108 |
| 0.0023 | 46.16 | 32500 | 0.4255 | 69.7513 | 79.3437 | 74.5475 |
| 0.0023 | 46.88 | 33000 | 0.4227 | 69.2893 | 79.5509 | 74.4201 |
| 0.0023 | 47.59 | 33500 | 0.4233 | 69.6060 | 79.5509 | 74.5785 |
| 0.002 | 48.3 | 34000 | 0.4239 | 69.0113 | 79.4819 | 74.2466 |
| 0.0024 | 49.01 | 34500 | 0.4239 | 68.9754 | 79.4128 | 74.1941 |
| 0.0019 | 49.72 | 35000 | 0.4228 | 68.9220 | 79.3782 | 74.1501 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.7.1
- Datasets 2.1.0
- Tokenizers 0.12.1
|
huggingtweets/morrowind_rtf | 5390dacbfa5b0606ffe98cbaa4245be04dd92348 | 2022-05-21T18:30:32.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/morrowind_rtf | 1 | null | transformers | 32,084 | ---
language: en
thumbnail: http://www.huggingtweets.com/morrowind_rtf/1653157827665/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1260443885102411779/DMPXS0hi_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">morrowind.rtf</div>
<div style="text-align: center; font-size: 14px;">@morrowind_rtf</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from morrowind.rtf.
| Data | morrowind.rtf |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 0 |
| Short tweets | 26 |
| Tweets kept | 3224 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3sgyg1y6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @morrowind_rtf's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3hz9ik0o) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3hz9ik0o/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/morrowind_rtf')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
chrisvinsen/wav2vec2-large-xlsr-53 | 2012680cb88dda4472a68a2601a98ff8a2e413e7 | 2022-05-21T22:40:29.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | chrisvinsen | null | chrisvinsen/wav2vec2-large-xlsr-53 | 1 | null | transformers | 32,085 | Entry not found |
drscotthawley/wav2vec2-base-timit-demo-google-colab | 1b186264674c06d50cd242462f0b242d4c72b869 | 2022-05-21T23:41:05.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | drscotthawley | null | drscotthawley/wav2vec2-base-timit-demo-google-colab | 1 | null | transformers | 32,086 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5436
- Wer: 0.3401
## Model description
More information needed
## Intended uses & limitations
More information needed
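A minimal inference sketch (assumed usage, not part of the auto-generated card; the audio path is a placeholder and the model expects 16 kHz input):
```python
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "drscotthawley/wav2vec2-base-timit-demo-google-colab"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load a placeholder audio file and resample it to the expected 16 kHz.
speech, _ = librosa.load("example.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```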
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5276 | 1.0 | 500 | 1.9983 | 1.0066 |
| 0.8606 | 2.01 | 1000 | 0.5323 | 0.5220 |
| 0.4339 | 3.01 | 1500 | 0.4697 | 0.4512 |
| 0.3026 | 4.02 | 2000 | 0.4342 | 0.4266 |
| 0.2297 | 5.02 | 2500 | 0.5001 | 0.4135 |
| 0.1939 | 6.02 | 3000 | 0.4350 | 0.3897 |
| 0.1613 | 7.03 | 3500 | 0.4740 | 0.3883 |
| 0.1452 | 8.03 | 4000 | 0.4289 | 0.3825 |
| 0.1362 | 9.04 | 4500 | 0.4721 | 0.3927 |
| 0.1146 | 10.04 | 5000 | 0.4707 | 0.3730 |
| 0.1061 | 11.04 | 5500 | 0.4470 | 0.3701 |
| 0.0947 | 12.05 | 6000 | 0.4694 | 0.3722 |
| 0.0852 | 13.05 | 6500 | 0.5222 | 0.3733 |
| 0.0741 | 14.06 | 7000 | 0.4881 | 0.3657 |
| 0.069 | 15.06 | 7500 | 0.4957 | 0.3677 |
| 0.0679 | 16.06 | 8000 | 0.5241 | 0.3634 |
| 0.0618 | 17.07 | 8500 | 0.5091 | 0.3564 |
| 0.0576 | 18.07 | 9000 | 0.5055 | 0.3557 |
| 0.0493 | 19.08 | 9500 | 0.5013 | 0.3515 |
| 0.0469 | 20.08 | 10000 | 0.5506 | 0.3530 |
| 0.044 | 21.08 | 10500 | 0.5564 | 0.3528 |
| 0.0368 | 22.09 | 11000 | 0.5213 | 0.3509 |
| 0.0355 | 23.09 | 11500 | 0.5707 | 0.3495 |
| 0.0357 | 24.1 | 12000 | 0.5558 | 0.3483 |
| 0.0285 | 25.1 | 12500 | 0.5613 | 0.3455 |
| 0.0285 | 26.1 | 13000 | 0.5533 | 0.3480 |
| 0.0266 | 27.11 | 13500 | 0.5526 | 0.3462 |
| 0.0249 | 28.11 | 14000 | 0.5488 | 0.3429 |
| 0.0237 | 29.12 | 14500 | 0.5436 | 0.3401 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu115
- Datasets 1.18.3
- Tokenizers 0.12.1
|
chrisvinsen/wav2vec2-1 | ba1bc10b4931f0b007482057b24ce4f16d4326fb | 2022-05-22T04:53:44.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | chrisvinsen | null | chrisvinsen/wav2vec2-1 | 1 | null | transformers | 32,087 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-1
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5980
- Wer: 0.4949
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.2691 | 1.37 | 200 | 2.9045 | 1.0 |
| 1.6356 | 2.74 | 400 | 0.9277 | 0.8678 |
| 0.8062 | 4.11 | 600 | 0.8200 | 0.7776 |
| 0.5983 | 5.48 | 800 | 0.6829 | 0.7161 |
| 0.4863 | 6.85 | 1000 | 0.6205 | 0.6507 |
| 0.407 | 8.22 | 1200 | 0.6519 | 0.6763 |
| 0.3641 | 9.59 | 1400 | 0.5771 | 0.6088 |
| 0.3291 | 10.96 | 1600 | 0.6548 | 0.6202 |
| 0.2905 | 12.33 | 1800 | 0.6538 | 0.5828 |
| 0.2613 | 13.7 | 2000 | 0.6281 | 0.5864 |
| 0.2354 | 15.07 | 2200 | 0.5936 | 0.5630 |
| 0.2145 | 16.44 | 2400 | 0.5877 | 0.5699 |
| 0.2008 | 17.81 | 2600 | 0.5469 | 0.5488 |
| 0.1751 | 19.18 | 2800 | 0.6453 | 0.5584 |
| 0.169 | 20.55 | 3000 | 0.5871 | 0.5357 |
| 0.1521 | 21.92 | 3200 | 0.6063 | 0.5318 |
| 0.1426 | 23.29 | 3400 | 0.5609 | 0.5171 |
| 0.1287 | 24.66 | 3600 | 0.6056 | 0.5126 |
| 0.1236 | 26.03 | 3800 | 0.5994 | 0.5074 |
| 0.1138 | 27.4 | 4000 | 0.5980 | 0.4944 |
| 0.1083 | 28.77 | 4200 | 0.5980 | 0.4949 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
PeritusDux/DialoGPT-small-rick | f4d4296412803f26ee9b7523e5a9c91643e5df9c | 2022-05-22T05:58:04.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | PeritusDux | null | PeritusDux/DialoGPT-small-rick | 1 | null | transformers | 32,088 | ---
tags:
- conversational
---
# Rick DialoGPT Model |
chrisvinsen/wav2vec2-2 | 463264a1fca27bf2ae62d8f6aee39352a9bc8595 | 2022-05-22T09:19:16.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | chrisvinsen | null | chrisvinsen/wav2vec2-2 | 1 | null | transformers | 32,089 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-2
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9253
- Wer: 0.8133
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 8.4469 | 0.34 | 200 | 3.7440 | 1.0 |
| 3.1152 | 0.69 | 400 | 3.3755 | 1.0 |
| 2.9228 | 1.03 | 600 | 3.0427 | 1.0 |
| 2.8661 | 1.38 | 800 | 2.9406 | 1.0 |
| 2.8402 | 1.72 | 1000 | 2.9034 | 1.0 |
| 2.8301 | 2.07 | 1200 | 2.8850 | 1.0 |
| 2.8088 | 2.41 | 1400 | 2.8479 | 1.0 |
| 2.6892 | 2.75 | 1600 | 2.5800 | 1.0 |
| 2.3249 | 3.1 | 1800 | 2.1310 | 1.0 |
| 1.9687 | 3.44 | 2000 | 1.7652 | 0.9982 |
| 1.7338 | 3.79 | 2200 | 1.5430 | 0.9974 |
| 1.5698 | 4.13 | 2400 | 1.3927 | 0.9985 |
| 1.4475 | 4.48 | 2600 | 1.3186 | 0.9911 |
| 1.3764 | 4.82 | 2800 | 1.2406 | 0.9647 |
| 1.3022 | 5.16 | 3000 | 1.1954 | 0.9358 |
| 1.2409 | 5.51 | 3200 | 1.1450 | 0.8990 |
| 1.1989 | 5.85 | 3400 | 1.1107 | 0.8794 |
| 1.1478 | 6.2 | 3600 | 1.0839 | 0.8667 |
| 1.106 | 6.54 | 3800 | 1.0507 | 0.8573 |
| 1.0792 | 6.88 | 4000 | 1.0179 | 0.8463 |
| 1.0636 | 7.23 | 4200 | 0.9974 | 0.8355 |
| 1.0224 | 7.57 | 4400 | 0.9757 | 0.8343 |
| 1.0166 | 7.92 | 4600 | 0.9641 | 0.8261 |
| 0.9925 | 8.26 | 4800 | 0.9553 | 0.8183 |
| 0.9934 | 8.61 | 5000 | 0.9466 | 0.8199 |
| 0.9741 | 8.95 | 5200 | 0.9353 | 0.8172 |
| 0.9613 | 9.29 | 5400 | 0.9331 | 0.8133 |
| 0.9714 | 9.64 | 5600 | 0.9272 | 0.8144 |
| 0.9593 | 9.98 | 5800 | 0.9253 | 0.8133 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
versae/roberta-base-bne-finetuned-recores2 | 382fd992890fedd5ba3c226422a07c3c119a8a82 | 2022-05-22T08:32:49.000Z | [
"pytorch",
"tensorboard",
"roberta",
"multiple-choice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | multiple-choice | false | versae | null | versae/roberta-base-bne-finetuned-recores2 | 1 | null | transformers | 32,090 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-bne-finetuned-recores2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-recores2
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 8.9761
- Accuracy: 0.3113
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.6094 | 1.0 | 1047 | 1.6094 | 0.2259 |
| 1.6094 | 2.0 | 2094 | 1.6094 | 0.2121 |
| 1.6094 | 3.0 | 3141 | 1.6094 | 0.2314 |
| 1.6094 | 4.0 | 4188 | 1.6094 | 0.1956 |
| 1.6094 | 5.0 | 5235 | 1.6094 | 0.2121 |
| 1.6121 | 6.0 | 6282 | 1.6094 | 0.1818 |
| 1.6094 | 7.0 | 7329 | 1.6094 | 0.2259 |
| 1.6092 | 8.0 | 8376 | 1.6094 | 0.1736 |
| 1.6094 | 9.0 | 9423 | 1.6094 | 0.1956 |
| 1.6094 | 10.0 | 10470 | 1.6094 | 0.1736 |
| 1.6094 | 11.0 | 11517 | 1.6094 | 0.1983 |
| 1.6094 | 12.0 | 12564 | 1.6094 | 0.2176 |
| 1.6094 | 13.0 | 13611 | 1.6094 | 0.1928 |
| 1.6096 | 14.0 | 14658 | 1.6094 | 0.1846 |
| 1.6145 | 15.0 | 15705 | 1.6094 | 0.2066 |
| 1.6094 | 16.0 | 16752 | 1.6022 | 0.2121 |
| 1.8471 | 17.0 | 17799 | 1.6101 | 0.1763 |
| 2.8148 | 18.0 | 18846 | 2.7585 | 0.2452 |
| 2.5445 | 19.0 | 19893 | 2.4576 | 0.2920 |
| 1.9972 | 20.0 | 20940 | 3.6002 | 0.2865 |
| 1.9844 | 21.0 | 21987 | 5.3809 | 0.3168 |
| 2.849 | 22.0 | 23034 | 7.2230 | 0.3140 |
| 1.4208 | 23.0 | 24081 | 8.0602 | 0.2975 |
| 0.4045 | 24.0 | 25128 | 8.2947 | 0.3058 |
| 0.3052 | 25.0 | 26175 | 8.9761 | 0.3113 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
ak987/distilbert-base-uncased-finetuned-squad | 9e343df85f923b1c28f9f140001b1b828e0292b0 | 2022-05-22T13:07:19.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | ak987 | null | ak987/distilbert-base-uncased-finetuned-squad | 1 | null | transformers | 32,091 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1576
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2253 | 1.0 | 5533 | 1.1728 |
| 0.9685 | 2.0 | 11066 | 1.1400 |
| 0.7604 | 3.0 | 16599 | 1.1576 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
abhilashawasthi/bert-base-uncased_dish_descriptions_128 | 5ba365d571ae822a60f8f31a6cd310ce017bbfe6 | 2022-05-22T12:14:38.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | abhilashawasthi | null | abhilashawasthi/bert-base-uncased_dish_descriptions_128 | 1 | null | transformers | 32,092 | Entry not found |
chrisvinsen/wav2vec2-3 | c46db718f8d008dd8ba3457112a72df18b4719e5 | 2022-05-22T13:15:00.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | chrisvinsen | null | chrisvinsen/wav2vec2-3 | 1 | null | transformers | 32,093 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-3
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1124
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 3.7797 | 0.34 | 200 | 3.0703 | 1.0 |
| 2.8701 | 0.69 | 400 | 3.3128 | 1.0 |
| 2.8695 | 1.03 | 600 | 3.1333 | 1.0 |
| 2.8634 | 1.38 | 800 | 3.1634 | 1.0 |
| 2.8629 | 1.72 | 1000 | 3.0432 | 1.0 |
| 2.8652 | 2.07 | 1200 | 3.0300 | 1.0 |
| 2.8602 | 2.41 | 1400 | 3.1894 | 1.0 |
| 2.8622 | 2.75 | 1600 | 3.1950 | 1.0 |
| 2.8606 | 3.1 | 1800 | 3.0656 | 1.0 |
| 2.8605 | 3.44 | 2000 | 3.0614 | 1.0 |
| 2.8595 | 3.79 | 2200 | 3.0697 | 1.0 |
| 2.8504 | 4.13 | 2400 | 3.1404 | 1.0 |
| 2.8553 | 4.48 | 2600 | 3.0682 | 1.0 |
| 2.8585 | 4.82 | 2800 | 3.1393 | 1.0 |
| 2.8567 | 5.16 | 3000 | 3.1013 | 1.0 |
| 2.8539 | 5.51 | 3200 | 3.0740 | 1.0 |
| 2.8588 | 5.85 | 3400 | 3.0616 | 1.0 |
| 2.8509 | 6.2 | 3600 | 3.1032 | 1.0 |
| 2.8589 | 6.54 | 3800 | 3.1348 | 1.0 |
| 2.8505 | 6.88 | 4000 | 3.1514 | 1.0 |
| 2.8548 | 7.23 | 4200 | 3.1319 | 1.0 |
| 2.8466 | 7.57 | 4400 | 3.1412 | 1.0 |
| 2.8549 | 7.92 | 4600 | 3.1235 | 1.0 |
| 2.8532 | 8.26 | 4800 | 3.0751 | 1.0 |
| 2.8548 | 8.61 | 5000 | 3.0946 | 1.0 |
| 2.8513 | 8.95 | 5200 | 3.0840 | 1.0 |
| 2.845 | 9.29 | 5400 | 3.0896 | 1.0 |
| 2.8592 | 9.64 | 5600 | 3.1055 | 1.0 |
| 2.8453 | 9.98 | 5800 | 3.1124 | 1.0 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
moghis/xlm-roberta-base-finetuned-panx-it | de733f40e22bea6f31d08b9cecc0ba3bdc1b4f3d | 2022-05-22T12:33:39.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | moghis | null | moghis/xlm-roberta-base-finetuned-panx-it | 1 | null | transformers | 32,094 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2380
- F1 Score: 0.8289
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7058 | 1.0 | 70 | 0.3183 | 0.7480 |
| 0.2808 | 2.0 | 140 | 0.2647 | 0.8070 |
| 0.1865 | 3.0 | 210 | 0.2380 | 0.8289 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
abhilashawasthi/bert-base-uncased_dish_descriptions_128_0.5M | 07fe3104e47b094ccdee0e426440ba6c9d011278 | 2022-05-22T15:39:27.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | abhilashawasthi | null | abhilashawasthi/bert-base-uncased_dish_descriptions_128_0.5M | 1 | null | transformers | 32,095 | Entry not found |
Leizhang/xlm-roberta-base-finetuned-panx-de-fr | a0b662643bfcacd9d19afd942cc1034dfa68c950 | 2022-05-22T13:45:12.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | Leizhang | null | Leizhang/xlm-roberta-base-finetuned-panx-de-fr | 1 | null | transformers | 32,096 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1631
- F1: 0.8579
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2878 | 1.0 | 715 | 0.1840 | 0.8247 |
| 0.1456 | 2.0 | 1430 | 0.1596 | 0.8473 |
| 0.0925 | 3.0 | 2145 | 0.1631 | 0.8579 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
chrisvinsen/wav2vec2-4 | 9a3e87f51c05ff8061ea986df39a1d9021dd2b55 | 2022-05-22T16:29:51.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | chrisvinsen | null | chrisvinsen/wav2vec2-4 | 1 | null | transformers | 32,097 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-4
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1442
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 5.1303 | 1.37 | 200 | 3.2783 | 1.0 |
| 2.8798 | 2.74 | 400 | 3.1233 | 1.0 |
| 2.8586 | 4.11 | 600 | 3.1612 | 1.0 |
| 2.8613 | 5.48 | 800 | 3.1354 | 1.0 |
| 2.8588 | 6.85 | 1000 | 3.2634 | 1.0 |
| 2.8572 | 8.22 | 1200 | 3.0905 | 1.0 |
| 2.8573 | 9.59 | 1400 | 3.2315 | 1.0 |
| 2.8532 | 10.96 | 1600 | 3.0999 | 1.0 |
| 2.8567 | 12.33 | 1800 | 3.1496 | 1.0 |
| 2.8556 | 13.7 | 2000 | 3.1081 | 1.0 |
| 2.8551 | 15.07 | 2200 | 3.1139 | 1.0 |
| 2.8545 | 16.44 | 2400 | 3.1621 | 1.0 |
| 2.8547 | 17.81 | 2600 | 3.1124 | 1.0 |
| 2.8551 | 19.18 | 2800 | 3.1612 | 1.0 |
| 2.854 | 20.55 | 3000 | 3.1052 | 1.0 |
| 2.8542 | 21.92 | 3200 | 3.1558 | 1.0 |
| 2.8544 | 23.29 | 3400 | 3.1370 | 1.0 |
| 2.8546 | 24.66 | 3600 | 3.1616 | 1.0 |
| 2.8563 | 26.03 | 3800 | 3.1366 | 1.0 |
| 2.8514 | 27.4 | 4000 | 3.1434 | 1.0 |
| 2.8543 | 28.77 | 4200 | 3.1442 | 1.0 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
fujiki/t5-v1_1-base-en2ja | 2fb89098c6e7d312ce250551cd2ac561fb662388 | 2022-05-22T14:04:33.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | fujiki | null | fujiki/t5-v1_1-base-en2ja | 1 | null | transformers | 32,098 | Entry not found |
spasis/bert-finetuned-squad | 021a26ac11dd209dcc1805e83ed46f8b86f73cb3 | 2022-05-22T15:56:08.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | spasis | null | spasis/bert-finetuned-squad | 1 | null | transformers | 32,099 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
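As an illustrative sketch only (assumed usage, not included in the auto-generated card), extractive question answering with this checkpoint could look like:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="spasis/bert-finetuned-squad")

context = ("The model was fine-tuned on the SQuAD dataset "
           "using a batch size of 32 for three epochs.")
result = qa(question="Which dataset was the model fine-tuned on?", context=context)
print(result["answer"], round(result["score"], 3))
```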
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
|