modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
gary109/wav2vec2-common_voice-tr-demo-dist | 91dfca7f8dcae286fb46a155a3a8fe31c3e90e5d | 2022-04-12T09:12:48.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:common_voice",
"transformers",
"common_voice",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | gary109 | null | gary109/wav2vec2-common_voice-tr-demo-dist | 1 | null | transformers | 31,200 | ---
language:
- tr
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-common_voice-tr-demo-dist
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-common_voice-tr-demo-dist
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3934
- Wer: 0.3305
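A minimal usage sketch, assuming the standard `transformers` ASR pipeline and a placeholder Turkish audio file:
```python
from transformers import pipeline

# Minimal sketch: load the fine-tuned checkpoint with the automatic-speech-recognition
# pipeline and transcribe a local audio file (the path below is a placeholder).
asr = pipeline(
    "automatic-speech-recognition",
    model="gary109/wav2vec2-common_voice-tr-demo-dist",
)
print(asr("sample_tr.wav")["text"])
```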
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 8
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
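As a rough illustration only (the `output_dir` is an assumption, not taken from the original training run), the hyperparameters above correspond approximately to the following `transformers.TrainingArguments`:
```python
from transformers import TrainingArguments

# Illustrative mapping of the listed hyperparameters onto TrainingArguments.
# Per-device train batch size 4 on 2 GPUs yields the total train batch size of 8
# reported above; fp16 corresponds to "Native AMP" mixed precision.
training_args = TrainingArguments(
    output_dir="wav2vec2-common_voice-tr-demo-dist",  # assumed output directory
    learning_rate=3e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=15.0,
    fp16=True,
)
```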
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.5459 | 0.23 | 100 | 3.6773 | 1.0 |
| 3.2247 | 0.46 | 200 | 3.1491 | 0.9999 |
| 2.3457 | 0.69 | 300 | 2.4236 | 1.0041 |
| 0.9149 | 0.92 | 400 | 0.9471 | 0.7684 |
| 0.6622 | 1.15 | 500 | 0.7518 | 0.6863 |
| 0.7205 | 1.38 | 600 | 0.6387 | 0.6402 |
| 0.6978 | 1.61 | 700 | 0.5611 | 0.5739 |
| 0.5317 | 1.84 | 800 | 0.5061 | 0.5418 |
| 0.5222 | 2.07 | 900 | 0.4839 | 0.5344 |
| 0.4467 | 2.3 | 1000 | 0.5060 | 0.5339 |
| 0.3196 | 2.53 | 1100 | 0.4619 | 0.5213 |
| 0.276 | 2.76 | 1200 | 0.4595 | 0.5020 |
| 0.3569 | 2.99 | 1300 | 0.4339 | 0.4901 |
| 0.2236 | 3.22 | 1400 | 0.4602 | 0.4887 |
| 0.293 | 3.45 | 1500 | 0.4376 | 0.4639 |
| 0.1677 | 3.68 | 1600 | 0.4371 | 0.4605 |
| 0.1838 | 3.91 | 1700 | 0.4116 | 0.4589 |
| 0.1225 | 4.14 | 1800 | 0.4144 | 0.4495 |
| 0.2301 | 4.37 | 1900 | 0.4250 | 0.4567 |
| 0.1931 | 4.6 | 2000 | 0.4081 | 0.4470 |
| 0.1427 | 4.83 | 2100 | 0.4295 | 0.4482 |
| 0.361 | 5.06 | 2200 | 0.4374 | 0.4445 |
| 0.3272 | 5.29 | 2300 | 0.4088 | 0.4258 |
| 0.3686 | 5.52 | 2400 | 0.4087 | 0.4258 |
| 0.3087 | 5.75 | 2500 | 0.4100 | 0.4371 |
| 0.4637 | 5.98 | 2600 | 0.4038 | 0.4219 |
| 0.1485 | 6.21 | 2700 | 0.4361 | 0.4197 |
| 0.1341 | 6.44 | 2800 | 0.4217 | 0.4132 |
| 0.1185 | 6.67 | 2900 | 0.4244 | 0.4097 |
| 0.1588 | 6.9 | 3000 | 0.4212 | 0.4181 |
| 0.0697 | 7.13 | 3100 | 0.3981 | 0.4073 |
| 0.0491 | 7.36 | 3200 | 0.3992 | 0.4010 |
| 0.088 | 7.59 | 3300 | 0.4206 | 0.4022 |
| 0.0731 | 7.82 | 3400 | 0.3998 | 0.3841 |
| 0.2767 | 8.05 | 3500 | 0.4195 | 0.3829 |
| 0.1725 | 8.28 | 3600 | 0.4167 | 0.3946 |
| 0.1242 | 8.51 | 3700 | 0.4177 | 0.3821 |
| 0.1133 | 8.74 | 3800 | 0.3993 | 0.3802 |
| 0.1952 | 8.97 | 3900 | 0.4132 | 0.3904 |
| 0.1399 | 9.2 | 4000 | 0.4010 | 0.3795 |
| 0.047 | 9.43 | 4100 | 0.4128 | 0.3703 |
| 0.049 | 9.66 | 4200 | 0.4319 | 0.3670 |
| 0.0994 | 9.89 | 4300 | 0.4118 | 0.3631 |
| 0.1209 | 10.11 | 4400 | 0.4296 | 0.3722 |
| 0.0484 | 10.34 | 4500 | 0.4130 | 0.3615 |
| 0.2065 | 10.57 | 4600 | 0.3958 | 0.3668 |
| 0.133 | 10.8 | 4700 | 0.4102 | 0.3679 |
| 0.0622 | 11.03 | 4800 | 0.4137 | 0.3585 |
| 0.0999 | 11.26 | 4900 | 0.4042 | 0.3583 |
| 0.0346 | 11.49 | 5000 | 0.4183 | 0.3573 |
| 0.072 | 11.72 | 5100 | 0.4060 | 0.3530 |
| 0.0365 | 11.95 | 5200 | 0.3968 | 0.3483 |
| 0.0615 | 12.18 | 5300 | 0.3958 | 0.3485 |
| 0.1067 | 12.41 | 5400 | 0.3987 | 0.3453 |
| 0.0253 | 12.64 | 5500 | 0.4182 | 0.3405 |
| 0.0636 | 12.87 | 5600 | 0.4199 | 0.3458 |
| 0.0506 | 13.1 | 5700 | 0.4056 | 0.3412 |
| 0.0944 | 13.33 | 5800 | 0.4061 | 0.3381 |
| 0.1187 | 13.56 | 5900 | 0.4113 | 0.3381 |
| 0.0237 | 13.79 | 6000 | 0.3973 | 0.3343 |
| 0.0166 | 14.02 | 6100 | 0.4001 | 0.3357 |
| 0.1189 | 14.25 | 6200 | 0.3931 | 0.3315 |
| 0.0375 | 14.48 | 6300 | 0.3944 | 0.3329 |
| 0.0537 | 14.71 | 6400 | 0.3953 | 0.3308 |
| 0.045 | 14.94 | 6500 | 0.3933 | 0.3303 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.9.1+cu102
- Datasets 1.13.3
- Tokenizers 0.11.6
|
Kuray107/ls-timit-wsj0-swbd-100percent-supervised-meta | 664dc6b74aef16d2d7bfc7ed1b4d25c04b13cfde | 2022-04-13T06:27:53.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | Kuray107 | null | Kuray107/ls-timit-wsj0-swbd-100percent-supervised-meta | 1 | null | transformers | 31,201 | Entry not found |
mrm8488/vit-base-patch16-224-pretrained-cifar10 | 02ef24354b739a7aa0ada5dd752dd3a69c8d21b4 | 2022-04-19T15:10:58.000Z | [
"pytorch",
"tensorboard",
"vit",
"dataset:cifar10",
"transformers",
"masked-image-modeling",
"generated_from_trainer",
"model-index"
] | null | false | mrm8488 | null | mrm8488/vit-base-patch16-224-pretrained-cifar10 | 1 | 1 | transformers | 31,202 | ---
tags:
- masked-image-modeling
- generated_from_trainer
datasets:
- cifar10
model-index:
- name: vit-cifar10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ViT pre-trained from scratch on CIFAR10
This model is a ViT (with the same architecture as Google's [vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224)) pre-trained from scratch on the CIFAR10 dataset for masked image modeling.
It achieves the following results on the evaluation set:
- Loss: 0.0891
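A minimal usage sketch, assuming the `ViTForMaskedImageModeling` head in `transformers` with a dummy image and a random patch mask (none of this is from the original card):
```python
import torch
from PIL import Image
from transformers import ViTFeatureExtractor, ViTForMaskedImageModeling

repo = "mrm8488/vit-base-patch16-224-pretrained-cifar10"
feature_extractor = ViTFeatureExtractor.from_pretrained(repo)
model = ViTForMaskedImageModeling.from_pretrained(repo)

image = Image.new("RGB", (224, 224))  # placeholder image
inputs = feature_extractor(images=image, return_tensors="pt")

# Randomly mask patches, as done during masked-image-modeling pre-training.
num_patches = (model.config.image_size // model.config.patch_size) ** 2
bool_masked_pos = torch.randint(0, 2, (1, num_patches)).bool()

outputs = model(**inputs, bool_masked_pos=bool_masked_pos)
print(outputs.loss)  # reconstruction loss on the masked patches
```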
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 0.289 | 1.0 | 2657 | 0.2941 |
| 0.2858 | 2.0 | 5314 | 0.2809 |
| 0.2693 | 3.0 | 7971 | 0.2738 |
| 0.2578 | 4.0 | 10628 | 0.2546 |
| 0.2211 | 5.0 | 13285 | 0.2153 |
| 0.1799 | 6.0 | 15942 | 0.1795 |
| 0.158 | 7.0 | 18599 | 0.1623 |
| 0.1481 | 8.0 | 21256 | 0.1453 |
| 0.1391 | 9.0 | 23913 | 0.1368 |
| 0.1348 | 10.0 | 26570 | 0.1354 |
| 0.129 | 11.0 | 29227 | 0.1249 |
| 0.126 | 12.0 | 31884 | 0.1229 |
| 0.1216 | 13.0 | 34541 | 0.1184 |
| 0.1175 | 14.0 | 37198 | 0.1185 |
| 0.1137 | 15.0 | 39855 | 0.1146 |
| 0.1125 | 16.0 | 42512 | 0.1117 |
| 0.1112 | 17.0 | 45169 | 0.1100 |
| 0.1108 | 18.0 | 47826 | 0.1089 |
| 0.1061 | 19.0 | 50483 | 0.1070 |
| 0.1073 | 20.0 | 53140 | 0.1076 |
| 0.1066 | 21.0 | 55797 | 0.1061 |
| 0.1065 | 22.0 | 58454 | 0.1056 |
| 0.1045 | 23.0 | 61111 | 0.1037 |
| 0.1052 | 24.0 | 63768 | 0.1055 |
| 0.102 | 25.0 | 66425 | 0.1028 |
| 0.1025 | 26.0 | 69082 | 0.1034 |
| 0.1037 | 27.0 | 71739 | 0.1025 |
| 0.1022 | 28.0 | 74396 | 0.1014 |
| 0.1026 | 29.0 | 77053 | 0.1011 |
| 0.1022 | 30.0 | 79710 | 0.1001 |
| 0.0997 | 31.0 | 82367 | 0.1007 |
| 0.0998 | 32.0 | 85024 | 0.1016 |
| 0.1019 | 33.0 | 87681 | 0.1008 |
| 0.0999 | 34.0 | 90338 | 0.1000 |
| 0.0998 | 35.0 | 92995 | 0.0993 |
| 0.0994 | 36.0 | 95652 | 0.0992 |
| 0.0966 | 37.0 | 98309 | 0.0991 |
| 0.0997 | 38.0 | 100966 | 0.0970 |
| 0.0991 | 39.0 | 103623 | 0.0979 |
| 0.099 | 40.0 | 106280 | 0.0983 |
| 0.0974 | 41.0 | 108937 | 0.0980 |
| 0.0974 | 42.0 | 111594 | 0.0971 |
| 0.0972 | 43.0 | 114251 | 0.0970 |
| 0.0991 | 44.0 | 116908 | 0.0970 |
| 0.0979 | 45.0 | 119565 | 0.0972 |
| 0.097 | 46.0 | 122222 | 0.0970 |
| 0.0936 | 47.0 | 124879 | 0.0967 |
| 0.0948 | 48.0 | 127536 | 0.0967 |
| 0.0974 | 49.0 | 130193 | 0.0954 |
| 0.0958 | 50.0 | 132850 | 0.0954 |
| 0.0948 | 51.0 | 135507 | 0.0955 |
| 0.095 | 52.0 | 138164 | 0.0953 |
| 0.0939 | 53.0 | 140821 | 0.0945 |
| 0.0961 | 54.0 | 143478 | 0.0948 |
| 0.0964 | 55.0 | 146135 | 0.0955 |
| 0.0934 | 56.0 | 148792 | 0.0948 |
| 0.0965 | 57.0 | 151449 | 0.0943 |
| 0.0966 | 58.0 | 154106 | 0.0941 |
| 0.0926 | 59.0 | 156763 | 0.0938 |
| 0.0928 | 60.0 | 159420 | 0.0942 |
| 0.093 | 61.0 | 162077 | 0.0936 |
| 0.0939 | 62.0 | 164734 | 0.0939 |
| 0.0936 | 63.0 | 167391 | 0.0936 |
| 0.093 | 64.0 | 170048 | 0.0929 |
| 0.0929 | 65.0 | 172705 | 0.0930 |
| 0.0917 | 66.0 | 175362 | 0.0925 |
| 0.0948 | 67.0 | 178019 | 0.0932 |
| 0.0931 | 68.0 | 180676 | 0.0927 |
| 0.0911 | 69.0 | 183333 | 0.0922 |
| 0.0923 | 70.0 | 185990 | 0.0924 |
| 0.0923 | 71.0 | 188647 | 0.0923 |
| 0.0929 | 72.0 | 191304 | 0.0919 |
| 0.0916 | 73.0 | 193961 | 0.0923 |
| 0.0927 | 74.0 | 196618 | 0.0921 |
| 0.0907 | 75.0 | 199275 | 0.0922 |
| 0.0927 | 76.0 | 201932 | 0.0919 |
| 0.0925 | 77.0 | 204589 | 0.0913 |
| 0.0921 | 78.0 | 207246 | 0.0917 |
| 0.0895 | 79.0 | 209903 | 0.0912 |
| 0.0916 | 80.0 | 212560 | 0.0914 |
| 0.09 | 81.0 | 215217 | 0.0909 |
| 0.0916 | 82.0 | 217874 | 0.0908 |
| 0.0902 | 83.0 | 220531 | 0.0907 |
| 0.0911 | 84.0 | 223188 | 0.0910 |
| 0.091 | 85.0 | 225845 | 0.0903 |
| 0.0903 | 86.0 | 228502 | 0.0905 |
| 0.0907 | 87.0 | 231159 | 0.0901 |
| 0.0908 | 88.0 | 233816 | 0.0907 |
| 0.0911 | 89.0 | 236473 | 0.0902 |
| 0.0905 | 90.0 | 239130 | 0.0906 |
| 0.089 | 91.0 | 241787 | 0.0901 |
| 0.0908 | 92.0 | 244444 | 0.0896 |
| 0.0894 | 93.0 | 247101 | 0.0892 |
| 0.0899 | 94.0 | 249758 | 0.0893 |
| 0.0899 | 95.0 | 252415 | 0.0897 |
| 0.0904 | 96.0 | 255072 | 0.0898 |
| 0.0906 | 97.0 | 257729 | 0.0894 |
| 0.0892 | 98.0 | 260386 | 0.0894 |
| 0.0881 | 99.0 | 263043 | 0.0892 |
| 0.09 | 100.0 | 265700 | 0.0894 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
smeoni/nbme-roberta-base | 35539a99f6b7e24d4ec66fedc541d3614921c587 | 2022-04-12T15:18:41.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | smeoni | null | smeoni/nbme-roberta-base | 1 | null | transformers | 31,203 | Entry not found |
McGill-NLP/bart-qg-mlquestions-selftraining | d0f68605d6ae69576dbd0f33f12e43994360544b | 2022-04-12T22:22:52.000Z | [
"pytorch",
"bart",
"text2text-generation",
"arxiv:1910.13461",
"arxiv:2104.08801",
"transformers",
"license:cc-by-4.0",
"autotrain_compatible"
] | text2text-generation | false | McGill-NLP | null | McGill-NLP/bart-qg-mlquestions-selftraining | 1 | null | transformers | 31,204 | ---
license: cc-by-4.0
---
# BART-base fine-tuned on NaturalQuestions for **Question Generation**
[BART model](https://arxiv.org/pdf/1910.13461.pdf) trained for Question Generation in an unsupervised manner using the [Self-Training](https://arxiv.org/pdf/2104.08801.pdf) algorithm (Kulshreshtha et al., EMNLP 2021). The training data consists of unaligned questions and passages from the [MLQuestions dataset](https://github.com/McGill-NLP/MLQuestions/tree/main/data).
## Details of Self-Training
The Self-Training algorithm was presented as a baseline to compare against the proposed Back-Training algorithm in [Back-Training excels Self-Training at Unsupervised Domain Adaptation of Question Generation and Passage Retrieval](https://arxiv.org/pdf/2104.08801.pdf) by *Devang Kulshreshtha, Robert Belfer, Iulian Vlad Serban, and Siva Reddy*. Here is the abstract:
In this work, we introduce back-training, an alternative to self-training for unsupervised domain adaptation (UDA) from source to target domain. While self-training generates synthetic training data where natural inputs are aligned with noisy outputs, back-training results in natural outputs aligned with noisy inputs. This significantly reduces the gap between the target domain and synthetic data distribution, and reduces model overfitting to the source domain. We run UDA experiments on question generation and passage retrieval from the Natural Questions domain to machine learning and biomedical domains. We find that back-training vastly outperforms self-training by a mean improvement of 7.8 BLEU-4 points on generation, and 17.6% top-20 retrieval accuracy across both domains. We further propose consistency filters to remove low-quality synthetic data before training. We also release a new domain-adaptation dataset MLQuestions containing 35K unaligned questions, 50K unaligned passages, and 3K aligned question-passage pairs.
## Model training 🏋️
The training script can be found [here](https://github.com/McGill-NLP/MLQuestions/blob/main/UDA-SelfTraining.sh)
## Model in Action 🚀
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
#Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("geekydevu/bart-qg-mlquestions-selftraining")
#Load the model
model = AutoModelForSeq2SeqLM.from_pretrained("geekydevu/bart-qg-mlquestions-selftraining")
```
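The snippet above only loads the model; a hedged continuation that actually generates a question (the passage text and generation settings are illustrative, not taken from the original repository) could look like:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("geekydevu/bart-qg-mlquestions-selftraining")
model = AutoModelForSeq2SeqLM.from_pretrained("geekydevu/bart-qg-mlquestions-selftraining")

# Illustrative passage; any machine-learning paragraph works here.
passage = "Gradient descent iteratively updates model parameters in the direction of the negative gradient of the loss."
inputs = tokenizer(passage, return_tensors="pt", truncation=True, max_length=512)
question_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(question_ids[0], skip_special_tokens=True))
```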
## Citation
If you want to cite this model you can use this:
```bibtex
@inproceedings{kulshreshtha-etal-2021-back,
title = "Back-Training excels Self-Training at Unsupervised Domain Adaptation of Question Generation and Passage Retrieval",
author = "Kulshreshtha, Devang and
Belfer, Robert and
Serban, Iulian Vlad and
Reddy, Siva",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.566",
pages = "7064--7078",
abstract = "In this work, we introduce back-training, an alternative to self-training for unsupervised domain adaptation (UDA). While self-training generates synthetic training data where natural inputs are aligned with noisy outputs, back-training results in natural outputs aligned with noisy inputs. This significantly reduces the gap between target domain and synthetic data distribution, and reduces model overfitting to source domain. We run UDA experiments on question generation and passage retrieval from the Natural Questions domain to machine learning and biomedical domains. We find that back-training vastly outperforms self-training by a mean improvement of 7.8 BLEU-4 points on generation, and 17.6{\%} top-20 retrieval accuracy across both domains. We further propose consistency filters to remove low-quality synthetic data before training. We also release a new domain-adaptation dataset - MLQuestions containing 35K unaligned questions, 50K unaligned passages, and 3K aligned question-passage pairs.",
}
```
> Created by [Devang Kulshreshtha](https://geekydevu.netlify.app/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
Kuray107/ls-timit-wsj0-swbd-100percent-supervised-aug | 4027670834db5364e22135b4a1f079bbe56c9cf9 | 2022-04-14T07:42:13.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | Kuray107 | null | Kuray107/ls-timit-wsj0-swbd-100percent-supervised-aug | 1 | null | transformers | 31,205 | Entry not found |
CenIA/albert-tiny-spanish-finetuned-qa-sqac | 13ebf761e69ed9d9ee08da1a8e931041229cc571 | 2022-04-13T02:14:52.000Z | [
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | CenIA | null | CenIA/albert-tiny-spanish-finetuned-qa-sqac | 1 | null | transformers | 31,206 | Entry not found |
CenIA/albert-base-spanish-finetuned-qa-sqac | cd6c5a750b476c816712bfc633a5d7bf92ba972d | 2022-04-13T02:20:07.000Z | [
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | CenIA | null | CenIA/albert-base-spanish-finetuned-qa-sqac | 1 | null | transformers | 31,207 | Entry not found |
Wizounovziki/t5-small-devices-sum-ver3 | 80236f5b0fab14ea99dc1aa506e8596abc6ca426 | 2022-04-13T03:52:55.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Wizounovziki | null | Wizounovziki/t5-small-devices-sum-ver3 | 1 | null | transformers | 31,208 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-devices-sum-ver3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-devices-sum-ver3
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1325
- Rouge1: 95.6631
- Rouge2: 83.6149
- Rougel: 95.6622
- Rougelsum: 95.6632
- Gen Len: 4.9279
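A minimal usage sketch, assuming the `transformers` summarization pipeline and an illustrative input string (the training data itself is not described in the card):
```python
from transformers import pipeline

# Minimal sketch: run the fine-tuned T5 checkpoint through the summarization
# pipeline; the device description below is purely illustrative.
summarizer = pipeline("summarization", model="Wizounovziki/t5-small-devices-sum-ver3")
print(summarizer("Apple iPhone 13 Pro Max 256GB graphite smartphone", min_length=1, max_length=16))
```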
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 467 | 0.3307 | 90.9817 | 74.3762 | 90.9596 | 90.9781 | 4.7527 |
| 1.0254 | 2.0 | 934 | 0.2365 | 92.6761 | 78.1252 | 92.6664 | 92.6682 | 4.8004 |
| 0.3526 | 3.0 | 1401 | 0.1904 | 93.8503 | 80.4523 | 93.8286 | 93.8338 | 4.8221 |
| 0.2643 | 4.0 | 1868 | 0.1638 | 94.8079 | 82.1779 | 94.7815 | 94.7853 | 4.917 |
| 0.2075 | 5.0 | 2335 | 0.1503 | 95.1619 | 82.6284 | 95.1533 | 95.1578 | 4.9263 |
| 0.1831 | 6.0 | 2802 | 0.1408 | 95.2357 | 82.8152 | 95.2261 | 95.2263 | 4.9287 |
| 0.161 | 7.0 | 3269 | 0.1386 | 95.4993 | 83.2609 | 95.4935 | 95.4933 | 4.9269 |
| 0.1589 | 8.0 | 3736 | 0.1344 | 95.6363 | 83.4727 | 95.6304 | 95.632 | 4.9309 |
| 0.1517 | 9.0 | 4203 | 0.1330 | 95.6702 | 83.6329 | 95.6669 | 95.6736 | 4.9301 |
| 0.1436 | 10.0 | 4670 | 0.1325 | 95.6631 | 83.6149 | 95.6622 | 95.6632 | 4.9279 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
gary109/wav2vec2-base-mirst500-ac | 1f1d6307645715effc0701f201cdf9c773a4178f | 2022-04-13T07:30:40.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"dataset:mir_st500",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | audio-classification | false | gary109 | null | gary109/wav2vec2-base-mirst500-ac | 1 | null | transformers | 31,209 | ---
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
datasets:
- mir_st500
metrics:
- accuracy
model-index:
- name: wav2vec2-base-mirst500-ac
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-mirst500-ac
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the /workspace/datasets/datasets/MIR_ST500/MIR_ST500.py dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7566
- Accuracy: 0.7570
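A minimal usage sketch, assuming the `transformers` audio-classification pipeline and a placeholder audio clip:
```python
from transformers import pipeline

# Minimal sketch: classify an audio clip with the fine-tuned checkpoint
# (the file path is a placeholder, not from the original card).
classifier = pipeline("audio-classification", model="gary109/wav2vec2-base-mirst500-ac")
print(classifier("example_clip.wav"))
```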
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 1
- seed: 0
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.3718 | 1.0 | 1304 | 1.4422 | 0.4255 |
| 1.1285 | 2.0 | 2608 | 1.1061 | 0.5869 |
| 1.0275 | 3.0 | 3912 | 0.8825 | 0.6724 |
| 0.9982 | 4.0 | 5216 | 0.9181 | 0.6713 |
| 0.9482 | 5.0 | 6520 | 0.8717 | 0.6971 |
| 0.8687 | 6.0 | 7824 | 0.8041 | 0.7164 |
| 0.8841 | 7.0 | 9128 | 0.8869 | 0.7034 |
| 0.8094 | 8.0 | 10432 | 0.8216 | 0.7172 |
| 0.7733 | 9.0 | 11736 | 0.8018 | 0.7298 |
| 0.7892 | 10.0 | 13040 | 0.7517 | 0.7426 |
| 0.8736 | 11.0 | 14344 | 0.7482 | 0.7482 |
| 0.7035 | 12.0 | 15648 | 0.7730 | 0.7488 |
| 0.7361 | 13.0 | 16952 | 0.7677 | 0.7510 |
| 0.7808 | 14.0 | 18256 | 0.7765 | 0.7512 |
| 0.7359 | 15.0 | 19560 | 0.7566 | 0.7570 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.9.1+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Davlan/afro-xlmr-mini | ef581abbc893df85e8b7f8037e713eceb94233ab | 2022-04-15T14:33:50.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"arxiv:2204.06487",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | fill-mask | false | Davlan | null | Davlan/afro-xlmr-mini | 1 | null | transformers | 31,210 | ---
license: afl-3.0
---
# afro-xlmr-mini
AfroXLMR-mini was created by MLM adaptation of the [XLM-R-miniLM](https://huggingface.co/nreimers/mMiniLMv2-L12-H384-distilled-from-XLMR-Large) model on 17 African languages (Afrikaans, Amharic, Hausa, Igbo, Malagasy, Chichewa, Oromo, Naija, Kinyarwanda, Kirundi, Shona, Somali, Sesotho, Swahili, isiXhosa, Yoruba, and isiZulu) covering the major African language families, plus 3 high-resource languages (Arabic, French, and English).
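A minimal usage sketch, assuming the standard `transformers` fill-mask pipeline (the prompt is illustrative; English is one of the covered languages):
```python
from transformers import pipeline

# Minimal sketch: query the MLM-adapted checkpoint with the fill-mask pipeline.
unmasker = pipeline("fill-mask", model="Davlan/afro-xlmr-mini")
print(unmasker("The capital of Nigeria is <mask>."))
```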
## Eval results on MasakhaNER (F-score)
language| XLM-R-miniLM| XLM-R-base |XLM-R-large| afro-xlmr-base | afro-xlmr-small | afro-xlmr-mini
-|-|-|-|-|-|-
amh |69.5|70.6|76.2|76.1|70.1|69.7
hau |74.5|89.5|90.5|91.2|91.4|87.7
ibo |81.9|84.8|84.1|87.4|86.6|83.5
kin |68.6|73.3|73.8|78.0|77.5|74.1
lug |64.7|79.7|81.6|82.9|83.2|77.4
luo |11.7|74.9|73.6|75.1|75.4|17.5
pcm |83.2|87.3|89.0|89.6|89.0|85.5
swa |86.3|87.4|89.4|88.6|88.7|86.0
wol |51.7|63.9|67.9|67.4|65.9|59.0
yor |72.0|78.3|78.9|82.1|81.3|75.1
### BibTeX entry and citation info
```
@misc{afro_maft,
doi = {10.48550/ARXIV.2204.06487},
url = {https://arxiv.org/abs/2204.06487},
author = {Alabi, Jesujoba O. and Adelani, David Ifeoluwa and Mosbach, Marius and Klakow, Dietrich},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Multilingual Language Model Adaptive Fine-Tuning: A Study on African Languages},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
cosmo/distilbert-base-uncased-finetuned-squad | 411133867a353d70556fb210645d5aa2b770d126 | 2022-04-22T07:14:22.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | cosmo | null | cosmo/distilbert-base-uncased-finetuned-squad | 1 | null | transformers | 31,211 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
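A minimal usage sketch, assuming the `transformers` question-answering pipeline and an illustrative context/question pair:
```python
from transformers import pipeline

# Minimal sketch: extractive QA with the fine-tuned checkpoint.
qa = pipeline("question-answering", model="cosmo/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the squad dataset.",
)
print(result["answer"], result["score"])
```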
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
veddm/all-distilroberta-v1-finetuned-DIT-10_epochs | b00eaa8e4297c1a9ef191f3a0f239051863b20b9 | 2022-04-13T16:31:00.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | veddm | null | veddm/all-distilroberta-v1-finetuned-DIT-10_epochs | 1 | null | transformers | 31,212 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: all-distilroberta-v1-finetuned-DIT-10_epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-distilroberta-v1-finetuned-DIT-10_epochs
This model is a fine-tuned version of [sentence-transformers/all-distilroberta-v1](https://huggingface.co/sentence-transformers/all-distilroberta-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0044
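A minimal usage sketch, assuming the `transformers` fill-mask pipeline (the prompt is illustrative; the DIT fine-tuning data is not described in the card):
```python
from transformers import pipeline

# Minimal sketch: the checkpoint is tagged fill-mask, so it can be queried
# with a RoBERTa-style <mask> token.
unmasker = pipeline("fill-mask", model="veddm/all-distilroberta-v1-finetuned-DIT-10_epochs")
print(unmasker("Paris is the <mask> of France."))
```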
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 358 | 0.0196 |
| 0.3013 | 2.0 | 716 | 0.0092 |
| 0.0073 | 3.0 | 1074 | 0.0065 |
| 0.0073 | 4.0 | 1432 | 0.0054 |
| 0.0021 | 5.0 | 1790 | 0.0051 |
| 0.0007 | 6.0 | 2148 | 0.0047 |
| 0.0004 | 7.0 | 2506 | 0.0047 |
| 0.0004 | 8.0 | 2864 | 0.0046 |
| 0.0004 | 9.0 | 3222 | 0.0044 |
| 0.0003 | 10.0 | 3580 | 0.0044 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cpu
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Gam/roberta-base-finetuned-cuad | 75368cf50a7a8640bc5eb48e5753aeb618f5c67e | 2022-04-13T13:11:38.000Z | [
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | Gam | null | Gam/roberta-base-finetuned-cuad | 1 | null | transformers | 31,213 | Entry not found |
ales/wav2vec2-cv-be | 2d73dd6d07fd1438e7ecf0fe8ee1cbfd326e5184 | 2022-04-13T21:33:15.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"be",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"audio",
"speech",
"license:gpl-3.0",
"model-index"
] | automatic-speech-recognition | false | ales | null | ales/wav2vec2-cv-be | 1 | null | transformers | 31,214 | ---
license: gpl-3.0
language:
- be
tags:
- audio
- speech
- automatic-speech-recognition
datasets:
- mozilla-foundation/common_voice_8_0
metrics:
- wer
model-index:
- name: wav2vec2
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: be
metrics:
- name: Dev WER
type: wer
value: 17.61
- name: Test WER
type: wer
value: 18.7
- name: Dev WER (with LM)
type: wer
value: 11.5
- name: Test WER (with LM)
type: wer
value: 12.4
---
# Automatic Speech Recognition for Belarusian language
Fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the `mozilla-foundation/common_voice_8_0 be` dataset.
The `Train`, `Dev`, and `Test` splits were used as they appear in the dataset. No additional data was used from the `Validated` split;
only one voicing of each sentence was used, following the way the data is split by the [CommonVoice CorporaCreator](https://github.com/common-voice/CorporaCreator).
To build a better model, **one can use additional voicings from the `Validated` split** for sentences already present in the `Train`, `Dev`, and `Test` splits,
i.e. enlarge those splits.
The language model was built using [KenLM](https://kheafield.com/code/kenlm/estimation/).
A 5-gram language model was trained on sentences from the `Train + (Other - Dev - Test)` splits of the `mozilla-foundation/common_voice_8_0 be` dataset.
Source code is available [here](https://github.com/yks72p/stt_be).
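A minimal usage sketch of the acoustic model alone, assuming the standard `transformers` ASR pipeline and a placeholder audio file (the KenLM rescoring described above is not applied here):
```python
from transformers import pipeline

# Minimal sketch: transcribe Belarusian speech with the acoustic model only
# (no language-model rescoring); the audio path is a placeholder.
asr = pipeline("automatic-speech-recognition", model="ales/wav2vec2-cv-be")
print(asr("speech_be.wav")["text"])
```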
## Run model in a browser
This page contains an interactive demo widget that lets you test this model right in your browser.
However, this widget uses the acoustic model only, **without** the language model that significantly improves overall performance.
You can play with the **full pipeline of acoustic model + language model** on the following [Spaces page](https://huggingface.co/spaces/ales/wav2vec2-cv-be-lm)
(it also works from a browser).
|
Chikashi/t5-small-finetuned-cnndm_wikihow_test_on_cnndm | 4f1049c65a5d5af5395130ca5d204a1e4d98e87d | 2022-04-13T13:57:08.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Chikashi | null | Chikashi/t5-small-finetuned-cnndm_wikihow_test_on_cnndm | 1 | null | transformers | 31,215 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-finetuned-cnndm_wikihow_test_on_cnndm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnndm_wikihow_test_on_cnndm
This model is a fine-tuned version of [Chikashi/t5-small-finetuned-cnndm-wikihow](https://huggingface.co/Chikashi/t5-small-finetuned-cnndm-wikihow) on an unknown dataset.
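A minimal usage sketch, assuming T5-style summarization with an explicit `summarize:` prefix (the input text and generation settings are illustrative only):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "Chikashi/t5-small-finetuned-cnndm_wikihow_test_on_cnndm"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

# Illustrative input; replace with a real news article or wikiHow passage.
text = "summarize: " + "Long news article text goes here ..."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=60, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```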
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.12.1
|
Helsinki-NLP/opus-mt-tc-big-en-bg | 153a411055fbe771cf5930d1faf0e1bd3426baa4 | 2022-06-01T13:04:25.000Z | [
"pytorch",
"marian",
"text2text-generation",
"bg",
"en",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-big-en-bg | 1 | null | transformers | 31,216 | ---
language:
- bg
- en
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-en-bg
results:
- task:
name: Translation eng-bul
type: translation
args: eng-bul
dataset:
name: flores101-devtest
type: flores_101
args: eng bul devtest
metrics:
- name: BLEU
type: bleu
value: 44.9
- task:
name: Translation eng-bul
type: translation
args: eng-bul
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: eng-bul
metrics:
- name: BLEU
type: bleu
value: 51.5
---
# opus-mt-tc-big-en-bg
Neural machine translation model for translating from English (en) to Bulgarian (bg).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-02-25
* source language(s): eng
* target language(s): bul
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-02-25.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bul/opusTCv20210807+bt_transformer-big_2022-02-25.zip)
* more information released models: [OPUS-MT eng-bul README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-bul/README.md)
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"2001 is the year when the 21st century begins.",
"This is Copacabana!"
]
model_name = "pytorch-models/opus-mt-tc-big-en-bg"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# 2001 е годината, в която започва 21-ви век.
# Това е Копакабана!
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-bg")
print(pipe("2001 is the year when the 21st century begins."))
# expected output: 2001 е годината, в която започва 21-ви век.
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-02-25.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bul/opusTCv20210807+bt_transformer-big_2022-02-25.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-02-25.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bul/opusTCv20210807+bt_transformer-big_2022-02-25.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| eng-bul | tatoeba-test-v2021-08-07 | 0.68987 | 51.5 | 10000 | 69504 |
| eng-bul | flores101-devtest | 0.69891 | 44.9 | 1012 | 24700 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 3405783
* port time: Wed Apr 13 16:29:32 EEST 2022
* port machine: LM0-400-22516.local
|
CenIA/albert-xlarge-spanish-finetuned-qa-sqac | d6d477d51ac36d8d5105c537979d7194671e93a7 | 2022-04-13T13:55:08.000Z | [
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | CenIA | null | CenIA/albert-xlarge-spanish-finetuned-qa-sqac | 1 | null | transformers | 31,217 | Entry not found |
Helsinki-NLP/opus-mt-tc-big-en-et | de9f19aa1c172bc4e56a07ed639ffc66505e0801 | 2022-06-01T13:02:46.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"et",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-big-en-et | 1 | null | transformers | 31,218 | ---
language:
- en
- et
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-en-et
results:
- task:
name: Translation eng-est
type: translation
args: eng-est
dataset:
name: flores101-devtest
type: flores_101
args: eng est devtest
metrics:
- name: BLEU
type: bleu
value: 28.3
- task:
name: Translation eng-est
type: translation
args: eng-est
dataset:
name: newsdev2018
type: newsdev2018
args: eng-est
metrics:
- name: BLEU
type: bleu
value: 25.2
- task:
name: Translation eng-est
type: translation
args: eng-est
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: eng-est
metrics:
- name: BLEU
type: bleu
value: 53.4
- task:
name: Translation eng-est
type: translation
args: eng-est
dataset:
name: newstest2018
type: wmt-2018-news
args: eng-est
metrics:
- name: BLEU
type: bleu
value: 26.7
---
# opus-mt-tc-big-en-et
Neural machine translation model for translating from English (en) to Estonian (et).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-13
* source language(s): eng
* target language(s): est
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-03-13.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-est/opusTCv20210807+bt_transformer-big_2022-03-13.zip)
* more information released models: [OPUS-MT eng-est README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-est/README.md)
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
">>est<< A cab is waiting.",
">>vro<< Where do you live?"
]
model_name = "pytorch-models/opus-mt-tc-big-en-et"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Takso ootab.
# Kus sa elad?
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-et")
print(pipe(">>est<< A cab is waiting."))
# expected output: Takso ootab.
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-03-13.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-est/opusTCv20210807+bt_transformer-big_2022-03-13.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-03-13.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-est/opusTCv20210807+bt_transformer-big_2022-03-13.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| eng-est | tatoeba-test-v2021-08-07 | 0.71255 | 53.4 | 1359 | 7992 |
| eng-est | flores101-devtest | 0.61306 | 28.3 | 1012 | 19788 |
| eng-est | newsdev2018 | 0.57225 | 25.2 | 2000 | 34492 |
| eng-est | newstest2018 | 0.58540 | 26.7 | 2000 | 36269 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 3405783
* port time: Wed Apr 13 17:00:19 EEST 2022
* port machine: LM0-400-22516.local
|
Gam/roberta-base-finetuned-cuad-gam | 93721767dccf4dc83e9b2acb7e34814d6cc6bee8 | 2022-04-13T15:21:51.000Z | [
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | Gam | null | Gam/roberta-base-finetuned-cuad-gam | 1 | null | transformers | 31,219 | Entry not found |
Helsinki-NLP/opus-mt-tc-big-en-gmq | ad0ea37d1c8081d4c65da7f5a3ab1b3b7f85fa11 | 2022-06-01T13:03:00.000Z | [
"pytorch",
"marian",
"text2text-generation",
"tc",
"big",
"en",
"gmq",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-big-en-gmq | 1 | 1 | transformers | 31,220 | ---
language:
- da
- en
- fo
- gmq
- is
- nb
- nn
- "no"
- sv
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-en-gmq
results:
- task:
name: Translation eng-dan
type: translation
args: eng-dan
dataset:
name: flores101-devtest
type: flores_101
args: eng dan devtest
metrics:
- name: BLEU
type: bleu
value: 47.7
- task:
name: Translation eng-isl
type: translation
args: eng-isl
dataset:
name: flores101-devtest
type: flores_101
args: eng isl devtest
metrics:
- name: BLEU
type: bleu
value: 24.1
- task:
name: Translation eng-nob
type: translation
args: eng-nob
dataset:
name: flores101-devtest
type: flores_101
args: eng nob devtest
metrics:
- name: BLEU
type: bleu
value: 34.5
- task:
name: Translation eng-swe
type: translation
args: eng-swe
dataset:
name: flores101-devtest
type: flores_101
args: eng swe devtest
metrics:
- name: BLEU
type: bleu
value: 46.9
- task:
name: Translation eng-isl
type: translation
args: eng-isl
dataset:
name: newsdev2021.en-is
type: newsdev2021.en-is
args: eng-isl
metrics:
- name: BLEU
type: bleu
value: 22.6
- task:
name: Translation eng-dan
type: translation
args: eng-dan
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: eng-dan
metrics:
- name: BLEU
type: bleu
value: 61.6
- task:
name: Translation eng-isl
type: translation
args: eng-isl
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: eng-isl
metrics:
- name: BLEU
type: bleu
value: 39.9
- task:
name: Translation eng-nno
type: translation
args: eng-nno
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: eng-nno
metrics:
- name: BLEU
type: bleu
value: 40.1
- task:
name: Translation eng-nob
type: translation
args: eng-nob
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: eng-nob
metrics:
- name: BLEU
type: bleu
value: 57.3
- task:
name: Translation eng-swe
type: translation
args: eng-swe
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: eng-swe
metrics:
- name: BLEU
type: bleu
value: 60.9
- task:
name: Translation eng-isl
type: translation
args: eng-isl
dataset:
name: newstest2021.en-is
type: wmt-2021-news
args: eng-isl
metrics:
- name: BLEU
type: bleu
value: 21.5
---
# opus-mt-tc-big-en-gmq
Neural machine translation model for translating from English (en) to North Germanic languages (gmq).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-17
* source language(s): eng
* target language(s): dan fao isl nno nob nor swe
* valid target language labels: >>dan<< >>fao<< >>isl<< >>nno<< >>nob<< >>nor<< >>swe<<
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-03-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmq/opusTCv20210807+bt_transformer-big_2022-03-17.zip)
* more information released models: [OPUS-MT eng-gmq README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gmq/README.md)
* more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)
This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>dan<<`
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
">>nno<< The United States borders Canada.",
">>nob<< This is the biggest hotel in this city."
]
model_name = "pytorch-models/opus-mt-tc-big-en-gmq"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# USA grensar til Canada.
# Dette er det største hotellet i denne byen.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-gmq")
print(pipe(">>nno<< The United States borders Canada."))
# expected output: USA grensar til Canada.
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-03-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmq/opusTCv20210807+bt_transformer-big_2022-03-17.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-03-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmq/opusTCv20210807+bt_transformer-big_2022-03-17.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| eng-dan | tatoeba-test-v2021-08-07 | 0.75165 | 61.6 | 10795 | 79385 |
| eng-fao | tatoeba-test-v2021-08-07 | 0.40395 | 18.3 | 294 | 1933 |
| eng-isl | tatoeba-test-v2021-08-07 | 0.59731 | 39.9 | 2503 | 19023 |
| eng-nno | tatoeba-test-v2021-08-07 | 0.61271 | 40.1 | 460 | 3428 |
| eng-nob | tatoeba-test-v2021-08-07 | 0.72380 | 57.3 | 4539 | 36119 |
| eng-swe | tatoeba-test-v2021-08-07 | 0.74197 | 60.9 | 10362 | 68067 |
| eng-dan | flores101-devtest | 0.70810 | 47.7 | 1012 | 24638 |
| eng-isl | flores101-devtest | 0.52076 | 24.1 | 1012 | 22834 |
| eng-nob | flores101-devtest | 0.62760 | 34.5 | 1012 | 23873 |
| eng-swe | flores101-devtest | 0.70129 | 46.9 | 1012 | 23121 |
| eng-isl | newsdev2021.en-is | 0.50376 | 22.6 | 2004 | 43721 |
| eng-isl | newstest2021.en-is | 0.50516 | 21.5 | 1000 | 25233 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 3405783
* port time: Wed Apr 13 17:14:46 EEST 2022
* port machine: LM0-400-22516.local
|
Gam/distilbert-base-uncased-finetuned-cuad-distilbert | 16a6f8e61a121c3958fd35b2586cb74d96095096 | 2022-04-13T16:42:58.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | Gam | null | Gam/distilbert-base-uncased-finetuned-cuad-distilbert | 1 | null | transformers | 31,221 | Entry not found |
huggingtweets/notthatsuperman | e37d43452f4c183eb97de36aedc5470f7d207c8b | 2022-04-24T22:13:21.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/notthatsuperman | 1 | null | transformers | 31,222 | ---
language: en
thumbnail: http://www.huggingtweets.com/notthatsuperman/1650838396576/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1518349985649246211/cSRbyu-Y_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">NotThatSuperman</div>
<div style="text-align: center; font-size: 14px;">@notthatsuperman</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from NotThatSuperman.
| Data | NotThatSuperman |
| --- | --- |
| Tweets downloaded | 3198 |
| Retweets | 288 |
| Short tweets | 851 |
| Tweets kept | 2059 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2le2bshi/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @notthatsuperman's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3jdmiehf) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3jdmiehf/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/notthatsuperman')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
DioLiu/distilroberta-base-Ctr3 | 10692bbebe5bfac124167d505d397bd167121df8 | 2022-04-14T03:48:04.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | DioLiu | null | DioLiu/distilroberta-base-Ctr3 | 1 | null | transformers | 31,223 | Entry not found |
eleldar/marian-finetuned-kde4-en-to-fr | 351534b11054661cdc0f2713d74b607e2b6fe5a3 | 2022-04-13T17:28:00.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | eleldar | null | eleldar/marian-finetuned-kde4-en-to-fr | 1 | null | transformers | 31,224 | Entry not found |
masakhane/afrimt5_fr_bbj_news | 20cfadf04b005f858995e00f8ac597a1ecae39c9 | 2022-04-13T18:28:29.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/afrimt5_fr_bbj_news | 1 | null | transformers | 31,225 | ---
license: afl-3.0
---
|
masakhane/afrimbart_fr_bbj_news | 8043139a4614c28b14db81d3f96a5052ec1fbad5 | 2022-04-13T18:28:36.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/afrimbart_fr_bbj_news | 1 | null | transformers | 31,226 | ---
license: afl-3.0
---
|
masakhane/afribyt5_fr_bbj_news | afcd8f6b0d2afe553d060b7287c4262901d7a394 | 2022-04-13T19:29:52.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/afribyt5_fr_bbj_news | 1 | null | transformers | 31,227 | ---
license: afl-3.0
---
|
masakhane/mbart50_fr_bbj_news | 2fb9a87175e477e8deca70020cc7aab2a4373256 | 2022-04-13T20:41:10.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/mbart50_fr_bbj_news | 1 | null | transformers | 31,228 | ---
license: afl-3.0
---
|
masakhane/m2m100_418M_bbj_fr_news | 37c26c1e843a4b49b2bd9777ba5970e615c0a538 | 2022-04-13T21:40:11.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_bbj_fr_news | 1 | null | transformers | 31,229 | ---
license: afl-3.0
---
|
masakhane/m2m100_418M_bbj_fr_rel_news_ft | 02122732b8df4f8d7f9830c34841fa50c98cae3f | 2022-04-14T08:42:42.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_bbj_fr_rel_news_ft | 1 | null | transformers | 31,230 | ---
license: afl-3.0
---
|
masakhane/m2m100_418M_bbj_fr_rel | 6bc1d62a701ff3be9fb27b0cabfd2b0c5d64b59d | 2022-04-14T08:42:52.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_bbj_fr_rel | 1 | null | transformers | 31,231 | ---
license: afl-3.0
---
|
atomsspawn/DialoGPT-medium-dumbledore | ebd0db71affe006197971331c37280de35cbdedc | 2022-04-13T17:33:10.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | atomsspawn | null | atomsspawn/DialoGPT-medium-dumbledore | 1 | null | transformers | 31,232 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
Gam/distilbert-base-uncased-finetuned-CUAD-IE | d0511f147434385de3690f9e139657f79e630588 | 2022-04-13T19:25:28.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | Gam | null | Gam/distilbert-base-uncased-finetuned-CUAD-IE | 1 | 0 | transformers | 31,233 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-CUAD-IE
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-CUAD-IE
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0108
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.0149 | 1.0 | 33737 | 0.0108 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Tokenizers 0.12.1
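## How to use
The card gives no usage details, so here is a minimal extractive-QA sketch. The question and context strings are illustrative placeholders (not taken from CUAD); only the standard `question-answering` pipeline API is assumed.
```python
from transformers import pipeline

# Minimal usage sketch; the question/context below are illustrative, not from CUAD.
qa = pipeline("question-answering", model="Gam/distilbert-base-uncased-finetuned-CUAD-IE")
result = qa(
    question="Which law governs this agreement?",
    context="This Agreement shall be governed by and construed under the laws of the State of Delaware.",
)
print(result["answer"], round(result["score"], 3))
```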
|
flood/xlm-roberta-base-finetuned-panx-de | 38cbf8155777e9254196c2e9ff174e7602f41551 | 2022-06-10T04:39:15.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | flood | null | flood/xlm-roberta-base-finetuned-panx-de | 1 | null | transformers | 31,234 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8633935674508466
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1344
- F1: 0.8634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2588 | 1.0 | 525 | 0.1676 | 0.8194 |
| 0.1318 | 2.0 | 1050 | 0.1326 | 0.8513 |
| 0.084 | 3.0 | 1575 | 0.1344 | 0.8634 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
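## How to use
Since the card stops at the training log, here is a minimal NER inference sketch. The German example sentence is made up, and `aggregation_strategy="simple"` is just one reasonable choice for merging word-piece predictions into spans.
```python
from transformers import pipeline

# Minimal sketch for PAN-X.de NER; the example sentence is illustrative.
ner = pipeline(
    "token-classification",
    model="flood/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge sub-word predictions into entity spans
)
print(ner("Angela Merkel besuchte gestern das Brandenburger Tor in Berlin."))
```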
|
Tianle/distilbert-base-uncased-finetuned-squad | 80d97b2c043772d1a1b5145bf1b7e44f227bed03 | 2022-04-14T18:59:38.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | Tianle | null | Tianle/distilbert-base-uncased-finetuned-squad | 1 | null | transformers | 31,235 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2169
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2631 | 1.0 | 5533 | 1.2169 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Adrian/distilbert-base-uncased-finetuned-squad | 969ac093161bdb75c8bf1bf9d7344be1295c4621 | 2022-04-16T18:28:34.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | Adrian | null | Adrian/distilbert-base-uncased-finetuned-squad | 1 | null | transformers | 31,236 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1484
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2114 | 1.0 | 5533 | 1.1509 |
| 0.9537 | 2.0 | 11066 | 1.1229 |
| 0.7459 | 3.0 | 16599 | 1.1484 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
jekdoieao/wav2vec2-large-xls-r-300m-turkish-colab | 95942e31ac545f87ebe3bcca531e402a01212903 | 2022-04-14T02:33:42.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jekdoieao | null | jekdoieao/wav2vec2-large-xls-r-300m-turkish-colab | 1 | null | transformers | 31,237 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3731
- Wer: 0.3635
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.967 | 3.67 | 400 | 0.6661 | 0.6756 |
| 0.3882 | 7.34 | 800 | 0.4310 | 0.4755 |
| 0.1828 | 11.01 | 1200 | 0.4146 | 0.4485 |
| 0.126 | 14.68 | 1600 | 0.4014 | 0.4254 |
| 0.0955 | 18.35 | 2000 | 0.4125 | 0.4040 |
| 0.0749 | 22.02 | 2400 | 0.3912 | 0.3960 |
| 0.0606 | 25.69 | 2800 | 0.3707 | 0.3771 |
| 0.0477 | 29.36 | 3200 | 0.3731 | 0.3635 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
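## How to use
A minimal transcription sketch, assuming a local 16 kHz mono recording; `sample.wav` is a placeholder path, and ffmpeg must be available for the pipeline to decode audio files.
```python
from transformers import pipeline

# Minimal sketch; "sample.wav" is a placeholder path to a Turkish speech recording.
asr = pipeline(
    "automatic-speech-recognition",
    model="jekdoieao/wav2vec2-large-xls-r-300m-turkish-colab",
)
print(asr("sample.wav")["text"])
```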
|
gary109/wav2vec2-large-xlsr-53-MIR_ST500_ASR | 4b501dfda3389ab69be95f2a7a89cda44a2e05e9 | 2022-04-14T11:05:02.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:mir_st500",
"transformers",
"/workspace/datasets/datasets/MIR_ST500/MIR_ST500.py",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | gary109 | null | gary109/wav2vec2-large-xlsr-53-MIR_ST500_ASR | 1 | null | transformers | 31,238 | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- /workspace/datasets/datasets/MIR_ST500/MIR_ST500.py
- generated_from_trainer
datasets:
- mir_st500
model-index:
- name: wav2vec2-large-xlsr-53-MIR_ST500_ASR
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-MIR_ST500_ASR
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the /WORKSPACE/DATASETS/DATASETS/MIR_ST500/MIR_ST500.PY - ASR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5180
- Wer: 0.5824
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 8
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 56.764 | 0.13 | 100 | 24.4254 | 0.9990 |
| 7.5081 | 0.27 | 200 | 2.9111 | 1.0 |
| 3.4899 | 0.4 | 300 | 2.1361 | 1.0 |
| 2.4094 | 0.53 | 400 | 1.9088 | 1.0 |
| 2.6764 | 0.67 | 500 | 1.8543 | 1.0 |
| 3.3107 | 0.8 | 600 | 1.7979 | 1.0 |
| 2.2856 | 0.93 | 700 | 1.7571 | 1.0 |
| 1.856 | 1.07 | 800 | 1.7351 | 0.9648 |
| 1.8882 | 1.2 | 900 | 1.7181 | 0.9654 |
| 2.1731 | 1.33 | 1000 | 1.6736 | 0.9637 |
| 1.8252 | 1.46 | 1100 | 1.3468 | 0.9647 |
| 1.9092 | 1.6 | 1200 | 1.3302 | 0.9627 |
| 1.9435 | 1.73 | 1300 | 1.2428 | 0.9634 |
| 1.3027 | 1.86 | 1400 | 1.2133 | 0.9644 |
| 1.3438 | 2.0 | 1500 | 1.2002 | 0.9635 |
| 1.2161 | 2.13 | 1600 | 1.1901 | 0.9636 |
| 1.203 | 2.26 | 1700 | 1.1620 | 0.9616 |
| 1.1159 | 2.4 | 1800 | 1.1660 | 0.9598 |
| 1.1466 | 2.53 | 1900 | 1.2089 | 0.9605 |
| 1.0563 | 2.66 | 2000 | 1.1732 | 0.9603 |
| 1.1019 | 2.8 | 2100 | 1.1468 | 0.9612 |
| 1.029 | 2.93 | 2200 | 1.1188 | 0.9622 |
| 1.0079 | 3.06 | 2300 | 1.0604 | 0.9617 |
| 1.0483 | 3.2 | 2400 | 1.0499 | 0.9612 |
| 0.9892 | 3.33 | 2500 | 1.0292 | 0.9606 |
| 0.9556 | 3.46 | 2600 | 1.0228 | 0.9604 |
| 0.9626 | 3.6 | 2700 | 1.0028 | 0.9617 |
| 1.0537 | 3.73 | 2800 | 1.0051 | 0.9608 |
| 1.0648 | 3.86 | 2900 | 0.9723 | 0.9604 |
| 0.8657 | 3.99 | 3000 | 0.9620 | 0.9605 |
| 0.8964 | 4.13 | 3100 | 1.0432 | 0.9612 |
| 0.9639 | 4.26 | 3200 | 0.9322 | 0.9589 |
| 0.8965 | 4.39 | 3300 | 0.9091 | 0.9559 |
| 0.8257 | 4.53 | 3400 | 0.8999 | 0.9499 |
| 0.8002 | 4.66 | 3500 | 0.8754 | 0.9554 |
| 0.7335 | 4.79 | 3600 | 0.8608 | 0.9572 |
| 0.936 | 4.93 | 3700 | 0.8564 | 0.9510 |
| 0.8185 | 5.06 | 3800 | 0.8890 | 0.9517 |
| 0.7422 | 5.19 | 3900 | 0.8262 | 0.9392 |
| 0.7784 | 5.33 | 4000 | 0.8292 | 0.9259 |
| 0.8123 | 5.46 | 4100 | 0.8180 | 0.9374 |
| 0.7256 | 5.59 | 4200 | 0.8158 | 0.9077 |
| 0.7638 | 5.73 | 4300 | 0.8423 | 0.9170 |
| 0.6737 | 5.86 | 4400 | 0.7818 | 0.9182 |
| 0.7644 | 5.99 | 4500 | 0.7692 | 0.8934 |
| 0.7911 | 6.13 | 4600 | 0.7627 | 0.8978 |
| 0.6922 | 6.26 | 4700 | 0.7627 | 0.8906 |
| 0.7369 | 6.39 | 4800 | 0.7570 | 0.8838 |
| 0.6642 | 6.52 | 4900 | 0.9476 | 0.8953 |
| 0.7502 | 6.66 | 5000 | 0.7336 | 0.8955 |
| 0.6243 | 6.79 | 5100 | 0.7583 | 0.8896 |
| 0.6912 | 6.92 | 5200 | 0.7764 | 0.8761 |
| 0.7744 | 7.06 | 5300 | 0.7615 | 0.8790 |
| 0.6195 | 7.19 | 5400 | 0.7114 | 0.8712 |
| 0.7418 | 7.32 | 5500 | 0.8314 | 0.8864 |
| 0.7658 | 7.46 | 5600 | 0.8531 | 0.8718 |
| 0.6821 | 7.59 | 5700 | 0.9068 | 0.8786 |
| 0.6931 | 7.72 | 5800 | 0.7549 | 0.8645 |
| 0.6771 | 7.86 | 5900 | 0.7138 | 0.8442 |
| 0.6735 | 7.99 | 6000 | 0.6947 | 0.8493 |
| 0.6427 | 8.12 | 6100 | 0.6997 | 0.8475 |
| 0.6988 | 8.26 | 6200 | 0.6814 | 0.8098 |
| 0.6726 | 8.39 | 6300 | 0.6656 | 0.8259 |
| 0.6247 | 8.52 | 6400 | 0.6438 | 0.8314 |
| 0.5101 | 8.66 | 6500 | 0.6323 | 0.8446 |
| 0.5325 | 8.79 | 6600 | 0.6305 | 0.8413 |
| 0.5918 | 8.92 | 6700 | 0.6353 | 0.8076 |
| 0.617 | 9.05 | 6800 | 0.6544 | 0.8118 |
| 0.4861 | 9.19 | 6900 | 0.6174 | 0.8429 |
| 0.6396 | 9.32 | 7000 | 0.6140 | 0.8117 |
| 0.436 | 9.45 | 7100 | 0.6148 | 0.7887 |
| 0.6141 | 9.59 | 7200 | 0.6133 | 0.8007 |
| 0.5781 | 9.72 | 7300 | 0.6135 | 0.8211 |
| 0.52 | 9.85 | 7400 | 0.6155 | 0.8042 |
| 0.6681 | 9.99 | 7500 | 0.6074 | 0.7843 |
| 0.5004 | 10.12 | 7600 | 0.5950 | 0.8035 |
| 0.4993 | 10.25 | 7700 | 0.5888 | 0.7710 |
| 0.4768 | 10.39 | 7800 | 0.5922 | 0.7633 |
| 0.4535 | 10.52 | 7900 | 0.5906 | 0.8030 |
| 0.517 | 10.65 | 8000 | 0.5875 | 0.7823 |
| 0.5894 | 10.79 | 8100 | 0.5882 | 0.7932 |
| 0.6005 | 10.92 | 8200 | 0.5798 | 0.7922 |
| 0.4284 | 11.05 | 8300 | 0.5775 | 0.7701 |
| 0.5163 | 11.19 | 8400 | 0.5715 | 0.7592 |
| 0.4701 | 11.32 | 8500 | 0.5955 | 0.7485 |
| 0.5152 | 11.45 | 8600 | 0.6041 | 0.6914 |
| 0.4442 | 11.58 | 8700 | 0.5614 | 0.7439 |
| 0.4451 | 11.72 | 8800 | 0.5619 | 0.7033 |
| 0.4433 | 11.85 | 8900 | 0.5562 | 0.7246 |
| 0.4799 | 11.98 | 9000 | 0.5834 | 0.7040 |
| 0.4832 | 12.12 | 9100 | 0.5902 | 0.7349 |
| 0.523 | 12.25 | 9200 | 0.5562 | 0.7326 |
| 0.4419 | 12.38 | 9300 | 0.5472 | 0.7326 |
| 0.437 | 12.52 | 9400 | 0.5466 | 0.7100 |
| 0.4797 | 12.65 | 9500 | 0.5470 | 0.6698 |
| 0.3971 | 12.78 | 9600 | 0.5437 | 0.6835 |
| 0.5254 | 12.92 | 9700 | 0.5385 | 0.6747 |
| 0.5046 | 13.05 | 9800 | 0.5330 | 0.6554 |
| 0.4692 | 13.18 | 9900 | 0.5305 | 0.6527 |
| 0.4305 | 13.32 | 10000 | 0.5292 | 0.6314 |
| 0.6132 | 13.45 | 10100 | 0.5405 | 0.6028 |
| 0.4741 | 13.58 | 10200 | 0.5311 | 0.6207 |
| 0.398 | 13.72 | 10300 | 0.5320 | 0.6261 |
| 0.458 | 13.85 | 10400 | 0.5240 | 0.6242 |
| 0.4154 | 13.98 | 10500 | 0.5262 | 0.6215 |
| 0.3702 | 14.11 | 10600 | 0.5206 | 0.6136 |
| 0.427 | 14.25 | 10700 | 0.5231 | 0.6289 |
| 0.4307 | 14.38 | 10800 | 0.5210 | 0.5908 |
| 0.4738 | 14.51 | 10900 | 0.5211 | 0.5826 |
| 0.5522 | 14.65 | 11000 | 0.5193 | 0.5886 |
| 0.4717 | 14.78 | 11100 | 0.5194 | 0.5907 |
| 0.4819 | 14.91 | 11200 | 0.5178 | 0.5870 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.9.1+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
pitiwat/argument_wangchanberta2 | 52749037b6c9e51fc3600b316db406541a0335fc | 2022-04-17T02:59:58.000Z | [
"pytorch",
"camembert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | pitiwat | null | pitiwat/argument_wangchanberta2 | 1 | null | transformers | 31,239 | ---
widget:
- text: "ฉัน ชอบ หมา เพราะ มัน น่ารัก"
--- |
florentiino/DialoGPT-small-rick | a60d607dedf5c98de60c11afacf89a779299ef5e | 2022-04-14T15:24:54.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | florentiino | null | florentiino/DialoGPT-small-rick | 1 | null | transformers | 31,240 | ---
tags:
- conversational
---
# My Awesome Model that talks like Rick but thinks that your name is Morty
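## How to use
A minimal chat sketch using the conversational pipeline; the opening message is only an example, and `Conversation` is the helper class shipped with transformers.
```python
from transformers import pipeline, Conversation

# Minimal sketch; the user message is illustrative.
chatbot = pipeline("conversational", model="florentiino/DialoGPT-small-rick")
conversation = chatbot(Conversation("Morty, where are we going this time?"))
print(conversation.generated_responses[-1])
```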
|
NeuralNotwork/gpt2-baseline | 73da02758a1ecb69c5d957450f0b9f38288cc912 | 2022-04-14T14:42:10.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | NeuralNotwork | null | NeuralNotwork/gpt2-baseline | 1 | null | transformers | 31,241 | Entry not found |
lilitket/20220414-150333 | 214b1694655aa34b87414c8fd8cff1e6421e6a45 | 2022-04-14T15:16:43.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/20220414-150333 | 1 | null | transformers | 31,242 | Entry not found |
Chikashi/t5-small-finetuned-cnndm1-wikihow0 | c1f9b57a4c2ba75074597059b9a354a3ff63d4ab | 2022-04-14T23:28:23.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Chikashi | null | Chikashi/t5-small-finetuned-cnndm1-wikihow0 | 1 | null | transformers | 31,243 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: t5-small-finetuned-cnndm1-wikihow0
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 24.6116
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnndm1-wikihow0
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6436
- Rouge1: 24.6116
- Rouge2: 11.8788
- Rougel: 20.3665
- Rougelsum: 23.2474
- Gen Len: 18.9998
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.8208 | 1.0 | 71779 | 1.6436 | 24.6116 | 11.8788 | 20.3665 | 23.2474 | 18.9998 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
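## How to use
A minimal summarization sketch; the input passage and length limits below are illustrative and not tied to the CNN/DailyMail evaluation setup.
```python
from transformers import pipeline

# Minimal sketch; the article text and length limits are illustrative.
summarizer = pipeline("summarization", model="Chikashi/t5-small-finetuned-cnndm1-wikihow0")
article = (
    "The city council approved a new transit plan on Monday, adding two bus "
    "lines and extending late-night service to the airport starting in June."
)
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```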
|
mizoru/wav2vec2-large-xls-r-300m-chuvash-colab | 5db8bed8c12567bf401540983dc86868c3a680d1 | 2022-06-09T00:19:07.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | mizoru | null | mizoru/wav2vec2-large-xls-r-300m-chuvash-colab | 1 | null | transformers | 31,244 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-chuvash-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-chuvash-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6998
- eval_wer: 0.7356
- eval_runtime: 233.6193
- eval_samples_per_second: 3.373
- eval_steps_per_second: 0.424
- epoch: 9.75
- step: 400
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
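Expressed with `transformers.TrainingArguments`, the settings above would look roughly like the sketch below; `output_dir` and any arguments not listed in the card are assumptions.
```python
from transformers import TrainingArguments

# Rough translation of the hyperparameters listed above; output_dir is an assumption.
training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-chuvash-colab",
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # effective train batch size of 32
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=30,
    fp16=True,                      # native AMP mixed precision
    seed=42,
)
```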
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
milyiyo/stog-t5-small | eb42b873d4f47e465d845783791f5c486293ec36 | 2022-04-14T20:32:23.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:web_nlg",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | milyiyo | null | milyiyo/stog-t5-small | 1 | null | transformers | 31,245 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- web_nlg
model-index:
- name: stog-t5-small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# stog-t5-small
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the web_nlg dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1414
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.12 | 100 | 0.4625 |
| No log | 0.24 | 200 | 0.3056 |
| No log | 0.36 | 300 | 0.2393 |
| No log | 0.48 | 400 | 0.1999 |
| No log | 0.61 | 500 | 0.1740 |
| No log | 0.73 | 600 | 0.1562 |
| No log | 0.85 | 700 | 0.1467 |
| No log | 0.97 | 800 | 0.1418 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
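## How to use
The card does not say which direction the web_nlg mapping goes (text to graph or graph to text) or what input format the model expects, so the call below only shows the generic `text2text-generation` pipeline with a linearized-triple string as a purely illustrative input.
```python
from transformers import pipeline

# Generic sketch only: the expected input format is not documented in this card.
gen = pipeline("text2text-generation", model="milyiyo/stog-t5-small")
print(gen("Aarhus_Airport | cityServed | Aarhus, Denmark")[0]["generated_text"])
```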
|
omicron1100/dummy-model | 12cedd9be492b4f401bad13d6f8ea899cbcd010c | 2022-04-14T22:50:01.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | omicron1100 | null | omicron1100/dummy-model | 1 | null | transformers | 31,246 | Entry not found |
repro-rights-amicus-briefs/bert-base-uncased-2-finetuned-RRamicus | bfce8bfef84ffc03cf900038aa6691c32dcb64a3 | 2022-04-15T02:04:28.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | repro-rights-amicus-briefs | null | repro-rights-amicus-briefs/bert-base-uncased-2-finetuned-RRamicus | 1 | null | transformers | 31,247 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-2-finetuned-RRamicus
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-2-finetuned-RRamicus
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4784
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 928
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.0341 | 1.0 | 1113 | 1.7515 |
| 1.7881 | 2.0 | 2226 | 1.6616 |
| 1.697 | 3.0 | 3339 | 1.6061 |
| 1.6328 | 4.0 | 4452 | 1.5662 |
| 1.5919 | 5.0 | 5565 | 1.5362 |
| 1.5602 | 6.0 | 6678 | 1.5193 |
| 1.5221 | 7.0 | 7791 | 1.4984 |
| 1.5135 | 8.0 | 8904 | 1.4898 |
| 1.4917 | 9.0 | 10017 | 1.4755 |
| 1.4859 | 10.0 | 11130 | 1.4671 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
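## How to use
A minimal fill-mask sketch; the legal-sounding example sentence is made up, and `[MASK]` is the mask token used by bert-base-uncased checkpoints.
```python
from transformers import pipeline

# Minimal sketch; the sentence is illustrative, not drawn from the training briefs.
fill = pipeline("fill-mask", model="repro-rights-amicus-briefs/bert-base-uncased-2-finetuned-RRamicus")
for prediction in fill("The court held that the statute was [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```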
|
Chikashi/t5-small-finetuned-cnndm1-wikihow1 | eb69a11b38b19333bb5dcb8449526f2e5bf9c094 | 2022-04-15T03:46:59.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wikihow",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Chikashi | null | Chikashi/t5-small-finetuned-cnndm1-wikihow1 | 1 | null | transformers | 31,248 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikihow
metrics:
- rouge
model-index:
- name: t5-small-finetuned-cnndm1-wikihow1
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wikihow
type: wikihow
args: all
metrics:
- name: Rouge1
type: rouge
value: 26.6881
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnndm1-wikihow1
This model is a fine-tuned version of [Chikashi/t5-small-finetuned-cnndm1-wikihow0](https://huggingface.co/Chikashi/t5-small-finetuned-cnndm1-wikihow0) on the wikihow dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3727
- Rouge1: 26.6881
- Rouge2: 9.9589
- Rougel: 22.6828
- Rougelsum: 26.0203
- Gen Len: 18.4813
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.56 | 1.0 | 39313 | 2.3727 | 26.6881 | 9.9589 | 22.6828 | 26.0203 | 18.4813 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
mikeluck/gpt2-wikitext2 | 7d5137191af421953e55bcc9ed39aa9c319cc649 | 2022-04-15T19:03:07.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | mikeluck | null | mikeluck/gpt2-wikitext2 | 1 | null | transformers | 31,249 | Entry not found |
Kuray107/3-datasets-100h-supervised-aug | ecb0dce6869d9e5f3b3c33683b4f4dfc92919495 | 2022-04-18T03:27:36.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | Kuray107 | null | Kuray107/3-datasets-100h-supervised-aug | 1 | null | transformers | 31,250 | Entry not found |
Tuffy/DialoGPT-small-harrypotter | b4a807c3f9ee9fea674ca74eac7788c5925cf5c7 | 2022-04-15T04:43:47.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Tuffy | null | Tuffy/DialoGPT-small-harrypotter | 1 | null | transformers | 31,251 | ---
tags:
- conversational
---
# small-harrypotter |
PSW/bart-last-ut-pred | b239245076e47066e8327aa8ef9f0a5151dcc9fc | 2022-04-15T06:42:30.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/bart-last-ut-pred | 1 | null | transformers | 31,252 | Entry not found |
masakhane/afrimbart_ewe_fr_news | 13946992010e660836ee0add6aaffb419d011c65 | 2022-04-15T09:01:33.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/afrimbart_ewe_fr_news | 1 | null | transformers | 31,253 | ---
license: afl-3.0
---
|
masakhane/m2m100_418M_fr_ewe_news | 70bf5fa2bbb46f167956d204dd0f011f3af54dd2 | 2022-04-15T13:28:03.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_fr_ewe_news | 1 | null | transformers | 31,254 | ---
license: afl-3.0
---
|
masakhane/m2m100_418M_fr_ewe_rel_news | bb96aaef587310ae1b408275351d832fd53fe5aa | 2022-04-15T13:28:06.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_fr_ewe_rel_news | 1 | null | transformers | 31,255 | ---
license: afl-3.0
---
|
masakhane/m2m100_418M_fr_ewe_rel_ft | ee7340a2e2c38c1db7590e66d1e57a45ec1fbb5c | 2022-04-15T16:27:54.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_fr_ewe_rel_ft | 1 | null | transformers | 31,256 | ---
license: afl-3.0
---
|
masakhane/m2m100_418M_fr_ewe_rel | 2aa7ea0de330ba6bb542231f8c8f6b1c992bfda1 | 2022-04-15T17:39:07.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_fr_ewe_rel | 1 | null | transformers | 31,257 | ---
license: afl-3.0
---
|
creynier/wav2vec2-base-swbd-turn-eos-long_utt_removed | 8fa7794bea7d12158e06324219e5f1bb1c439b2c | 2022-04-16T23:34:50.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | creynier | null | creynier/wav2vec2-base-swbd-turn-eos-long_utt_removed | 1 | null | transformers | 31,258 | Entry not found |
LenaSchmidt/no_need_to_name_this | 1fc66f121a8c8ea29d32ae059ee1eb538d0c2c13 | 2022-04-15T13:16:42.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | LenaSchmidt | null | LenaSchmidt/no_need_to_name_this | 1 | null | transformers | 31,259 | ---
tags:
- generated_from_trainer
model-index:
- name: no_need_to_name_this
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# no_need_to_name_this
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
darkie01213/tunixx.20 | 6adbff5246ba274d640af187e203f1a9a14e87ab | 2022-04-15T13:29:46.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | darkie01213 | null | darkie01213/tunixx.20 | 1 | null | transformers | 31,260 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: tunixx.20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tunixx.20
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6492
- Bleu: 62.3581
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 420
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log | 0.6 | 100 | 2.1821 | 14.2209 |
| No log | 1.19 | 200 | 1.9019 | 17.6606 |
| No log | 1.79 | 300 | 1.6948 | 19.7423 |
| No log | 2.38 | 400 | 1.5505 | 23.6162 |
| 1.9238 | 2.98 | 500 | 1.4374 | 27.3088 |
| 1.9238 | 3.57 | 600 | 1.3460 | 31.0185 |
| 1.9238 | 4.17 | 700 | 1.2517 | 33.3477 |
| 1.9238 | 4.76 | 800 | 1.1763 | 33.9847 |
| 1.9238 | 5.36 | 900 | 1.1152 | 34.1613 |
| 1.3121 | 5.95 | 1000 | 1.0539 | 35.4759 |
| 1.3121 | 6.55 | 1100 | 1.0081 | 36.6102 |
| 1.3121 | 7.14 | 1200 | 0.9568 | 37.5106 |
| 1.3121 | 7.74 | 1300 | 0.9156 | 38.0362 |
| 1.3121 | 8.33 | 1400 | 0.8857 | 38.4678 |
| 1.0132 | 8.93 | 1500 | 0.8527 | 38.8540 |
| 1.0132 | 9.52 | 1600 | 0.8216 | 39.4236 |
| 1.0132 | 10.12 | 1700 | 0.7954 | 39.3181 |
| 1.0132 | 10.71 | 1800 | 0.7741 | 39.7601 |
| 1.0132 | 11.31 | 1900 | 0.7551 | 40.0916 |
| 0.8567 | 11.9 | 2000 | 0.7386 | 41.1072 |
| 0.8567 | 12.5 | 2100 | 0.7231 | 41.3821 |
| 0.8567 | 13.1 | 2200 | 0.7103 | 41.8838 |
| 0.8567 | 13.69 | 2300 | 0.6982 | 42.0218 |
| 0.8567 | 14.29 | 2400 | 0.6870 | 41.7599 |
| 0.7764 | 14.88 | 2500 | 0.6786 | 42.3989 |
| 0.7764 | 15.48 | 2600 | 0.6709 | 42.7624 |
| 0.7764 | 16.07 | 2700 | 0.6634 | 42.9174 |
| 0.7764 | 16.67 | 2800 | 0.6567 | 42.9174 |
| 0.7764 | 17.26 | 2900 | 0.6525 | 43.4440 |
| 0.7282 | 17.86 | 3000 | 0.6492 | 43.4440 |
| 0.7282 | 18.45 | 3100 | 0.6468 | 43.6901 |
| 0.7282 | 19.05 | 3200 | 0.6445 | 43.5582 |
| 0.7282 | 19.64 | 3300 | 0.6435 | 43.5582 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu102
- Datasets 2.1.0
- Tokenizers 0.12.1
|
theResearchNinja/Cybonto-distilbert-base-uncased-finetuned-ner-FewNerd | 3c3defa86689860f2fda5abed42d582494d1a7b5 | 2022-04-15T18:49:40.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:few_nerd",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | theResearchNinja | null | theResearchNinja/Cybonto-distilbert-base-uncased-finetuned-ner-FewNerd | 1 | null | transformers | 31,261 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- few_nerd
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Cybonto-distilbert-base-uncased-finetuned-ner-FewNerd
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: few_nerd
type: few_nerd
args: supervised
metrics:
- name: Precision
type: precision
value: 0.7422259388187705
- name: Recall
type: recall
value: 0.7830368683449253
- name: F1
type: f1
value: 0.7620854216169805
- name: Accuracy
type: accuracy
value: 0.9386106950200795
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Cybonto-distilbert-base-uncased-finetuned-ner-FewNerd
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the few_nerd dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2091
- Precision: 0.7422
- Recall: 0.7830
- F1: 0.7621
- Accuracy: 0.9386
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1964 | 1.0 | 4118 | 0.1946 | 0.7302 | 0.7761 | 0.7525 | 0.9366 |
| 0.1685 | 2.0 | 8236 | 0.1907 | 0.7414 | 0.7776 | 0.7591 | 0.9384 |
| 0.145 | 3.0 | 12354 | 0.1967 | 0.7454 | 0.7816 | 0.7631 | 0.9388 |
| 0.1263 | 4.0 | 16472 | 0.2021 | 0.7402 | 0.7845 | 0.7617 | 0.9384 |
| 0.1114 | 5.0 | 20590 | 0.2091 | 0.7422 | 0.7830 | 0.7621 | 0.9386 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
enelpol/evalatin2022-pos-open | 5974883fd945b61ea8028e54b89b6011e15f5fb3 | 2022-04-15T21:01:46.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | enelpol | null | enelpol/evalatin2022-pos-open | 1 | null | transformers | 31,262 | Entry not found |
lilitket/20220415-210530 | 202d2a5b4e65ddb45abf0f2d8cdf80f10f72337b | 2022-04-18T15:33:06.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | lilitket | null | lilitket/20220415-210530 | 1 | null | transformers | 31,263 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: 20220415-210530
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20220415-210530
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-2b](https://huggingface.co/facebook/wav2vec2-xls-r-2b) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6544
- Wer: 0.3881
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 1200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:------:|:---------------:|:------:|
| 6.1495 | 2.27 | 200 | 2.4098 | 1.0 |
| 0.4347 | 4.54 | 400 | 1.4211 | 0.9914 |
| 0.2295 | 6.82 | 600 | 1.0229 | 0.9349 |
| 0.1349 | 9.09 | 800 | 1.0063 | 0.9228 |
| 0.1001 | 11.36 | 1000 | 1.0333 | 0.9197 |
| 0.0847 | 13.63 | 1200 | 0.9021 | 0.8725 |
| 0.0697 | 15.91 | 1400 | 0.9117 | 0.8779 |
| 0.0634 | 18.18 | 1600 | 0.9550 | 0.8725 |
| 0.0607 | 20.45 | 1800 | 0.9063 | 0.8303 |
| 0.0551 | 22.73 | 2000 | 0.8163 | 0.7956 |
| 0.0536 | 25.0 | 2200 | 0.7385 | 0.7235 |
| 0.0511 | 27.27 | 2400 | 0.7917 | 0.7215 |
| 0.0449 | 29.54 | 2600 | 0.7508 | 0.6938 |
| 0.0417 | 31.82 | 2800 | 0.6892 | 0.6775 |
| 0.0415 | 34.09 | 3000 | 0.7029 | 0.6790 |
| 0.0384 | 36.36 | 3200 | 0.6839 | 0.6895 |
| 0.0392 | 38.63 | 3400 | 0.7067 | 0.6872 |
| 0.0358 | 40.91 | 3600 | 0.7310 | 0.6763 |
| 0.0337 | 43.18 | 3800 | 0.7139 | 0.6548 |
| 0.0362 | 45.45 | 4000 | 0.6975 | 0.6427 |
| 0.0311 | 47.73 | 4200 | 0.7054 | 0.6412 |
| 0.0327 | 50.0 | 4400 | 0.6530 | 0.6151 |
| 0.0286 | 52.27 | 4600 | 0.6565 | 0.6076 |
| 0.0304 | 54.54 | 4800 | 0.6931 | 0.6283 |
| 0.0285 | 56.82 | 5000 | 0.6966 | 0.6108 |
| 0.0279 | 59.09 | 5200 | 0.6473 | 0.5854 |
| 0.0276 | 61.36 | 5400 | 0.6497 | 0.5920 |
| 0.0238 | 63.63 | 5600 | 0.6283 | 0.5846 |
| 0.0237 | 65.91 | 5800 | 0.6871 | 0.5885 |
| 0.0221 | 68.18 | 6000 | 0.6518 | 0.5593 |
| 0.0221 | 70.45 | 6200 | 0.6676 | 0.5601 |
| 0.0215 | 72.73 | 6400 | 0.6299 | 0.5550 |
| 0.022 | 75.0 | 6600 | 0.6719 | 0.5636 |
| 0.0198 | 77.27 | 6800 | 0.6082 | 0.5569 |
| 0.0222 | 79.54 | 7000 | 0.6156 | 0.5589 |
| 0.0172 | 81.82 | 7200 | 0.6414 | 0.5636 |
| 0.0188 | 84.09 | 7400 | 0.5874 | 0.5347 |
| 0.0202 | 86.36 | 7600 | 0.6320 | 0.5421 |
| 0.0165 | 88.63 | 7800 | 0.6345 | 0.5304 |
| 0.0164 | 90.91 | 8000 | 0.6243 | 0.5289 |
| 0.0167 | 93.18 | 8200 | 0.6237 | 0.5285 |
| 0.015 | 95.45 | 8400 | 0.5937 | 0.5203 |
| 0.0169 | 97.73 | 8600 | 0.6171 | 0.5343 |
| 0.0147 | 100.0 | 8800 | 0.6857 | 0.5476 |
| 0.0164 | 102.27 | 9000 | 0.6099 | 0.5160 |
| 0.0152 | 104.54 | 9200 | 0.6319 | 0.5285 |
| 0.0149 | 106.82 | 9400 | 0.6133 | 0.5296 |
| 0.0155 | 109.09 | 9600 | 0.6237 | 0.5285 |
| 0.0149 | 111.36 | 9800 | 0.6127 | 0.5012 |
| 0.0142 | 113.63 | 10000 | 0.6119 | 0.4836 |
| 0.013 | 115.91 | 10200 | 0.5974 | 0.4746 |
| 0.012 | 118.18 | 10400 | 0.6296 | 0.5016 |
| 0.0137 | 120.45 | 10600 | 0.5990 | 0.5023 |
| 0.0146 | 122.73 | 10800 | 0.5784 | 0.4875 |
| 0.0117 | 125.0 | 11000 | 0.5436 | 0.4766 |
| 0.0133 | 127.27 | 11200 | 0.5890 | 0.5020 |
| 0.0133 | 129.54 | 11400 | 0.6028 | 0.4895 |
| 0.0119 | 131.82 | 11600 | 0.5483 | 0.4840 |
| 0.0133 | 134.09 | 11800 | 0.5638 | 0.4934 |
| 0.0108 | 136.36 | 12000 | 0.5750 | 0.4758 |
| 0.0098 | 138.63 | 12200 | 0.5978 | 0.4891 |
| 0.012 | 140.91 | 12400 | 0.5524 | 0.4805 |
| 0.01 | 143.18 | 12600 | 0.5731 | 0.4895 |
| 0.0125 | 145.45 | 12800 | 0.5583 | 0.4579 |
| 0.0102 | 147.73 | 13000 | 0.5806 | 0.5035 |
| 0.01 | 150.0 | 13200 | 0.5721 | 0.4711 |
| 0.0113 | 152.27 | 13400 | 0.5351 | 0.4602 |
| 0.011 | 154.54 | 13600 | 0.5472 | 0.4551 |
| 0.0078 | 156.82 | 13800 | 0.6011 | 0.4610 |
| 0.0105 | 159.09 | 14000 | 0.5702 | 0.4672 |
| 0.0081 | 161.36 | 14200 | 0.5643 | 0.4454 |
| 0.0088 | 163.63 | 14400 | 0.5084 | 0.4536 |
| 0.0094 | 165.91 | 14600 | 0.5320 | 0.4680 |
| 0.0083 | 168.18 | 14800 | 0.5175 | 0.4423 |
| 0.0095 | 170.45 | 15000 | 0.5213 | 0.4583 |
| 0.0097 | 172.73 | 15200 | 0.5242 | 0.4590 |
| 0.0092 | 175.0 | 15400 | 0.5680 | 0.4587 |
| 0.0081 | 177.27 | 15600 | 0.5668 | 0.4579 |
| 0.0075 | 179.54 | 15800 | 0.5602 | 0.4489 |
| 0.0094 | 181.82 | 16000 | 0.5540 | 0.4485 |
| 0.0083 | 184.09 | 16200 | 0.5367 | 0.4278 |
| 0.0084 | 186.36 | 16400 | 0.5376 | 0.4583 |
| 0.0093 | 188.63 | 16600 | 0.5599 | 0.4310 |
| 0.0085 | 190.91 | 16800 | 0.5356 | 0.4317 |
| 0.0066 | 193.18 | 17000 | 0.5517 | 0.4419 |
| 0.0074 | 195.45 | 17200 | 0.5401 | 0.4329 |
| 0.0094 | 197.73 | 17400 | 0.5067 | 0.4415 |
| 0.0078 | 200.0 | 17600 | 0.5410 | 0.4466 |
| 0.0085 | 202.27 | 17800 | 0.5157 | 0.4321 |
| 0.0081 | 204.54 | 18000 | 0.5390 | 0.4255 |
| 0.0068 | 206.82 | 18200 | 0.5566 | 0.4415 |
| 0.0069 | 209.09 | 18400 | 0.5693 | 0.4341 |
| 0.0089 | 211.36 | 18600 | 0.5588 | 0.4438 |
| 0.0086 | 213.63 | 18800 | 0.5656 | 0.4470 |
| 0.008 | 215.91 | 19000 | 0.5712 | 0.4438 |
| 0.0083 | 218.18 | 19200 | 0.5627 | 0.4423 |
| 0.0078 | 220.45 | 19400 | 0.5905 | 0.4298 |
| 0.0059 | 222.73 | 19600 | 0.5746 | 0.4228 |
| 0.0072 | 225.0 | 19800 | 0.5362 | 0.4275 |
| 0.006 | 227.27 | 20000 | 0.5909 | 0.4220 |
| 0.0074 | 229.54 | 20200 | 0.5863 | 0.4224 |
| 0.0079 | 231.82 | 20400 | 0.5366 | 0.4306 |
| 0.0066 | 234.09 | 20600 | 0.5128 | 0.4302 |
| 0.0068 | 236.36 | 20800 | 0.5436 | 0.4228 |
| 0.0073 | 238.63 | 21000 | 0.5731 | 0.4325 |
| 0.0081 | 240.91 | 21200 | 0.5189 | 0.4177 |
| 0.0061 | 243.18 | 21400 | 0.5593 | 0.4236 |
| 0.0061 | 245.45 | 21600 | 0.5553 | 0.4267 |
| 0.0044 | 247.73 | 21800 | 0.5763 | 0.4286 |
| 0.0064 | 250.0 | 22000 | 0.5360 | 0.4321 |
| 0.006 | 252.27 | 22200 | 0.5577 | 0.4372 |
| 0.0052 | 254.54 | 22400 | 0.5387 | 0.4122 |
| 0.0054 | 256.82 | 22600 | 0.5117 | 0.4239 |
| 0.0057 | 259.09 | 22800 | 0.5498 | 0.4232 |
| 0.0069 | 261.36 | 23000 | 0.5263 | 0.4353 |
| 0.005 | 263.63 | 23200 | 0.5147 | 0.4177 |
| 0.0058 | 265.91 | 23400 | 0.5273 | 0.4173 |
| 0.006 | 268.18 | 23600 | 0.5879 | 0.4380 |
| 0.0059 | 270.45 | 23800 | 0.5377 | 0.4349 |
| 0.0055 | 272.73 | 24000 | 0.6061 | 0.4364 |
| 0.0058 | 275.0 | 24200 | 0.5977 | 0.4353 |
| 0.0051 | 277.27 | 24400 | 0.5847 | 0.4208 |
| 0.0046 | 279.54 | 24600 | 0.5728 | 0.4333 |
| 0.006 | 281.82 | 24800 | 0.5392 | 0.4204 |
| 0.0074 | 284.09 | 25000 | 0.5618 | 0.4232 |
| 0.0058 | 286.36 | 25200 | 0.5449 | 0.4197 |
| 0.0057 | 288.63 | 25400 | 0.5635 | 0.4169 |
| 0.0054 | 290.91 | 25600 | 0.5313 | 0.4173 |
| 0.0044 | 293.18 | 25800 | 0.5544 | 0.4306 |
| 0.0039 | 295.45 | 26000 | 0.5392 | 0.4247 |
| 0.0054 | 297.73 | 26200 | 0.5395 | 0.4271 |
| 0.0044 | 300.0 | 26400 | 0.5489 | 0.4228 |
| 0.0042 | 302.27 | 26600 | 0.5414 | 0.4173 |
| 0.0051 | 304.54 | 26800 | 0.5198 | 0.4193 |
| 0.005 | 306.82 | 27000 | 0.5297 | 0.4146 |
| 0.0051 | 309.09 | 27200 | 0.5414 | 0.4212 |
| 0.0057 | 311.36 | 27400 | 0.5204 | 0.4228 |
| 0.0049 | 313.63 | 27600 | 0.5806 | 0.4239 |
| 0.0036 | 315.91 | 27800 | 0.5771 | 0.4173 |
| 0.0045 | 318.18 | 28000 | 0.5517 | 0.4239 |
| 0.0051 | 320.45 | 28200 | 0.5498 | 0.4173 |
| 0.0043 | 322.73 | 28400 | 0.5791 | 0.4181 |
| 0.0044 | 325.0 | 28600 | 0.6030 | 0.4200 |
| 0.0067 | 327.27 | 28800 | 0.5799 | 0.4208 |
| 0.0041 | 329.54 | 29000 | 0.5871 | 0.4134 |
| 0.0048 | 331.82 | 29200 | 0.5471 | 0.4158 |
| 0.0031 | 334.09 | 29400 | 0.5977 | 0.4220 |
| 0.0042 | 336.36 | 29600 | 0.5813 | 0.4181 |
| 0.0045 | 338.63 | 29800 | 0.6167 | 0.4306 |
| 0.0044 | 340.91 | 30000 | 0.5661 | 0.4173 |
| 0.0029 | 343.18 | 30200 | 0.5680 | 0.4158 |
| 0.0037 | 345.45 | 30400 | 0.5747 | 0.4204 |
| 0.005 | 347.73 | 30600 | 0.5883 | 0.4349 |
| 0.0037 | 350.0 | 30800 | 0.6187 | 0.4189 |
| 0.0044 | 352.27 | 31000 | 0.5834 | 0.4431 |
| 0.0047 | 354.54 | 31200 | 0.5567 | 0.4247 |
| 0.0039 | 356.82 | 31400 | 0.5900 | 0.4314 |
| 0.0044 | 359.09 | 31600 | 0.5879 | 0.4216 |
| 0.0042 | 361.36 | 31800 | 0.5639 | 0.4220 |
| 0.0046 | 363.63 | 32000 | 0.5292 | 0.4185 |
| 0.0043 | 365.91 | 32200 | 0.5640 | 0.4353 |
| 0.0033 | 368.18 | 32400 | 0.5468 | 0.4208 |
| 0.002 | 370.45 | 32600 | 0.5836 | 0.4220 |
| 0.0043 | 372.73 | 32800 | 0.5692 | 0.4142 |
| 0.0038 | 375.0 | 33000 | 0.5739 | 0.4177 |
| 0.0039 | 377.27 | 33200 | 0.5824 | 0.4103 |
| 0.0028 | 379.54 | 33400 | 0.6069 | 0.4111 |
| 0.0038 | 381.82 | 33600 | 0.5868 | 0.4185 |
| 0.0041 | 384.09 | 33800 | 0.5169 | 0.4126 |
| 0.0037 | 386.36 | 34000 | 0.5395 | 0.4275 |
| 0.0063 | 388.63 | 34200 | 0.5293 | 0.4294 |
| 0.0042 | 390.91 | 34400 | 0.5472 | 0.4165 |
| 0.0039 | 393.18 | 34600 | 0.5391 | 0.4091 |
| 0.0036 | 395.45 | 34800 | 0.5360 | 0.4239 |
| 0.0036 | 397.73 | 35000 | 0.5511 | 0.4177 |
| 0.0019 | 400.0 | 35200 | 0.5775 | 0.4115 |
| 0.0038 | 402.27 | 35400 | 0.5376 | 0.4087 |
| 0.0035 | 404.54 | 35600 | 0.5755 | 0.4130 |
| 0.0042 | 406.82 | 35800 | 0.5443 | 0.4087 |
| 0.0036 | 409.09 | 36000 | 0.6091 | 0.4200 |
| 0.004 | 411.36 | 36200 | 0.5817 | 0.4247 |
| 0.0039 | 413.63 | 36400 | 0.5779 | 0.4255 |
| 0.003 | 415.91 | 36600 | 0.5804 | 0.4224 |
| 0.0031 | 418.18 | 36800 | 0.5467 | 0.4138 |
| 0.0044 | 420.45 | 37000 | 0.5628 | 0.4212 |
| 0.0036 | 422.73 | 37200 | 0.5613 | 0.4267 |
| 0.0035 | 425.0 | 37400 | 0.5537 | 0.4224 |
| 0.0028 | 427.27 | 37600 | 0.6016 | 0.4161 |
| 0.004 | 429.54 | 37800 | 0.5711 | 0.4216 |
| 0.0041 | 431.82 | 38000 | 0.5510 | 0.4165 |
| 0.0035 | 434.09 | 38200 | 0.5487 | 0.4181 |
| 0.0034 | 436.36 | 38400 | 0.5392 | 0.4056 |
| 0.003 | 438.63 | 38600 | 0.5255 | 0.4083 |
| 0.0035 | 440.91 | 38800 | 0.5511 | 0.4138 |
| 0.0031 | 443.18 | 39000 | 0.5464 | 0.4146 |
| 0.0032 | 445.45 | 39200 | 0.5514 | 0.4134 |
| 0.0017 | 447.73 | 39400 | 0.5664 | 0.4064 |
| 0.0024 | 450.0 | 39600 | 0.5966 | 0.4220 |
| 0.0021 | 452.27 | 39800 | 0.5780 | 0.4122 |
| 0.0035 | 454.54 | 40000 | 0.5612 | 0.4341 |
| 0.002 | 456.82 | 40200 | 0.5954 | 0.4247 |
| 0.0018 | 459.09 | 40400 | 0.6006 | 0.4251 |
| 0.0026 | 461.36 | 40600 | 0.6119 | 0.4232 |
| 0.0023 | 463.63 | 40800 | 0.6051 | 0.4306 |
| 0.003 | 465.91 | 41000 | 0.5872 | 0.4267 |
| 0.0036 | 468.18 | 41200 | 0.5602 | 0.4095 |
| 0.0029 | 470.45 | 41400 | 0.5877 | 0.4189 |
| 0.0034 | 472.73 | 41600 | 0.5918 | 0.4337 |
| 0.0025 | 475.0 | 41800 | 0.6101 | 0.4337 |
| 0.0023 | 477.27 | 42000 | 0.5936 | 0.4239 |
| 0.0017 | 479.54 | 42200 | 0.6257 | 0.4275 |
| 0.0029 | 481.82 | 42400 | 0.6265 | 0.4251 |
| 0.0035 | 484.09 | 42600 | 0.6035 | 0.4271 |
| 0.0036 | 486.36 | 42800 | 0.5954 | 0.4243 |
| 0.0028 | 488.63 | 43000 | 0.5810 | 0.4259 |
| 0.0027 | 490.91 | 43200 | 0.6093 | 0.4228 |
| 0.0025 | 493.18 | 43400 | 0.6241 | 0.4302 |
| 0.0019 | 495.45 | 43600 | 0.6143 | 0.4290 |
| 0.0025 | 497.73 | 43800 | 0.5729 | 0.4189 |
| 0.0028 | 500.0 | 44000 | 0.5725 | 0.4165 |
| 0.0023 | 502.27 | 44200 | 0.5888 | 0.4263 |
| 0.0034 | 504.54 | 44400 | 0.5771 | 0.4337 |
| 0.0022 | 506.82 | 44600 | 0.5888 | 0.4216 |
| 0.0028 | 509.09 | 44800 | 0.5598 | 0.4181 |
| 0.0024 | 511.36 | 45000 | 0.6114 | 0.4392 |
| 0.0037 | 513.63 | 45200 | 0.5855 | 0.4236 |
| 0.0018 | 515.91 | 45400 | 0.5885 | 0.4232 |
| 0.0025 | 518.18 | 45600 | 0.5845 | 0.4255 |
| 0.0029 | 520.45 | 45800 | 0.5862 | 0.4380 |
| 0.0034 | 522.73 | 46000 | 0.5807 | 0.4329 |
| 0.0025 | 525.0 | 46200 | 0.5959 | 0.4189 |
| 0.0025 | 527.27 | 46400 | 0.5939 | 0.4216 |
| 0.0022 | 529.54 | 46600 | 0.5964 | 0.4232 |
| 0.003 | 531.82 | 46800 | 0.5664 | 0.4173 |
| 0.0021 | 534.09 | 47000 | 0.5670 | 0.4138 |
| 0.0025 | 536.36 | 47200 | 0.5611 | 0.4247 |
| 0.0024 | 538.63 | 47400 | 0.5691 | 0.4321 |
| 0.0019 | 540.91 | 47600 | 0.5992 | 0.4224 |
| 0.0037 | 543.18 | 47800 | 0.5790 | 0.4181 |
| 0.0025 | 545.45 | 48000 | 0.5650 | 0.4294 |
| 0.0025 | 547.73 | 48200 | 0.5732 | 0.4189 |
| 0.0025 | 550.0 | 48400 | 0.5566 | 0.4220 |
| 0.0023 | 552.27 | 48600 | 0.5646 | 0.4236 |
| 0.0027 | 554.54 | 48800 | 0.5437 | 0.4263 |
| 0.0026 | 556.82 | 49000 | 0.5993 | 0.4239 |
| 0.0017 | 559.09 | 49200 | 0.6158 | 0.4212 |
| 0.002 | 561.36 | 49400 | 0.6104 | 0.4064 |
| 0.0028 | 563.63 | 49600 | 0.5689 | 0.4021 |
| 0.0025 | 565.91 | 49800 | 0.5760 | 0.4029 |
| 0.0024 | 568.18 | 50000 | 0.5700 | 0.4037 |
| 0.0024 | 570.45 | 50200 | 0.5509 | 0.3935 |
| 0.0018 | 572.73 | 50400 | 0.5562 | 0.4048 |
| 0.0018 | 575.0 | 50600 | 0.5786 | 0.3955 |
| 0.0023 | 577.27 | 50800 | 0.5855 | 0.3959 |
| 0.0017 | 579.54 | 51000 | 0.5988 | 0.3939 |
| 0.0021 | 581.82 | 51200 | 0.6132 | 0.4064 |
| 0.0017 | 584.09 | 51400 | 0.6202 | 0.4099 |
| 0.0019 | 586.36 | 51600 | 0.6118 | 0.4048 |
| 0.0023 | 588.63 | 51800 | 0.6114 | 0.4158 |
| 0.0019 | 590.91 | 52000 | 0.5808 | 0.4126 |
| 0.0025 | 593.18 | 52200 | 0.5906 | 0.4037 |
| 0.0016 | 595.45 | 52400 | 0.5965 | 0.4056 |
| 0.0021 | 597.73 | 52600 | 0.6126 | 0.4099 |
| 0.0019 | 600.0 | 52800 | 0.5913 | 0.4060 |
| 0.0014 | 602.27 | 53000 | 0.6450 | 0.4076 |
| 0.0021 | 604.54 | 53200 | 0.6500 | 0.4189 |
| 0.002 | 606.82 | 53400 | 0.6026 | 0.4111 |
| 0.0022 | 609.09 | 53600 | 0.6318 | 0.4099 |
| 0.003 | 611.36 | 53800 | 0.6038 | 0.4111 |
| 0.0022 | 613.63 | 54000 | 0.6086 | 0.4083 |
| 0.0013 | 615.91 | 54200 | 0.6320 | 0.4025 |
| 0.0016 | 618.18 | 54400 | 0.6159 | 0.3974 |
| 0.0018 | 620.45 | 54600 | 0.6266 | 0.3998 |
| 0.002 | 622.73 | 54800 | 0.5920 | 0.3994 |
| 0.001 | 625.0 | 55000 | 0.6196 | 0.3935 |
| 0.0018 | 627.27 | 55200 | 0.6468 | 0.4009 |
| 0.002 | 629.54 | 55400 | 0.6505 | 0.4052 |
| 0.002 | 631.82 | 55600 | 0.6362 | 0.4072 |
| 0.0018 | 634.09 | 55800 | 0.6430 | 0.3963 |
| 0.0017 | 636.36 | 56000 | 0.6434 | 0.3966 |
| 0.0014 | 638.63 | 56200 | 0.6473 | 0.4080 |
| 0.0021 | 640.91 | 56400 | 0.6272 | 0.4115 |
| 0.0026 | 643.18 | 56600 | 0.6343 | 0.4099 |
| 0.0023 | 645.45 | 56800 | 0.6223 | 0.4025 |
| 0.0016 | 647.73 | 57000 | 0.5879 | 0.4025 |
| 0.001 | 650.0 | 57200 | 0.6274 | 0.4005 |
| 0.0019 | 652.27 | 57400 | 0.6517 | 0.4044 |
| 0.0011 | 654.54 | 57600 | 0.6571 | 0.4080 |
| 0.002 | 656.82 | 57800 | 0.6377 | 0.4087 |
| 0.0024 | 659.09 | 58000 | 0.6013 | 0.4146 |
| 0.0021 | 661.36 | 58200 | 0.5985 | 0.4185 |
| 0.0018 | 663.63 | 58400 | 0.6148 | 0.4150 |
| 0.0015 | 665.91 | 58600 | 0.6318 | 0.4013 |
| 0.0016 | 668.18 | 58800 | 0.6109 | 0.4025 |
| 0.002 | 670.45 | 59000 | 0.5823 | 0.4029 |
| 0.0013 | 672.73 | 59200 | 0.5800 | 0.4146 |
| 0.0018 | 675.0 | 59400 | 0.5794 | 0.4080 |
| 0.0012 | 677.27 | 59600 | 0.5997 | 0.4037 |
| 0.0016 | 679.54 | 59800 | 0.6111 | 0.4005 |
| 0.0019 | 681.82 | 60000 | 0.6112 | 0.4099 |
| 0.0022 | 684.09 | 60200 | 0.6030 | 0.4068 |
| 0.0013 | 686.36 | 60400 | 0.6247 | 0.4115 |
| 0.0017 | 688.63 | 60600 | 0.5981 | 0.4111 |
| 0.0016 | 690.91 | 60800 | 0.5773 | 0.4122 |
| 0.0016 | 693.18 | 61000 | 0.6019 | 0.4068 |
| 0.0014 | 695.45 | 61200 | 0.5931 | 0.4021 |
| 0.0015 | 697.73 | 61400 | 0.6391 | 0.4083 |
| 0.0015 | 700.0 | 61600 | 0.6148 | 0.4021 |
| 0.0013 | 702.27 | 61800 | 0.6143 | 0.4138 |
| 0.0009 | 704.54 | 62000 | 0.6203 | 0.4115 |
| 0.0015 | 706.82 | 62200 | 0.6452 | 0.4115 |
| 0.0011 | 709.09 | 62400 | 0.6323 | 0.4107 |
| 0.0025 | 711.36 | 62600 | 0.6248 | 0.4243 |
| 0.001 | 713.63 | 62800 | 0.6225 | 0.4189 |
| 0.0013 | 715.91 | 63000 | 0.6328 | 0.4161 |
| 0.0011 | 718.18 | 63200 | 0.6299 | 0.4130 |
| 0.0016 | 720.45 | 63400 | 0.6110 | 0.4072 |
| 0.0012 | 722.73 | 63600 | 0.6095 | 0.4064 |
| 0.0017 | 725.0 | 63800 | 0.6205 | 0.4033 |
| 0.0009 | 727.27 | 64000 | 0.6330 | 0.4099 |
| 0.0011 | 729.54 | 64200 | 0.6184 | 0.3974 |
| 0.0016 | 731.82 | 64400 | 0.6147 | 0.4052 |
| 0.0014 | 734.09 | 64600 | 0.6271 | 0.4068 |
| 0.0013 | 736.36 | 64800 | 0.6157 | 0.4091 |
| 0.0017 | 738.63 | 65000 | 0.6157 | 0.4072 |
| 0.0022 | 740.91 | 65200 | 0.5888 | 0.4177 |
| 0.0017 | 743.18 | 65400 | 0.6002 | 0.4134 |
| 0.0017 | 745.45 | 65600 | 0.5989 | 0.4161 |
| 0.0016 | 747.73 | 65800 | 0.6069 | 0.4185 |
| 0.0019 | 750.0 | 66000 | 0.5962 | 0.4212 |
| 0.0011 | 752.27 | 66200 | 0.6044 | 0.4161 |
| 0.0014 | 754.54 | 66400 | 0.5978 | 0.4197 |
| 0.0008 | 756.82 | 66600 | 0.6291 | 0.4146 |
| 0.0009 | 759.09 | 66800 | 0.6203 | 0.4181 |
| 0.0009 | 761.36 | 67000 | 0.6124 | 0.4138 |
| 0.0013 | 763.63 | 67200 | 0.6191 | 0.4138 |
| 0.0017 | 765.91 | 67400 | 0.6061 | 0.4087 |
| 0.001 | 768.18 | 67600 | 0.6233 | 0.4111 |
| 0.0014 | 770.45 | 67800 | 0.6189 | 0.4080 |
| 0.0013 | 772.73 | 68000 | 0.6493 | 0.4056 |
| 0.0013 | 775.0 | 68200 | 0.6454 | 0.4037 |
| 0.0013 | 777.27 | 68400 | 0.6373 | 0.4095 |
| 0.0011 | 779.54 | 68600 | 0.6563 | 0.4041 |
| 0.0013 | 781.82 | 68800 | 0.6622 | 0.4122 |
| 0.0012 | 784.09 | 69000 | 0.6858 | 0.4220 |
| 0.0019 | 786.36 | 69200 | 0.6658 | 0.4126 |
| 0.001 | 788.63 | 69400 | 0.6650 | 0.4068 |
| 0.0007 | 790.91 | 69600 | 0.6777 | 0.4107 |
| 0.0011 | 793.18 | 69800 | 0.6772 | 0.4158 |
| 0.001 | 795.45 | 70000 | 0.6820 | 0.4173 |
| 0.0007 | 797.73 | 70200 | 0.6870 | 0.4138 |
| 0.0011 | 800.0 | 70400 | 0.6732 | 0.4115 |
| 0.0011 | 802.27 | 70600 | 0.6755 | 0.4154 |
| 0.0009 | 804.54 | 70800 | 0.6707 | 0.4224 |
| 0.0014 | 806.82 | 71000 | 0.6733 | 0.4134 |
| 0.0009 | 809.09 | 71200 | 0.6690 | 0.4142 |
| 0.0011 | 811.36 | 71400 | 0.6875 | 0.4169 |
| 0.0019 | 813.63 | 71600 | 0.6471 | 0.4138 |
| 0.0006 | 815.91 | 71800 | 0.6599 | 0.4099 |
| 0.0014 | 818.18 | 72000 | 0.6543 | 0.4052 |
| 0.0011 | 820.45 | 72200 | 0.6699 | 0.4052 |
| 0.0014 | 822.73 | 72400 | 0.6626 | 0.4080 |
| 0.0014 | 825.0 | 72600 | 0.6601 | 0.4142 |
| 0.0007 | 827.27 | 72800 | 0.6686 | 0.4115 |
| 0.0007 | 829.54 | 73000 | 0.6657 | 0.4134 |
| 0.0009 | 831.82 | 73200 | 0.6810 | 0.4056 |
| 0.0013 | 834.09 | 73400 | 0.6734 | 0.4060 |
| 0.0005 | 836.36 | 73600 | 0.6815 | 0.4033 |
| 0.0026 | 838.63 | 73800 | 0.6607 | 0.4056 |
| 0.001 | 840.91 | 74000 | 0.6700 | 0.4041 |
| 0.0008 | 843.18 | 74200 | 0.6871 | 0.4041 |
| 0.0006 | 845.45 | 74400 | 0.6910 | 0.4099 |
| 0.0009 | 847.73 | 74600 | 0.7027 | 0.4064 |
| 0.0009 | 850.0 | 74800 | 0.7108 | 0.4017 |
| 0.0005 | 852.27 | 75000 | 0.7122 | 0.3986 |
| 0.001 | 854.54 | 75200 | 0.7051 | 0.3982 |
| 0.0007 | 856.82 | 75400 | 0.7266 | 0.3978 |
| 0.0015 | 859.09 | 75600 | 0.7051 | 0.4017 |
| 0.0007 | 861.36 | 75800 | 0.7038 | 0.3970 |
| 0.001 | 863.63 | 76000 | 0.6847 | 0.4037 |
| 0.0013 | 865.91 | 76200 | 0.6823 | 0.4033 |
| 0.001 | 868.18 | 76400 | 0.6926 | 0.4060 |
| 0.0018 | 870.45 | 76600 | 0.7035 | 0.4025 |
| 0.0007 | 872.73 | 76800 | 0.6993 | 0.4048 |
| 0.0006 | 875.0 | 77000 | 0.7083 | 0.4048 |
| 0.001 | 877.27 | 77200 | 0.7217 | 0.4083 |
| 0.0014 | 879.54 | 77400 | 0.7013 | 0.4076 |
| 0.0009 | 881.82 | 77600 | 0.6874 | 0.4083 |
| 0.0012 | 884.09 | 77800 | 0.6966 | 0.4103 |
| 0.0008 | 886.36 | 78000 | 0.6989 | 0.3982 |
| 0.001 | 888.63 | 78200 | 0.7000 | 0.4115 |
| 0.0011 | 890.91 | 78400 | 0.7105 | 0.4107 |
| 0.0008 | 893.18 | 78600 | 0.7103 | 0.4068 |
| 0.0022 | 895.45 | 78800 | 0.6641 | 0.4033 |
| 0.0006 | 897.73 | 79000 | 0.6635 | 0.4048 |
| 0.0009 | 900.0 | 79200 | 0.6535 | 0.4072 |
| 0.0009 | 902.27 | 79400 | 0.6598 | 0.4048 |
| 0.0007 | 904.54 | 79600 | 0.6684 | 0.4017 |
| 0.0008 | 906.82 | 79800 | 0.6752 | 0.4009 |
| 0.0008 | 909.09 | 80000 | 0.6820 | 0.4037 |
| 0.0009 | 911.36 | 80200 | 0.6672 | 0.3986 |
| 0.0007 | 913.63 | 80400 | 0.6692 | 0.4025 |
| 0.001 | 915.91 | 80600 | 0.6676 | 0.4056 |
| 0.0012 | 918.18 | 80800 | 0.6484 | 0.4002 |
| 0.0008 | 920.45 | 81000 | 0.6541 | 0.4002 |
| 0.0005 | 922.73 | 81200 | 0.6626 | 0.3990 |
| 0.0013 | 925.0 | 81400 | 0.6688 | 0.3994 |
| 0.0015 | 927.27 | 81600 | 0.6472 | 0.4048 |
| 0.0011 | 929.54 | 81800 | 0.6432 | 0.4041 |
| 0.0012 | 931.82 | 82000 | 0.6374 | 0.3939 |
| 0.0005 | 934.09 | 82200 | 0.6519 | 0.4005 |
| 0.001 | 936.36 | 82400 | 0.6281 | 0.3998 |
| 0.0007 | 938.63 | 82600 | 0.6621 | 0.4048 |
| 0.0005 | 940.91 | 82800 | 0.6670 | 0.3990 |
| 0.0009 | 943.18 | 83000 | 0.6707 | 0.3982 |
| 0.0006 | 945.45 | 83200 | 0.6592 | 0.3924 |
| 0.0006 | 947.73 | 83400 | 0.6772 | 0.4002 |
| 0.0017 | 950.0 | 83600 | 0.6786 | 0.4068 |
| 0.0004 | 952.27 | 83800 | 0.6849 | 0.4052 |
| 0.0002 | 954.54 | 84000 | 0.6914 | 0.4044 |
| 0.0009 | 956.82 | 84200 | 0.6806 | 0.4002 |
| 0.0006 | 959.09 | 84400 | 0.6621 | 0.4013 |
| 0.0004 | 961.36 | 84600 | 0.6712 | 0.4029 |
| 0.0007 | 963.63 | 84800 | 0.6775 | 0.4052 |
| 0.0004 | 965.91 | 85000 | 0.6769 | 0.4080 |
| 0.001 | 968.18 | 85200 | 0.6470 | 0.4029 |
| 0.0009 | 970.45 | 85400 | 0.6505 | 0.4002 |
| 0.0011 | 972.73 | 85600 | 0.6543 | 0.4041 |
| 0.0003 | 975.0 | 85800 | 0.6568 | 0.4009 |
| 0.0004 | 977.27 | 86000 | 0.6627 | 0.3990 |
| 0.0014 | 979.54 | 86200 | 0.6564 | 0.4021 |
| 0.0012 | 981.82 | 86400 | 0.6535 | 0.3982 |
| 0.0007 | 984.09 | 86600 | 0.6443 | 0.4009 |
| 0.0008 | 986.36 | 86800 | 0.6466 | 0.4005 |
| 0.0004 | 988.63 | 87000 | 0.6538 | 0.4017 |
| 0.0008 | 990.91 | 87200 | 0.6485 | 0.3998 |
| 0.0004 | 993.18 | 87400 | 0.6504 | 0.3951 |
| 0.0008 | 995.45 | 87600 | 0.6410 | 0.3970 |
| 0.0004 | 997.73 | 87800 | 0.6420 | 0.3986 |
| 0.0005 | 1000.0 | 88000 | 0.6507 | 0.3998 |
| 0.0005 | 1002.27 | 88200 | 0.6540 | 0.3998 |
| 0.0006 | 1004.54 | 88400 | 0.6531 | 0.3978 |
| 0.0015 | 1006.82 | 88600 | 0.6411 | 0.3986 |
| 0.0007 | 1009.09 | 88800 | 0.6411 | 0.3990 |
| 0.0003 | 1011.36 | 89000 | 0.6432 | 0.3998 |
| 0.0004 | 1013.63 | 89200 | 0.6546 | 0.4021 |
| 0.0004 | 1015.91 | 89400 | 0.6542 | 0.4002 |
| 0.0006 | 1018.18 | 89600 | 0.6622 | 0.4009 |
| 0.0008 | 1020.45 | 89800 | 0.6674 | 0.3963 |
| 0.0008 | 1022.73 | 90000 | 0.6563 | 0.3935 |
| 0.0003 | 1025.0 | 90200 | 0.6638 | 0.3955 |
| 0.0004 | 1027.27 | 90400 | 0.6667 | 0.3951 |
| 0.001 | 1029.54 | 90600 | 0.6462 | 0.3943 |
| 0.0007 | 1031.82 | 90800 | 0.6462 | 0.3920 |
| 0.0006 | 1034.09 | 91000 | 0.6477 | 0.3947 |
| 0.0005 | 1036.36 | 91200 | 0.6500 | 0.3955 |
| 0.0006 | 1038.63 | 91400 | 0.6461 | 0.3955 |
| 0.0007 | 1040.91 | 91600 | 0.6526 | 0.4002 |
| 0.0004 | 1043.18 | 91800 | 0.6514 | 0.4021 |
| 0.0003 | 1045.45 | 92000 | 0.6610 | 0.4025 |
| 0.0007 | 1047.73 | 92200 | 0.6583 | 0.3966 |
| 0.0004 | 1050.0 | 92400 | 0.6413 | 0.3955 |
| 0.0009 | 1052.27 | 92600 | 0.6411 | 0.3951 |
| 0.0008 | 1054.54 | 92800 | 0.6374 | 0.3978 |
| 0.0003 | 1056.82 | 93000 | 0.6359 | 0.3955 |
| 0.0006 | 1059.09 | 93200 | 0.6400 | 0.3955 |
| 0.0007 | 1061.36 | 93400 | 0.6363 | 0.3974 |
| 0.0002 | 1063.63 | 93600 | 0.6413 | 0.3959 |
| 0.0006 | 1065.91 | 93800 | 0.6428 | 0.3927 |
| 0.0007 | 1068.18 | 94000 | 0.6388 | 0.3912 |
| 0.0007 | 1070.45 | 94200 | 0.6371 | 0.3920 |
| 0.0005 | 1072.73 | 94400 | 0.6449 | 0.3904 |
| 0.0015 | 1075.0 | 94600 | 0.6415 | 0.3916 |
| 0.0005 | 1077.27 | 94800 | 0.6355 | 0.3920 |
| 0.0005 | 1079.54 | 95000 | 0.6362 | 0.3920 |
| 0.0004 | 1081.82 | 95200 | 0.6303 | 0.3931 |
| 0.0011 | 1084.09 | 95400 | 0.6255 | 0.3955 |
| 0.0003 | 1086.36 | 95600 | 0.6314 | 0.3959 |
| 0.0005 | 1088.63 | 95800 | 0.6353 | 0.3943 |
| 0.0007 | 1090.91 | 96000 | 0.6398 | 0.3931 |
| 0.0003 | 1093.18 | 96200 | 0.6472 | 0.3963 |
| 0.0007 | 1095.45 | 96400 | 0.6479 | 0.3947 |
| 0.0005 | 1097.73 | 96600 | 0.6520 | 0.3947 |
| 0.0005 | 1100.0 | 96800 | 0.6569 | 0.3963 |
| 0.0007 | 1102.27 | 97000 | 0.6551 | 0.3982 |
| 0.0004 | 1104.54 | 97200 | 0.6554 | 0.3966 |
| 0.0013 | 1106.82 | 97400 | 0.6404 | 0.3963 |
| 0.0008 | 1109.09 | 97600 | 0.6421 | 0.3963 |
| 0.0007 | 1111.36 | 97800 | 0.6379 | 0.3931 |
| 0.0003 | 1113.63 | 98000 | 0.6403 | 0.3931 |
| 0.0003 | 1115.91 | 98200 | 0.6443 | 0.3916 |
| 0.0004 | 1118.18 | 98400 | 0.6461 | 0.3986 |
| 0.0003 | 1120.45 | 98600 | 0.6440 | 0.3978 |
| 0.0001 | 1122.73 | 98800 | 0.6480 | 0.4002 |
| 0.0006 | 1125.0 | 99000 | 0.6497 | 0.4005 |
| 0.0004 | 1127.27 | 99200 | 0.6533 | 0.4009 |
| 0.0007 | 1129.54 | 99400 | 0.6461 | 0.3978 |
| 0.0006 | 1131.82 | 99600 | 0.6453 | 0.3986 |
| 0.0001 | 1134.09 | 99800 | 0.6478 | 0.4002 |
| 0.0005 | 1136.36 | 100000 | 0.6508 | 0.3978 |
| 0.0004 | 1138.63 | 100200 | 0.6500 | 0.3955 |
| 0.0006 | 1140.91 | 100400 | 0.6521 | 0.3924 |
| 0.0004 | 1143.18 | 100600 | 0.6543 | 0.3931 |
| 0.0004 | 1145.45 | 100800 | 0.6552 | 0.3935 |
| 0.0006 | 1147.73 | 101000 | 0.6550 | 0.3931 |
| 0.0002 | 1150.0 | 101200 | 0.6567 | 0.3924 |
| 0.0002 | 1152.27 | 101400 | 0.6585 | 0.3912 |
| 0.0006 | 1154.54 | 101600 | 0.6588 | 0.3900 |
| 0.0006 | 1156.82 | 101800 | 0.6583 | 0.3892 |
| 0.0007 | 1159.09 | 102000 | 0.6579 | 0.3916 |
| 0.0003 | 1161.36 | 102200 | 0.6588 | 0.3908 |
| 0.0004 | 1163.63 | 102400 | 0.6603 | 0.3912 |
| 0.0004 | 1165.91 | 102600 | 0.6602 | 0.3916 |
| 0.0004 | 1168.18 | 102800 | 0.6596 | 0.3916 |
| 0.0007 | 1170.45 | 103000 | 0.6577 | 0.3924 |
| 0.0003 | 1172.73 | 103200 | 0.6593 | 0.3900 |
| 0.0004 | 1175.0 | 103400 | 0.6577 | 0.3900 |
| 0.0006 | 1177.27 | 103600 | 0.6554 | 0.3900 |
| 0.0004 | 1179.54 | 103800 | 0.6554 | 0.3885 |
| 0.0005 | 1181.82 | 104000 | 0.6545 | 0.3873 |
| 0.0003 | 1184.09 | 104200 | 0.6545 | 0.3885 |
| 0.0002 | 1186.36 | 104400 | 0.6546 | 0.3888 |
| 0.0006 | 1188.63 | 104600 | 0.6547 | 0.3892 |
| 0.0007 | 1190.91 | 104800 | 0.6542 | 0.3885 |
| 0.0002 | 1193.18 | 105000 | 0.6543 | 0.3885 |
| 0.0003 | 1195.45 | 105200 | 0.6544 | 0.3881 |
| 0.0004 | 1197.73 | 105400 | 0.6544 | 0.3881 |
| 0.0009 | 1200.0 | 105600 | 0.6544 | 0.3881 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.11.6
|
enelpol/evalatin2022-feats-open | 40c9ad8e5686a28c8439c35ad33c08ef2fe04194 | 2022-04-15T21:22:56.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | enelpol | null | enelpol/evalatin2022-feats-open | 1 | null | transformers | 31,264 | Entry not found |
adnankhawaja/B_T_FB_LM | 4a5037c335f52ef031c43a18294a8979e2e616be | 2022-04-16T06:39:39.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | adnankhawaja | null | adnankhawaja/B_T_FB_LM | 1 | null | transformers | 31,265 | Entry not found |
chrisvinsen/wav2vec2-base-commonvoice-demo-colab-2 | 4af967b2e5fd8905c9968f30f8b76a866dfab004 | 2022-04-16T10:54:34.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | chrisvinsen | null | chrisvinsen/wav2vec2-base-commonvoice-demo-colab-2 | 1 | null | transformers | 31,266 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-commonvoice-demo-colab-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-commonvoice-demo-colab-2
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
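For illustration, a rough `TrainingArguments` equivalent of the hyperparameters listed above is sketched below; the output directory and anything not listed (logging, saving, data collator) are assumptions rather than details of the original run.
```python
from transformers import TrainingArguments

# Sketch of the listed hyperparameters; output_dir and any unlisted
# settings are assumptions, not taken from the original training script.
training_args = TrainingArguments(
    output_dir="wav2vec2-base-commonvoice-demo-colab-2",  # assumption
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=30,
    fp16=True,  # "Native AMP" mixed precision
)
```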
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 4.7784 | 2.58 | 500 | 2.9962 | 1.0 |
| 3.0067 | 5.15 | 1000 | 3.0303 | 1.0 |
| 3.0098 | 7.73 | 1500 | 3.0305 | 1.0 |
| 3.0015 | 10.31 | 2000 | 3.0308 | 1.0 |
| 3.0062 | 12.89 | 2500 | 3.0310 | 1.0 |
| 3.0074 | 15.46 | 3000 | 3.0311 | 1.0 |
| 3.0085 | 18.04 | 3500 | 3.0313 | 1.0 |
| 3.0046 | 20.62 | 4000 | 3.0314 | 1.0 |
| 3.0065 | 23.2 | 4500 | nan | 1.0 |
| 0.0 | 25.77 | 5000 | nan | 1.0 |
| 0.0 | 28.35 | 5500 | nan | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
jackh1995/bert-finetuned | 957c8e16094042e7c88c939791f3624b55d57c65 | 2022-04-16T09:09:24.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | jackh1995 | null | jackh1995/bert-finetuned | 1 | null | transformers | 31,267 | Entry not found |
masakhane/afrimt5_fon_fr_news | 2dcce5961e39d72c03535517abb47dd14f8defa6 | 2022-04-16T13:06:17.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/afrimt5_fon_fr_news | 1 | null | transformers | 31,268 | ---
license: afl-3.0
---
|
masakhane/mbart50_fon_fr_news | 6fe1269c20065594b0034c4a635d45e271b3a782 | 2022-04-16T14:01:57.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/mbart50_fon_fr_news | 1 | null | transformers | 31,269 | ---
license: afl-3.0
---
|
masakhane/afrimbart_fon_fr_news | a599297124e3e3e3cb2e0c4c127529cb9df12a9c | 2022-04-16T14:02:01.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/afrimbart_fon_fr_news | 1 | null | transformers | 31,270 | ---
license: afl-3.0
---
|
masakhane/m2m100_418M_fr_fon_news | 9f051a504751eca8e8986367f74da62eca4ccbf5 | 2022-04-16T17:53:18.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_fr_fon_news | 1 | null | transformers | 31,271 | ---
license: afl-3.0
---
|
masakhane/m2m100_418M_fon_fr_rel_news_ft | 8dbe48bb6cc9c0da83e43cf4333852c3cea3e351 | 2022-04-16T17:53:25.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_fon_fr_rel_news_ft | 1 | null | transformers | 31,272 | ---
license: afl-3.0
---
|
masakhane/m2m100_418M_fr_fon_rel_ft | ab86fac0e987464aa77ea211f3058a54caa2c4ee | 2022-04-16T18:51:46.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_fr_fon_rel_ft | 1 | null | transformers | 31,273 | ---
license: afl-3.0
---
|
masakhane/m2m100_418M_fon_fr_rel_ft | b04373c940f4cd5f04c3dae4ea0bd6756aa82c3c | 2022-04-16T18:51:43.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_fon_fr_rel_ft | 1 | null | transformers | 31,274 | ---
license: afl-3.0
---
|
masakhane/m2m100_418M_fon_fr_rel | abd26bb8c1a7b1af9c2b9b965c0b6b9b10821daf | 2022-04-16T18:51:53.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_fon_fr_rel | 1 | null | transformers | 31,275 | ---
license: afl-3.0
---
|
masakhane/m2m100_418M_fr_fon_rel | fc9ba13621137f1a2a29703db058db1badd3c843 | 2022-04-16T18:51:50.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_fr_fon_rel | 1 | null | transformers | 31,276 | ---
license: afl-3.0
---
|
haryoaw/id-recigen-bart | ec17889bda186bdb2dccdf16e843c3aa64f6fde1 | 2022-04-17T10:19:27.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"id",
"transformers",
"bart",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | haryoaw | null | haryoaw/id-recigen-bart | 1 | 1 | transformers | 31,277 | ---
language: id
tags:
- bart
- id
license: mit
---
# Indonesia Recipe Ingredients Generator Model
**WARNING: hosted inference on Hugging Face might not run, since the tokenizer used is not a `transformers` tokenizer.**
Feel free to test the model [in this space](https://huggingface.co/spaces/haryoaw/id-recigen)
😎 **Have fun generating ingredients** 😎
This is a model fine-tuned to generate Indonesian food ingredients, one of the personal projects I did in my free time.
Basically, you give it the name of a food and it will produce the ingredients for that food.
## Model
Data: [Indonesian Recipe Data on Kaggle](https://www.kaggle.com/datasets/canggih/indonesian-food-recipes)
Pre-trained Model: [IndoBART-v2](https://huggingface.co/indobenchmark/indobart-v2)
## How to use
We will specify the usage of the tokenizer and the model.
### Tokenizer
Since we use `indobart-v2`, we need to use its tokenizer.
First, install the tokenizer package with `pip install indobenchmark-toolkit`.
After that, you can load the tokenizer:
```python
from indobenchmark.tokenization_indonlg import IndoNLGTokenizer
tokenizer = IndoNLGTokenizer.from_pretrained("haryoaw/id-recigen-bart")
```
**EDIT**:
It seems the tokenizer in the package is not the same as the one I used to fine-tune the model.
There are some noticeable bugs, such as some subword tokens not being treated as subwords. Nevertheless, it still works!
### Model
The model can be loaded by using AutoModel.
```python
from transformers import AutoModelForSeq2SeqLM
model = AutoModelForSeq2SeqLM.from_pretrained("haryoaw/id-recigen-bart")
```
## Input Example
Make sure to input a **LOWERCASE** food name. The tokenizer is case-sensitive!
```
sayur asam
```
```
nasi goreng ayam
```
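Putting the two pieces together, a minimal end-to-end sketch might look like the following; the decoding settings are assumptions, and the tokenizer quirks mentioned above may affect the output.
```python
from indobenchmark.tokenization_indonlg import IndoNLGTokenizer
from transformers import AutoModelForSeq2SeqLM

tokenizer = IndoNLGTokenizer.from_pretrained("haryoaw/id-recigen-bart")
model = AutoModelForSeq2SeqLM.from_pretrained("haryoaw/id-recigen-bart")

food_name = "sayur asam"  # remember: lowercase food names only
inputs = tokenizer(food_name, return_tensors="pt")

# Beam-search settings are assumptions, not values from the original project.
output_ids = model.generate(**inputs, max_length=128, num_beams=5)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```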
~To be continued..
|
masakhane/afrimt5_fr_mos_news | 8e962e355b2fa72065bc19b3f727918a7585e3c3 | 2022-04-17T06:42:42.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/afrimt5_fr_mos_news | 1 | null | transformers | 31,278 | ---
license: afl-3.0
---
|
clapika2010/hospital_finetuned2 | d678fe2a235bd0bdc9c55d2135dc4723ad3e1d5d | 2022-04-16T23:56:32.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | clapika2010 | null | clapika2010/hospital_finetuned2 | 1 | null | transformers | 31,279 | Entry not found |
crystina-z/mdpr-tied-msmarco-pyserini | d98dd05c2b937b33a1a5cd05b3535b3ef8464dae | 2022-04-19T20:52:36.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | crystina-z | null | crystina-z/mdpr-tied-msmarco-pyserini | 1 | null | transformers | 31,280 | Entry not found |
MrBananaHuman/kogpt_medium_wiki | 65eb38d097ce4bc457de91fd4789ecd68fd0ce25 | 2022-04-17T02:06:02.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | MrBananaHuman | null | MrBananaHuman/kogpt_medium_wiki | 1 | null | transformers | 31,281 | Entry not found |
speydach/layoutlmv2-finetuned-cord | 70d06c90e14d80204a1f2d47ae33b02356bd22a4 | 2022-04-17T02:14:14.000Z | [
"pytorch",
"layoutlmv2",
"token-classification",
"transformers",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | speydach | null | speydach/layoutlmv2-finetuned-cord | 1 | null | transformers | 31,282 | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: layoutlmv2-finetuned-cord
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-finetuned-cord
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 15
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cpu
- Datasets 2.1.0
- Tokenizers 0.12.1
|
MrBananaHuman/engpt_medium_to_kogpt_medium_w_freezing | ce0c8271f205b169b897b75a5275dd762a540f6c | 2022-04-17T02:10:07.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | MrBananaHuman | null | MrBananaHuman/engpt_medium_to_kogpt_medium_w_freezing | 1 | null | transformers | 31,283 | Entry not found |
adnankhawaja/R_T_SMS_LM | 0731e023c05768a05315a60648228f1a7fafcab9 | 2022-04-17T07:19:47.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | adnankhawaja | null | adnankhawaja/R_T_SMS_LM | 1 | null | transformers | 31,284 | Entry not found |
masakhane/m2m100_418M_fr_mos_news | 61d3d03d6964922b2ac6f71c8abd8bc31ffcc2a0 | 2022-04-17T08:15:32.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_fr_mos_news | 1 | null | transformers | 31,285 | ---
license: afl-3.0
---
|
masakhane/m2m100_418M_mos_fr_rel_news_ft | 38433a5d36b52b98b189d735cf2280646a6adf36 | 2022-04-17T11:50:10.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_mos_fr_rel_news_ft | 1 | null | transformers | 31,286 | ---
license: afl-3.0
---
|
scasutt/wav2vec2-large-xlsr-53_toy_train_augment_random_noise | b512dcbc2146148c5d139e7023748fe4187f4cdc | 2022-04-17T13:09:25.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | scasutt | null | scasutt/wav2vec2-large-xlsr-53_toy_train_augment_random_noise | 1 | null | transformers | 31,287 | Entry not found |
surafelkindu/AmBERT | ef7fe43be6c962dc8ed448955cb429bd6b9e1a68 | 2022-04-17T20:16:29.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | surafelkindu | null | surafelkindu/AmBERT | 1 | 1 | transformers | 31,288 | ---
license: mit
---
Amharic Language Model
Trained with the RoBERTa architecture. |
masakhane/m2m100_418M_mos_fr_rel_ft | 6517438b1df31240dd96d2c007478becc2620f66 | 2022-04-17T11:50:15.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_mos_fr_rel_ft | 1 | null | transformers | 31,289 | ---
license: afl-3.0
---
|
bhagyarana/t5_squad_v1 | d64e0eae3feed3d57ed39314f41e0c0c39c2a8c1 | 2022-04-17T11:09:45.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | bhagyarana | null | bhagyarana/t5_squad_v1 | 1 | null | transformers | 31,290 | Entry not found |
stevems1/distilroberta-base-SmithsModel2 | b1d737ea256e2ca48de758400792dad98f8e4238 | 2022-04-17T11:53:54.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | stevems1 | null | stevems1/distilroberta-base-SmithsModel2 | 1 | null | transformers | 31,291 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-SmithsModel2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-SmithsModel2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4012
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.8736 | 1.0 | 3632 | 1.6643 |
| 1.5808 | 2.0 | 7264 | 1.4663 |
| 1.498 | 3.0 | 10896 | 1.4090 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
scasutt/wav2vec2-large-xlsr-53_toy_train_fast_masked_audio | a750d206f0cf1d30fc180aded2281834f55dd606 | 2022-04-17T19:03:46.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | scasutt | null | scasutt/wav2vec2-large-xlsr-53_toy_train_fast_masked_audio | 1 | null | transformers | 31,292 | Entry not found |
leung233/opus-mt-en-zh-finetuned-0-to-1 | 5acef0e2e264ebbc7d6015e490f12a53e87c34ea | 2022-04-18T05:56:25.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | leung233 | null | leung233/opus-mt-en-zh-finetuned-0-to-1 | 1 | null | transformers | 31,293 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: opus-mt-en-zh-finetuned-0-to-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-zh-finetuned-0-to-1
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-zh](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
aaaacash/DialoGPT-large-michaelscott | 9b7c881f4e6c78d481f781757577c2ebfdd40a1f | 2022-04-17T19:47:49.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | aaaacash | null | aaaacash/DialoGPT-large-michaelscott | 1 | null | transformers | 31,294 | ---
tags:
- conversational
---
# Michael Scott DialoGPT Model |
creynier/wav2vec2-base-swbd-turn-eos-long_utt_removed3 | 59022e465326a2be5ef2fca7a57c0420ad9d5b3c | 2022-04-18T16:24:17.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | creynier | null | creynier/wav2vec2-base-swbd-turn-eos-long_utt_removed3 | 1 | null | transformers | 31,295 | Entry not found |
AntoDono/DialoGPT-Harry | e987de58f1e966af2ff764e84880d3adb201c7a9 | 2022-04-17T21:33:32.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | AntoDono | null | AntoDono/DialoGPT-Harry | 1 | null | transformers | 31,296 | ---
tags:
- conversational
--- |
danhsf/pegasus-samsum | 567205057a2d5ca4f518f78620db98528b389b58 | 2022-04-17T23:29:38.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"dataset:samsum",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | danhsf | null | danhsf/pegasus-samsum | 1 | null | transformers | 31,297 | ---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4844
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6936 | 0.54 | 500 | 1.4844 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
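As a usage sketch (not part of the original card), the checkpoint can be exercised with the summarization pipeline; the sample dialogue and length limits below are assumptions purely for illustration.
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="danhsf/pegasus-samsum")

# Hypothetical SAMSum-style dialogue, invented for this example.
dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Perfect, see you then!"
)
print(summarizer(dialogue, max_length=60, min_length=10)[0]["summary_text"])
```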
|
PSW/baseline-for-porting-test | a6da863c48285da2f42fb0f9f4a6c520dc74e0d9 | 2022-04-18T01:35:56.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/baseline-for-porting-test | 1 | null | transformers | 31,298 | Entry not found |
BigSalmon/InformalToFormalLincoln38 | 44f47555399d91cd2b1c6cf071886ab252e78e7d | 2022-04-18T03:12:14.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/InformalToFormalLincoln38 | 1 | null | transformers | 31,299 | ```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln38")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln38")
```
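A minimal generation sketch using the objects loaded above; the prompt is abbreviated from the templates that follow, and the sampling settings are assumptions rather than the author's recommended values.
```python
prompt = (
    "informal english: corn fields are all across illinois, visible once you leave chicago.\n"
    "Translated into the Style of Abraham Lincoln:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=60,                    # assumption: room for one rewritten sentence
    do_sample=True,                       # assumption: sampling instead of greedy decoding
    top_p=0.9,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 style models have no pad token by default
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```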
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
Keywords to sentences or sentence. |