Column schema of the export:

| column | dtype | range |
|:--|:--|:--|
| modelId | stringlengths | 4-112 |
| sha | stringlengths | 40-40 |
| lastModified | stringlengths | 24-24 |
| tags | sequence | |
| pipeline_tag | stringclasses | 29 values |
| private | bool | 1 class |
| author | stringlengths | 2-38 |
| config | null | |
| id | stringlengths | 4-112 |
| downloads | float64 | 0-36.8M |
| likes | float64 | 0-712 |
| library_name | stringclasses | 17 values |
| __index_level_0__ | int64 | 0-38.5k |
| readme | stringlengths | 0-186k |
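The rows below are flattened records from this table, one model repository per record, with the raw model-card markdown stored in the `readme` column. As a minimal sketch of how such an export can be queried with the `datasets` library (the dataset id is a hypothetical placeholder, not the actual repository name):

```python
from datasets import load_dataset

# Hypothetical dataset id -- replace with the repository this export actually comes from.
ds = load_dataset("example-org/hub-model-metadata", split="train")

# Example query: conversational models that ship a non-empty model card.
chat_models = ds.filter(
    lambda row: row["pipeline_tag"] == "conversational"
    and row["readme"] not in ("", "Entry not found")
)
print(len(chat_models), "conversational models with a usable readme")
```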
RAJESHNEMANI/Chatbot_AI
f73492d2bfec4da7479fbc8560dceed945caf072
2022-04-05T21:04:18.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
RAJESHNEMANI
null
RAJESHNEMANI/Chatbot_AI
1
null
transformers
31,100
--- tags: - conversational --- # RickBot built for [Chai](https://chai.ml/) Make your own [here](https://colab.research.google.com/drive/1LtVm-VHvDnfNy7SsbZAqhh49ikBwh1un?usp=sharing)
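The card itself gives no usage snippet; as a minimal sketch, a conversational GPT-2 checkpoint like this one is typically driven through the `conversational` pipeline available in transformers releases of this period (the prompt is illustrative only):

```python
from transformers import Conversation, pipeline

# Load the GPT-2 chatbot checkpoint behind the "conversational" pipeline.
chatbot = pipeline("conversational", model="RAJESHNEMANI/Chatbot_AI")

# Illustrative opening turn; quality depends entirely on the original fine-tuning data.
conversation = chatbot(Conversation("Hey Rick, what are you building today?"))
print(conversation.generated_responses[-1])
```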
Danastos/newsqa_bert_el
9e78e8ae708a231d826d667042974119290a4227
2022-04-06T03:31:11.000Z
[ "pytorch", "tensorboard", "bert", "question-answering", "dataset:Danastos/newsqa_el_custom", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
question-answering
false
Danastos
null
Danastos/newsqa_bert_el
1
null
transformers
31,101
--- tags: - generated_from_trainer datasets: - Danastos/newsqa_el_custom model-index: - name: newsqa_bert_el results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # newsqa_bert_el This model is a fine-tuned version of [nlpaueb/bert-base-greek-uncased-v1](https://huggingface.co/nlpaueb/bert-base-greek-uncased-v1) on the Danastos/newsqa_el_custom dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 12 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.18.0.dev0 - Pytorch 1.11.0a0+17540c5 - Datasets 2.0.0 - Tokenizers 0.11.6
kumachan/another-dummy-model
2637ebe8036233db2353954735969b16a070fb3b
2022-04-06T00:54:02.000Z
[ "pytorch", "camembert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
kumachan
null
kumachan/another-dummy-model
1
null
transformers
31,102
Entry not found
Jiyang/EditModel
fb7bc218a447f170ed660b3bd006f664ef1b8c58
2022-04-06T03:22:47.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
Jiyang
null
Jiyang/EditModel
1
null
transformers
31,103
Entry not found
Kuray107/ls-timit-wsj0-100percent-supervised-aug
49f5668e1e975413809d288224d3647abd5fe9df
2022-04-06T14:26:52.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers", "generated_from_trainer", "model-index" ]
automatic-speech-recognition
false
Kuray107
null
Kuray107/ls-timit-wsj0-100percent-supervised-aug
1
null
transformers
31,104
--- tags: - generated_from_trainer model-index: - name: ls-timit-wsj0-100percent-supervised-aug results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ls-timit-wsj0-100percent-supervised-aug This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0489 - Wer: 0.0275 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.3491 | 4.57 | 1000 | 0.0470 | 0.0416 | | 0.1088 | 9.13 | 2000 | 0.0582 | 0.0343 | | 0.0702 | 13.7 | 3000 | 0.0471 | 0.0271 | | 0.0532 | 18.26 | 4000 | 0.0489 | 0.0275 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2 - Datasets 1.18.2 - Tokenizers 0.10.3
husnu/wav2vec2-large-xls-r-300m-turkish-colab
f93da5738fa37b6fbbb6d94757dc06c67e396476
2022-04-12T16:30:52.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "dataset:common_voice", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
husnu
null
husnu/wav2vec2-large-xls-r-300m-turkish-colab
1
null
transformers
31,105
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-turkish-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-turkish-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_6.1 dataset. It achieves the following results on the evaluation set: - Loss: 0.4380 - Wer: 0.3508 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.8764 | 3.67 | 400 | 0.7239 | 0.7221 | | 0.4526 | 7.34 | 800 | 0.5009 | 0.5345 | | 0.2169 | 11.01 | 1200 | 0.4728 | 0.4693 | | 0.1438 | 14.68 | 1600 | 0.4648 | 0.4669 | | 0.1095 | 18.35 | 2000 | 0.4642 | 0.4094 | | 0.0893 | 22.02 | 2400 | 0.4749 | 0.3879 | | 0.0701 | 25.69 | 2800 | 0.4410 | 0.3665 | | 0.056 | 29.36 | 3200 | 0.4380 | 0.3508 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
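The card reports a final WER of 0.3508 but no inference example; as a sketch (the audio file name is a placeholder), the checkpoint can be used for Turkish transcription through the standard ASR pipeline:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="husnu/wav2vec2-large-xls-r-300m-turkish-colab",
)

# Placeholder path -- any 16 kHz mono Turkish speech recording should work here.
print(asr("turkish_sample.wav")["text"])
```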
Siddique/wav2vec2-large-xls-r-300m-turkish-colab
51a60b28c78401df5b09926bd234c84669d21e9c
2022-04-06T05:42:59.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "dataset:common_voice", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
Siddique
null
Siddique/wav2vec2-large-xls-r-300m-turkish-colab
1
null
transformers
31,106
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-turkish-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-turkish-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
chiba/distilbert-base-japanese-finetuned-squad
6ea73e0dd5fc657ac7b5eace2fbc4ceaf7040ce1
2022-04-06T07:12:12.000Z
[ "pytorch", "tensorboard", "distilbert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
false
chiba
null
chiba/distilbert-base-japanese-finetuned-squad
1
null
transformers
31,107
Entry not found
birgermoell/psst-fairseq-time-shift
068e822daa3fda6609a32f85ac1d441fa91d48bb
2022-04-06T08:50:40.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
false
birgermoell
null
birgermoell/psst-fairseq-time-shift
1
null
transformers
31,108
Entry not found
edangx100/t5-small-finetuned-wikisql
d4227a7534353c2912e5e6abb6fb8814a47ff715
2022-04-06T10:23:39.000Z
[ "pytorch", "tensorboard", "t5", "text2text-generation", "dataset:wiki_sql", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
text2text-generation
false
edangx100
null
edangx100/t5-small-finetuned-wikisql
1
null
transformers
31,109
--- license: apache-2.0 tags: - generated_from_trainer datasets: - wiki_sql model-index: - name: t5-small-finetuned-wikisql results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-wikisql This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wiki_sql dataset. It achieves the following results on the evaluation set: - Loss: 0.1246 - Rouge2 Precision: 0.8187 - Rouge2 Recall: 0.7269 - Rouge2 Fmeasure: 0.7629 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure | |:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:| | 0.1952 | 1.0 | 4049 | 0.1567 | 0.7948 | 0.7057 | 0.7406 | | 0.167 | 2.0 | 8098 | 0.1382 | 0.8092 | 0.7171 | 0.7534 | | 0.1517 | 3.0 | 12147 | 0.1296 | 0.8145 | 0.7228 | 0.7589 | | 0.1433 | 4.0 | 16196 | 0.1260 | 0.8175 | 0.7254 | 0.7617 | | 0.1414 | 5.0 | 20245 | 0.1246 | 0.8187 | 0.7269 | 0.7629 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
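A hedged usage sketch for this WikiSQL fine-tune; the `translate English to SQL:` task prefix is a common convention for T5 WikiSQL models and is an assumption here, not something stated on the card:

```python
from transformers import pipeline

text2sql = pipeline("text2text-generation", model="edangx100/t5-small-finetuned-wikisql")

# The task prefix is assumed; check the training script for the exact prompt format.
question = "translate English to SQL: How many singers are older than 30?"
print(text2sql(question, max_length=64)[0]["generated_text"])
```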
xxr/bert-base-chinese-complaint-128
f2e5bb39d5ab44dd12014fc7a49169613805060f
2022-04-06T11:06:31.000Z
[ "pytorch", "bert", "fill-mask", "transformers", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
xxr
null
xxr/bert-base-chinese-complaint-128
1
null
transformers
31,110
--- license: apache-2.0 tags: - generated_from_trainer datasets: - null model_index: - name: bert-base-chinese-complaint-128 results: - task: name: Masked Language Modeling type: fill-mask --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-chinese-complaint-128 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3004 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 16 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 3.3735 | 1.0 | 1250 | 2.4628 | | 2.2412 | 2.0 | 2500 | 2.0378 | | 1.9251 | 3.0 | 3750 | 1.8368 | | 1.7407 | 4.0 | 5000 | 1.6972 | | 1.6137 | 5.0 | 6250 | 1.5937 | | 1.5365 | 6.0 | 7500 | 1.5315 | | 1.4662 | 7.0 | 8750 | 1.4921 | | 1.3985 | 8.0 | 10000 | 1.4517 | | 1.3509 | 9.0 | 11250 | 1.4308 | | 1.3047 | 10.0 | 12500 | 1.3906 | | 1.2745 | 11.0 | 13750 | 1.3467 | | 1.2377 | 12.0 | 15000 | 1.3306 | | 1.2139 | 13.0 | 16250 | 1.3205 | | 1.2027 | 14.0 | 17500 | 1.3098 | | 1.1722 | 15.0 | 18750 | 1.2845 | | 1.1697 | 16.0 | 20000 | 1.3004 | ### Framework versions - Transformers 4.8.2 - Pytorch 1.7.1 - Datasets 1.16.1 - Tokenizers 0.10.3
NbAiLab/nb-mt5-base
ccec5c6f77b685377b297905f3d7a0618a8a7a4f
2022-04-06T13:53:23.000Z
[ "pytorch", "jax", "t5", "feature-extraction", "transformers", "license:apache-2.0" ]
feature-extraction
false
NbAiLab
null
NbAiLab/nb-mt5-base
1
null
transformers
31,111
--- license: apache-2.0 ---
gary109/wav2vec2-base-timit-demo-colab
2674f41ac7b7605c9191786d989c36c8855bad35
2022-04-12T07:51:46.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
gary109
null
gary109/wav2vec2-base-timit-demo-colab
1
null
transformers
31,112
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4707 - Wer: 0.3411 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.4575 | 4.0 | 500 | 1.3367 | 0.9724 | | 0.594 | 8.0 | 1000 | 0.4365 | 0.4390 | | 0.2195 | 12.0 | 1500 | 0.4438 | 0.3955 | | 0.1246 | 16.0 | 2000 | 0.4741 | 0.3707 | | 0.082 | 20.0 | 2500 | 0.4766 | 0.3564 | | 0.0605 | 24.0 | 3000 | 0.4657 | 0.3475 | | 0.0458 | 28.0 | 3500 | 0.4707 | 0.3411 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
binay1999/distilbert-finetuned-ner
8a13cfb03b554f18afb73dd1e7c4a186fcec7bb9
2022-04-06T16:19:17.000Z
[ "pytorch", "distilbert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
binay1999
null
binay1999/distilbert-finetuned-ner
1
null
transformers
31,113
Entry not found
KrishnaAgarwal16/607-project-adversarial
20258d0dab0f3e39b54043e3f6f5f27753d2504b
2022-04-06T18:43:49.000Z
[ "pytorch", "bert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
false
KrishnaAgarwal16
null
KrishnaAgarwal16/607-project-adversarial
1
null
transformers
31,114
Model trained for 1 epoch on 1000 examples from the `adversarial_qa` dataset
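Since the card is a single sentence, a minimal extractive question-answering sketch may help; the question and context below are illustrative only:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="KrishnaAgarwal16/607-project-adversarial")

# Illustrative inputs; the checkpoint saw 1000 adversarial_qa examples for one epoch.
result = qa(
    question="How long was the model trained?",
    context="The checkpoint was fine-tuned for one epoch on a small slice of adversarial_qa.",
)
print(result["answer"], round(result["score"], 3))
```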
ucl-snlp-group-11/byt5-base-cryptic-crosswords
b07bf2dc20d130ba731094cc712792cf8c4636c9
2022-04-06T20:58:39.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
ucl-snlp-group-11
null
ucl-snlp-group-11/byt5-base-cryptic-crosswords
1
null
transformers
31,115
Entry not found
ucl-snlp-group-11/t5-large-cryptic-crosswords
ba3431170ddc6fbbbc96b14d17a8b5e89e60fad2
2022-04-06T21:08:27.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
ucl-snlp-group-11
null
ucl-snlp-group-11/t5-large-cryptic-crosswords
1
null
transformers
31,116
Entry not found
Splend1dchan/t5lephone200000-small-squad1024
8003e23f5d0f891390dca0d2a8c83f975fa46b5b
2022-04-07T06:47:50.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
Splend1dchan
null
Splend1dchan/t5lephone200000-small-squad1024
1
null
transformers
31,117
Entry not found
gary109/wav2vec2-base-MIR_ST500-demo-colab
ad94e9163c9950b5511229d3410170ea86860f79
2022-04-08T07:48:19.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
gary109
null
gary109/wav2vec2-base-MIR_ST500-demo-colab
1
null
transformers
31,118
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-MIR_ST500-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-MIR_ST500-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.7360 - Wer: 0.9837 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 500 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | 101.0917 | 16.67 | 100 | 18.8979 | 0.8208 | | 15.5054 | 33.33 | 200 | 10.9184 | 0.8208 | | 10.1879 | 50.0 | 300 | 7.6480 | 0.8208 | | 6.777 | 66.67 | 400 | 3.5386 | 1.0 | | 3.0546 | 83.33 | 500 | 2.8794 | 1.0 | | 2.8661 | 100.0 | 600 | 2.8405 | 1.0 | | 2.847 | 116.67 | 700 | 2.8554 | 1.0 | | 2.7661 | 133.33 | 800 | 2.6343 | 1.0 | | 2.3474 | 150.0 | 900 | 2.7464 | 1.0 | | 2.2464 | 166.67 | 1000 | 2.3565 | 1.0 | | 2.207 | 183.33 | 1100 | 2.8854 | 1.0 | | 2.3138 | 200.0 | 1200 | 2.5868 | 1.0 | | 2.259 | 216.67 | 1300 | 2.6530 | 1.0 | | 2.1667 | 233.33 | 1400 | 2.4921 | 1.0 | | 2.1268 | 250.0 | 1500 | 2.5435 | 1.0 | | 2.1089 | 266.67 | 1600 | 2.5444 | 1.0 | | 2.0845 | 283.33 | 1700 | 2.6796 | 1.0 | | 2.0672 | 300.0 | 1800 | 2.5824 | 1.0 | | 2.055 | 316.67 | 1900 | 2.4631 | 1.0 | | 2.0317 | 333.33 | 2000 | 2.5751 | 1.0 | | 2.0141 | 350.0 | 2100 | 2.5627 | 1.0 | | 1.9914 | 366.67 | 2200 | 2.6132 | 1.0 | | 1.9489 | 383.33 | 2300 | 2.7527 | 1.0 | | 1.9146 | 400.0 | 2400 | 2.6121 | 0.9935 | | 1.893 | 416.67 | 2500 | 2.7110 | 0.9902 | | 1.845 | 433.33 | 2600 | 2.7410 | 0.9967 | | 1.8095 | 450.0 | 2700 | 2.7013 | 0.9935 | | 1.7708 | 466.67 | 2800 | 2.7719 | 0.9935 | | 1.7224 | 483.33 | 2900 | 2.7740 | 0.9837 | | 1.6961 | 500.0 | 3000 | 2.7360 | 0.9837 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.1+cu102 - Datasets 1.18.3 - Tokenizers 0.10.3
deepspeechvision/wav2vec2hindiasr_thefinal
50e13d217ecf125b60e3f27fe5aca269d4c399bf
2022-04-07T06:13:23.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
false
deepspeechvision
null
deepspeechvision/wav2vec2hindiasr_thefinal
1
null
transformers
31,119
Entry not found
tau/false_large_rouge_paraNone_sentNone_span0_5_1024_0.3_epoch1
84bcd2a68a5120f64b655a1de4007e621388a8cd
2022-04-07T05:25:49.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
tau
null
tau/false_large_rouge_paraNone_sentNone_span0_5_1024_0.3_epoch1
1
null
transformers
31,120
Entry not found
tau/false_large_random_paraNone_sent0_spanNone_5_1024_0.3_epoch1
d4726bc3db2fffebab965e8519df75aa5811906d
2022-04-07T05:38:53.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
tau
null
tau/false_large_random_paraNone_sent0_spanNone_5_1024_0.3_epoch1
1
null
transformers
31,121
Entry not found
jeremykke/albert-base-v2-finetuned-swag
fe78c6d9cae555bb32e2bc89234ca6693ce069c5
2022-04-07T08:03:49.000Z
[ "pytorch", "tensorboard", "albert", "multiple-choice", "transformers" ]
multiple-choice
false
jeremykke
null
jeremykke/albert-base-v2-finetuned-swag
1
null
transformers
31,122
Entry not found
shunxing1234/GLM
791fac9f109297959f8cdeefd23ba1725152b2fd
2022-04-29T07:34:50.000Z
[ "pytorch", "transformers" ]
null
false
shunxing1234
null
shunxing1234/GLM
1
null
transformers
31,123
Entry not found
guillaumegg/wav2vec2-base-timit-demo
6d56ede107abb5c8d566a775d566ddae87dd233f
2022-04-07T07:18:24.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
false
guillaumegg
null
guillaumegg/wav2vec2-base-timit-demo
1
null
transformers
31,124
Entry not found
huggingtweets/enginemode11-phoenixstk19-scarbstech
22a84927c46566fde3551f87f67828bbc53905eb
2022-04-07T08:18:46.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/enginemode11-phoenixstk19-scarbstech
1
null
transformers
31,125
--- language: en thumbnail: http://www.huggingtweets.com/enginemode11-phoenixstk19-scarbstech/1649319522056/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/456005573/scarbs_400x400.JPG&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1507753713288589318/5wpnOWkx_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1509794479691157514/u9JrmBtO_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Craig Scarborough & Alpine F1 Team technical updates & EngineMode11</div> <div style="text-align: center; font-size: 14px;">@enginemode11-phoenixstk19-scarbstech</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Craig Scarborough & Alpine F1 Team technical updates & EngineMode11. | Data | Craig Scarborough | Alpine F1 Team technical updates | EngineMode11 | | --- | --- | --- | --- | | Tweets downloaded | 3250 | 2389 | 1555 | | Retweets | 387 | 39 | 65 | | Short tweets | 646 | 334 | 288 | | Tweets kept | 2217 | 2016 | 1202 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/vojhtxh0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @enginemode11-phoenixstk19-scarbstech's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/28cxey7a) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/28cxey7a/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/enginemode11-phoenixstk19-scarbstech') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Brokette/wav2vec2-base-timit-test3
a94f63e89723b967d18f7f402d4fbc3e4915e4e1
2022-04-07T09:47:29.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
Brokette
null
Brokette/wav2vec2-base-timit-test3
1
null
transformers
31,126
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-test3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-test3 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.19.0.dev0 - Pytorch 1.10.0+cu111 - Datasets 2.0.1.dev0 - Tokenizers 0.11.6
Brokette/wav2vec2-base-timit-test5
c532434d8470c3cf076b6e5276bb5a356dd0f56e
2022-04-07T10:29:10.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
Brokette
null
Brokette/wav2vec2-base-timit-test5
1
null
transformers
31,127
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-test5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-test5 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.19.0.dev0 - Pytorch 1.10.0+cu111 - Datasets 2.0.1.dev0 - Tokenizers 0.11.6
notexist/tttw
7a90edb23a96e751f74fc5c20418f444b6247d3d
2022-04-07T10:21:27.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
notexist
null
notexist/tttw
1
null
transformers
31,128
Entry not found
swagat-panda/multilingual-pos-tagger-language-detection-indian-context-muril
c283acca0ace877f95e650e92e21e2852e77065f
2022-04-07T14:55:12.000Z
[ "pytorch", "bert", "transformers" ]
null
false
swagat-panda
null
swagat-panda/multilingual-pos-tagger-language-detection-indian-context-muril
1
null
transformers
31,129
Entry not found
vocab-transformers/distilbert-word2vec_256k-MLM_250k
136548ac0a4e1b1436c0dec90609fce938276798
2022-04-07T12:46:40.000Z
[ "pytorch", "distilbert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
vocab-transformers
null
vocab-transformers/distilbert-word2vec_256k-MLM_250k
1
null
transformers
31,130
# DistilBERT with word2vec token embeddings This model has a word2vec token embedding matrix with 256k entries. The word2vec was trained on 100GB data from C4, MSMARCO, News, Wikipedia, S2ORC, for 3 epochs. Then the model was trained on this dataset with MLM for 250k steps (batch size 64). The token embeddings were NOT updated.
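The card states that the word2vec token embeddings were kept frozen while the rest of the network was trained with MLM. As a sketch under that assumption (not the authors' actual training script), freezing the embedding matrix of a DistilBERT masked-LM typically looks like this:

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_id = "vocab-transformers/distilbert-word2vec_256k-MLM_250k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# Freeze the 256k-entry word2vec embedding matrix so MLM updates only the transformer layers.
model.distilbert.embeddings.word_embeddings.weight.requires_grad = False

# Quick fill-mask sanity check on the loaded checkpoint.
text = f"Paris is the capital of {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="pt")
print(model(**inputs).logits.shape)
```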
vocab-transformers/distilbert-word2vec_256k-MLM_1M
7bbf38d7e54685bc016a289b4238de7dfb8e2d95
2022-04-07T13:00:01.000Z
[ "pytorch", "distilbert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
vocab-transformers
null
vocab-transformers/distilbert-word2vec_256k-MLM_1M
1
null
transformers
31,131
# DistilBERT with word2vec token embeddings This model has a word2vec token embedding matrix with 256k entries. The word2vec was trained on 100GB data from C4, MSMARCO, News, Wikipedia, S2ORC, for 3 epochs. Then the model was trained on this dataset with MLM for 1M steps (batch size 64). The token embeddings were NOT updated.
BigSalmon/MediumInformalToFormalLincoln2
b7556b742280633ed9c767d4654adff4ada20915
2022-04-07T17:15:03.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
BigSalmon
null
BigSalmon/MediumInformalToFormalLincoln2
1
null
transformers
31,132
``` from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("BigSalmon/MediumInformalToFormalLincoln2") model = AutoModelForCausalLM.from_pretrained("BigSalmon/MediumInformalToFormalLincoln2") ``` ``` - moviepass to return - this summer - swooped up by - original co-founder stacy spikes text: the re-launch of moviepass is set to transpire this summer, ( rescued at the hands of / under the stewardship of / spearheaded by ) its founding father, stacy spikes. *** - middle schools do not have recess - should get back to doing it - amazing for communication - and getting kids to move around text: a casualty of the education reform craze, recess has been excised from middle schools. this is tragic, for it is instrumental in honing children's communication skills and encouraging physical activity. *** - ``` ``` How To Make Prompt: informal english: i am very ready to do that just that. Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end. Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task. *** informal english: space is huge and needs to be explored. Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless. Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration. *** informal english: corn fields are all across illinois, visible once you leave chicago. Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago. informal english: ``` ``` infill: chrome extensions [MASK] accomplish everyday tasks. Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks. infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. infill: ``` ``` Essay Intro (Warriors vs. Rockets in Game 7): text: eagerly anticipated by fans, game 7's are the highlight of the post-season. text: ever-building in suspense, game 7's have the crowd captivated. *** Essay Intro (South Korean TV Is Becoming Popular): text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ). text: increasingly held in critical esteem, south korean television continues to impress. text: at the forefront of quality content, south korea is quickly achieving celebrity status. *** Essay Intro ( ``` ``` Search: What is the definition of Checks and Balances? https://en.wikipedia.org/wiki/Checks_and_balances Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate. 
https://www.harvard.edu/glossary/Checks_and_Balances Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power https://www.law.cornell.edu/library/constitution/Checks_and_Balances Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power. *** Search: What is the definition of Separation of Powers? https://en.wikipedia.org/wiki/Separation_of_powers The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power. https://www.yale.edu/tcf/Separation_of_Powers.html Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined. *** Search: What is the definition of Connection of Powers? https://en.wikipedia.org/wiki/Connection_of_powers Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches. https://simple.wikipedia.org/wiki/Connection_of_powers The term Connection of Powers describes a system of government in which there is overlap between different parts of the government. *** Search: What is the definition of ``` ``` Search: What are phrase synonyms for "second-guess"? https://www.powerthesaurus.org/second-guess/synonyms Shortest to Longest: - feel dubious about - raise an eyebrow at - wrinkle their noses at - cast a jaundiced eye at - teeter on the fence about *** Search: What are phrase synonyms for "mean to newbies"? https://www.powerthesaurus.org/mean_to_newbies/synonyms Shortest to Longest: - readiness to balk at rookies - absence of tolerance for novices - hostile attitude toward newcomers *** Search: What are phrase synonyms for "make use of"? https://www.powerthesaurus.org/make_use_of/synonyms Shortest to Longest: - call upon - glean value from - reap benefits from - derive utility from - seize on the merits of - draw on the strength of - tap into the potential of *** Search: What are phrase synonyms for "hurting itself"? https://www.powerthesaurus.org/hurting_itself/synonyms Shortest to Longest: - erring - slighting itself - forfeiting its integrity - doing itself a disservice - evincing a lack of backbone *** Search: What are phrase synonyms for " ``` ``` - declining viewership facing the nba. - does not have to be this way. - in fact, many solutions exist. - the four point line would surely draw in eyes. text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership. *** - ``` ``` original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick. infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick. 
*** original: ``` ``` wordy: classical music is becoming less popular more and more. Translate into Concise Text: interest in classic music is fading. *** wordy: ``` ``` sweet: savvy voters ousted him. longer: voters who were informed delivered his defeat. *** sweet: ``` ``` 1: commercial space company spacex plans to launch a whopping 52 flights in 2022. 2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022. 3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights. 4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company. 5: a commercial space company, spacex aims to conduct 52 flights in 2022. *** 1: ``` ``` from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln35") model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln35") ``` ``` - moviepass to return - this summer - swooped up by - original co-founder stacy spikes text: the re-launch of moviepass is set to transpire this summer, ( rescued at the hands of / under the stewardship of / spearheaded by ) its founding father, stacy spikes. *** - middle schools do not have recess - should get back to doing it - amazing for communication - and getting kids to move around text: a casualty of the education reform craze, recess has been excised from middle schools. this is tragic, for it is instrumental in honing children's communication skills and encouraging physical activity. *** - ``` ``` How To Make Prompt: informal english: i am very ready to do that just that. Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end. Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task. *** informal english: space is huge and needs to be explored. Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless. Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration. *** informal english: corn fields are all across illinois, visible once you leave chicago. Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago. informal english: ``` ``` infill: chrome extensions [MASK] accomplish everyday tasks. Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks. infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. infill: ``` ``` Essay Intro (Warriors vs. Rockets in Game 7): text: eagerly anticipated by fans, game 7's are the highlight of the post-season. text: ever-building in suspense, game 7's have the crowd captivated. 
*** Essay Intro (South Korean TV Is Becoming Popular): text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ). text: increasingly held in critical esteem, south korean television continues to impress. text: at the forefront of quality content, south korea is quickly achieving celebrity status. *** Essay Intro ( ``` ``` Search: What is the definition of Checks and Balances? https://en.wikipedia.org/wiki/Checks_and_balances Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate. https://www.harvard.edu/glossary/Checks_and_Balances Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power https://www.law.cornell.edu/library/constitution/Checks_and_Balances Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power. *** Search: What is the definition of Separation of Powers? https://en.wikipedia.org/wiki/Separation_of_powers The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power. https://www.yale.edu/tcf/Separation_of_Powers.html Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined. *** Search: What is the definition of Connection of Powers? https://en.wikipedia.org/wiki/Connection_of_powers Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches. https://simple.wikipedia.org/wiki/Connection_of_powers The term Connection of Powers describes a system of government in which there is overlap between different parts of the government. *** Search: What is the definition of ``` ``` Search: What are phrase synonyms for "second-guess"? https://www.powerthesaurus.org/second-guess/synonyms Shortest to Longest: - feel dubious about - raise an eyebrow at - wrinkle their noses at - cast a jaundiced eye at - teeter on the fence about *** Search: What are phrase synonyms for "mean to newbies"? https://www.powerthesaurus.org/mean_to_newbies/synonyms Shortest to Longest: - readiness to balk at rookies - absence of tolerance for novices - hostile attitude toward newcomers *** Search: What are phrase synonyms for "make use of"? https://www.powerthesaurus.org/make_use_of/synonyms Shortest to Longest: - call upon - glean value from - reap benefits from - derive utility from - seize on the merits of - draw on the strength of - tap into the potential of *** Search: What are phrase synonyms for "hurting itself"? https://www.powerthesaurus.org/hurting_itself/synonyms Shortest to Longest: - erring - slighting itself - forfeiting its integrity - doing itself a disservice - evincing a lack of backbone *** Search: What are phrase synonyms for " ``` ``` - declining viewership facing the nba. - does not have to be this way. - in fact, many solutions exist. - the four point line would surely draw in eyes. 
text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership. *** - ``` ``` original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick. infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick. *** original: ``` ``` wordy: classical music is becoming less popular more and more. Translate into Concise Text: interest in classic music is fading. *** wordy: ``` ``` sweet: savvy voters ousted him. longer: voters who were informed delivered his defeat. *** sweet: ``` ``` 1: commercial space company spacex plans to launch a whopping 52 flights in 2022. 2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022. 3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights. 4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company. 5: a commercial space company, spacex aims to conduct 52 flights in 2022. *** 1: ``` (makes one sentence, two sentences) (probably will not work all that well) ``` entry: an upsurge in public interest in astronomy accompanied nasa's stellar picture of a starry night. extended: public interest in astronomy soared. not coincidentally, this was concurrent with nasa's release of a phenomenal image of a starry night. *** entry: ``` (makes two sentences, one sentence) (probably will not work all that well) ``` initial: phone books used to be everywhere. they have been replaced by the internet. combined: once ubiquitous, phone books have been supplanted by the internet. *** initial: ``` ``` what are the drawbacks of living near an airbnb? □ noise □ parking □ traffic □ security □ strangers *** ``` Keywords to sentences or sentence.
ali-issa/wav2vec2-Arabizi
c2304f7109c5e05cdb6f1f12a3407f42e7c4e2c6
2022-04-07T18:51:18.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
ali-issa
null
ali-issa/wav2vec2-Arabizi
1
null
transformers
31,133
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-Arabizi results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-Arabizi This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.6433 - Wer: 0.8331 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 400 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.7105 | 10.0 | 200 | 2.9462 | 1.0 | | 1.9532 | 20.0 | 400 | 1.4871 | 0.8887 | | 0.3542 | 30.0 | 600 | 1.6433 | 0.8331 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
lucypallent/distilbert-base-uncased-finetuned-imdb
ed86ef0ffb0ad60b8369dd2763b97d7553aec7a4
2022-04-07T18:27:42.000Z
[ "pytorch", "tensorboard", "distilbert", "fill-mask", "dataset:imdb", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
fill-mask
false
lucypallent
null
lucypallent/distilbert-base-uncased-finetuned-imdb
1
null
transformers
31,134
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.4718 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.707 | 1.0 | 157 | 2.4883 | | 2.572 | 2.0 | 314 | 2.4240 | | 2.5377 | 3.0 | 471 | 2.4355 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
cj-mills/xlm-roberta-base-finetuned-panx-de
1ebdc3c9051a980588be5a495ad96896f330932c
2022-04-08T01:29:12.000Z
[ "pytorch", "tensorboard", "xlm-roberta", "token-classification", "dataset:xtreme", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
cj-mills
null
cj-mills/xlm-roberta-base-finetuned-panx-de
1
null
transformers
31,135
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8575809199318569 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1319 - F1: 0.8576 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.3264 | 1.0 | 197 | 0.1623 | 0.8139 | | 0.136 | 2.0 | 394 | 0.1331 | 0.8451 | | 0.096 | 3.0 | 591 | 0.1319 | 0.8576 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0 - Datasets 1.16.1 - Tokenizers 0.10.3
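The card reports F1 on PAN-X.de but gives no inference example; a minimal sketch with the token-classification pipeline (the German sentence is illustrative):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="cj-mills/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)

# Illustrative sentence containing a person and a location entity.
for entity in ner("Angela Merkel besuchte gestern Berlin."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```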
Bistolero/german_all
3e20912fd78f18ac878c372af5bec898ea71a02f
2022-04-07T21:56:55.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
Bistolero
null
Bistolero/german_all
1
null
transformers
31,136
Entry not found
srmukundb/distilbert-base-uncased-finetuned-squad
2d379798cb3754ed0b9b9ff5ae913bce5d9afd98
2022-04-08T12:08:02.000Z
[ "pytorch", "tensorboard", "distilbert", "question-answering", "dataset:squad_v2", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
question-answering
false
srmukundb
null
srmukundb/distilbert-base-uncased-finetuned-squad
1
null
transformers
31,137
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad_v2 model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 1.4104 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.2182 | 1.0 | 8235 | 1.2318 | | 0.9451 | 2.0 | 16470 | 1.2693 | | 0.7554 | 3.0 | 24705 | 1.4104 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
cj-mills/xlm-roberta-base-finetuned-panx-de-fr
1c43332ab7b11485f33f1b022bce0388311963ab
2022-04-08T01:41:15.000Z
[ "pytorch", "xlm-roberta", "token-classification", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
cj-mills
null
cj-mills/xlm-roberta-base-finetuned-panx-de-fr
1
null
transformers
31,138
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1580 - F1: 0.8547 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.3718 | 1.0 | 269 | 0.1761 | 0.8223 | | 0.1535 | 2.0 | 538 | 0.1608 | 0.8404 | | 0.1074 | 3.0 | 807 | 0.1580 | 0.8547 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0 - Datasets 1.16.1 - Tokenizers 0.10.3
cj-mills/xlm-roberta-base-finetuned-panx-en
100d385d3ee5acb6ced4732d37d5611f0040d3c2
2022-04-08T02:04:02.000Z
[ "pytorch", "xlm-roberta", "token-classification", "dataset:xtreme", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
cj-mills
null
cj-mills/xlm-roberta-base-finetuned-panx-en
1
null
transformers
31,139
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-en results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.en metrics: - name: F1 type: f1 value: 0.5793693212185996 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.5084 - F1: 0.5794 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.7119 | 1.0 | 19 | 1.0009 | 0.2266 | | 0.891 | 2.0 | 38 | 0.6405 | 0.5281 | | 0.6023 | 3.0 | 57 | 0.5084 | 0.5794 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0 - Datasets 1.16.1 - Tokenizers 0.10.3
cj-mills/xlm-roberta-base-finetuned-panx-all
3e2e6513ddfcdf6caa7fa00f0c68cdfdba0be13e
2022-04-08T02:13:57.000Z
[ "pytorch", "xlm-roberta", "token-classification", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
cj-mills
null
cj-mills/xlm-roberta-base-finetuned-panx-all
1
null
transformers
31,140
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-all results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-all This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1674 - F1: 0.8477 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.3701 | 1.0 | 313 | 0.2000 | 0.8054 | | 0.1629 | 2.0 | 626 | 0.1680 | 0.8378 | | 0.1156 | 3.0 | 939 | 0.1674 | 0.8477 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0 - Datasets 1.16.1 - Tokenizers 0.10.3
MrYiRen/DialoGPT-small-ZC
52bdf0ea19d337236e4a4625a18afc554f891c40
2022-04-08T02:35:01.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
MrYiRen
null
MrYiRen/DialoGPT-small-ZC
1
null
transformers
31,141
--- tags: - conversational --- # Harry Potter2 DialoGPT Model
Pisit/wave2vec2-front
89544fb15a938a894c903c1f3230debe2d1a120a
2022-04-08T06:04:26.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
false
Pisit
null
Pisit/wave2vec2-front
1
null
transformers
31,142
Entry not found
guzelgun/dummy-model
b208975d9899e19a6b52874c74dabdc6f5e4715f
2022-04-08T05:47:40.000Z
[ "pytorch", "camembert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
guzelgun
null
guzelgun/dummy-model
1
null
transformers
31,143
Entry not found
chiba/electra-small-japanese-generator_test
9cc3505df203c44d50e4b140725b10c9ce05226c
2022-04-12T04:55:24.000Z
[ "pytorch", "electra", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
false
chiba
null
chiba/electra-small-japanese-generator_test
1
null
transformers
31,144
Entry not found
Falia/wav2vec2-xlsr-300m-vox_mg
7abba62506d6c21da994343c507ea88ef2f92e90
2022-06-11T12:31:13.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
false
Falia
null
Falia/wav2vec2-xlsr-300m-vox_mg
1
null
transformers
31,145
Entry not found
kiana/distilbert-base-uncased-finetuned-squad
03357077abb5110045e1e8c53c3cc5eaf607e166
2022-04-23T11:54:15.000Z
[ "pytorch", "tensorboard", "distilbert", "question-answering", "dataset:squad_v2", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
question-answering
false
kiana
null
kiana/distilbert-base-uncased-finetuned-squad
1
null
transformers
31,146
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad_v2 model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 1.4088 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.2545 | 1.0 | 8235 | 1.2770 | | 0.9861 | 2.0 | 16470 | 1.3071 | | 0.8098 | 3.0 | 24705 | 1.4088 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.1.0 - Tokenizers 0.12.1
edwardjross/xlm-roberta-base-finetuned-recipe-ar
f72a9b9b1bbf8805e4e32bb496bf131ed81202af
2022-04-09T02:14:30.000Z
[ "pytorch", "xlm-roberta", "token-classification", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
edwardjross
null
edwardjross/xlm-roberta-base-finetuned-recipe-ar
1
null
transformers
31,147
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-recipe-ar results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-recipe-ar This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0529 - F1: 0.9856 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.4605 | 1.0 | 74 | 0.1084 | 0.9609 | | 0.1105 | 2.0 | 148 | 0.0563 | 0.9809 | | 0.0696 | 3.0 | 222 | 0.0500 | 0.9851 | | 0.0512 | 4.0 | 296 | 0.0529 | 0.9856 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.9.1 - Datasets 1.18.4 - Tokenizers 0.11.6
edwardjross/xlm-roberta-base-finetuned-recipe-gk
c55bb323f8ee5a74d00746cf62608ce17a18d5f8
2022-04-09T02:23:00.000Z
[ "pytorch", "xlm-roberta", "token-classification", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
edwardjross
null
edwardjross/xlm-roberta-base-finetuned-recipe-gk
1
null
transformers
31,148
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-recipe-gk results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-recipe-gk This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1505 - F1: 0.9536 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.292 | 1.0 | 258 | 0.1525 | 0.9565 | | 0.1231 | 2.0 | 516 | 0.1348 | 0.9619 | | 0.0787 | 3.0 | 774 | 0.1408 | 0.9607 | | 0.0655 | 4.0 | 1032 | 0.1505 | 0.9536 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.9.1 - Datasets 1.18.4 - Tokenizers 0.11.6
johnpaulbin/skript-1m-gpt-neo350m
a55020c5ea0277f617f70334023f82ac279a266b
2022-04-08T14:21:41.000Z
[ "pytorch", "gpt_neo", "text-generation", "transformers" ]
text-generation
false
johnpaulbin
null
johnpaulbin/skript-1m-gpt-neo350m
1
null
transformers
31,149
Entry not found
AvengingPrime/Change-My-View-Model-1
4986625c803a87733af9c0badacd9b32f65fd317
2022-04-08T16:40:51.000Z
[ "pytorch", "encoder-decoder", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
AvengingPrime
null
AvengingPrime/Change-My-View-Model-1
1
null
transformers
31,150
Entry not found
bmichele/poetry-generation-nextline-mbart-gut-en-single
ae22a9b7d6dffbe689b7adfeba971f079cd5622e
2022-04-08T19:13:43.000Z
[ "pytorch", "mbart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
bmichele
null
bmichele/poetry-generation-nextline-mbart-gut-en-single
1
null
transformers
31,151
# poetry-generation-nextline-mbart-gut-en-single * `nextline`: generates a poem line from previous line(s) * `mbart`: base model is [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) * `gut`: trained on Project Gutenberg data * `en`: English language * `single`: uses only last poem line as input for generation
Danastos/squad_bert_el
a583b29f3ccbec575c54195166932f9f8ef2ece3
2022-04-09T02:25:43.000Z
[ "pytorch", "tensorboard", "bert", "question-answering", "dataset:Danastos/squad_el_custom", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
question-answering
false
Danastos
null
Danastos/squad_bert_el
1
null
transformers
31,152
--- tags: - generated_from_trainer datasets: - Danastos/squad_el_custom model-index: - name: squad_bert_el results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # squad_bert_el This model is a fine-tuned version of [nlpaueb/bert-base-greek-uncased-v1](https://huggingface.co/nlpaueb/bert-base-greek-uncased-v1) on the Danastos/squad_el_custom dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 12 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.19.0.dev0 - Pytorch 1.11.0a0+17540c5 - Datasets 2.0.0 - Tokenizers 0.11.6
Wizounovziki/t5-small-finetuned-xsum
b85212c03bbdb72dc6d30a55ca7f23659c9b0335
2022-04-09T09:24:06.000Z
[ "pytorch", "tensorboard", "t5", "text2text-generation", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
text2text-generation
false
Wizounovziki
null
Wizounovziki/t5-small-finetuned-xsum
1
null
transformers
31,153
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: t5-small-finetuned-xsum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-xsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | No log | 1.0 | 13 | 2.9185 | 20.6059 | 0.7473 | 20.5288 | 20.5999 | 18.87 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
Wizounovziki/t5-small-ipad-sum
a958114c4068f04a4b8b9875c3e96da382b26a0d
2022-04-09T10:44:23.000Z
[ "pytorch", "tensorboard", "t5", "text2text-generation", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
text2text-generation
false
Wizounovziki
null
Wizounovziki/t5-small-ipad-sum
1
null
transformers
31,154
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: t5-small-ipad-sum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-ipad-sum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3632 - Rouge1: 90.6 - Rouge2: 29.6667 - Rougel: 90.8667 - Rougelsum: 90.6667 - Gen Len: 4.79 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 13 | 2.7713 | 20.7123 | 0.7601 | 20.6467 | 20.6954 | 18.85 | | No log | 2.0 | 26 | 1.9722 | 23.2307 | 1.3571 | 23.263 | 23.2952 | 18.25 | | No log | 3.0 | 39 | 1.2886 | 46.3724 | 8.0862 | 46.5163 | 46.4406 | 13.7 | | No log | 4.0 | 52 | 0.8267 | 78.4825 | 14.1975 | 78.6464 | 78.3548 | 7.38 | | No log | 5.0 | 65 | 0.6405 | 81.8222 | 15.7532 | 81.8856 | 81.88 | 6.3 | | No log | 6.0 | 78 | 0.5210 | 83.2111 | 17.5 | 83.2931 | 83.1583 | 5.46 | | No log | 7.0 | 91 | 0.4425 | 87.154 | 21.7917 | 87.2008 | 87.169 | 4.99 | | No log | 8.0 | 104 | 0.3974 | 89.7619 | 27.6667 | 89.8571 | 89.8817 | 4.85 | | No log | 9.0 | 117 | 0.3735 | 90.4 | 29.6667 | 90.5706 | 90.4635 | 4.87 | | No log | 10.0 | 130 | 0.3632 | 90.6 | 29.6667 | 90.8667 | 90.6667 | 4.79 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
bhoppenstedt/js-fakes-4bars
4e04484fe39e6bf890dc52efd081670dbd20e430
2022-04-09T12:36:45.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
bhoppenstedt
null
bhoppenstedt/js-fakes-4bars
1
null
transformers
31,155
Entry not found
Bogula/js-fakes-4bars
93205512bf3a3fc45059f5e71f06cd40c07ec4f9
2022-04-09T12:39:38.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
Bogula
null
Bogula/js-fakes-4bars
1
null
transformers
31,156
Entry not found
DarrellTimothy/DialoGPT-small-harrypotter
89cf5831d8d40ea09e0902227fc0276b19f631ac
2022-04-09T12:50:34.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
DarrellTimothy
null
DarrellTimothy/DialoGPT-small-harrypotter
1
null
transformers
31,157
--- tags: - conversational --- # Harry Potter DialoGPT Model
tau/tavbert-tr
e5cc769220f4bd9e2bd353839c157a0355cb1fd7
2022-04-09T12:55:55.000Z
[ "pytorch", "roberta", "fill-mask", "tr", "dataset:oscar", "transformers", "language model", "autotrain_compatible" ]
fill-mask
false
tau
null
tau/tavbert-tr
1
1
transformers
31,158
--- language: tr tags: - roberta - language model datasets: - oscar --- # TavBERT base model A Turkish BERT-style masked language model operating over characters, pre-trained by masking spans of characters, similarly to SpanBERT (Joshi et al., 2020). ### How to use ```python import numpy as np import torch from transformers import AutoModelForMaskedLM, AutoTokenizer model = AutoModelForMaskedLM.from_pretrained("tau/tavbert-tr") tokenizer = AutoTokenizer.from_pretrained("tau/tavbert-tr") def mask_sentence(sent, span_len=5): start_pos = np.random.randint(0, len(sent) - span_len) masked_sent = sent[:start_pos] + '[MASK]' * span_len + sent[start_pos + span_len:] print("Masked sentence:", masked_sent) output = model(**tokenizer.encode_plus(masked_sent, return_tensors='pt'))['logits'][0][1:-1] preds = [int(x) for x in torch.argmax(torch.softmax(output, axis=1), axis=1)[start_pos:start_pos + span_len]] pred_sent = sent[:start_pos] + ''.join(tokenizer.convert_ids_to_tokens(preds)) + sent[start_pos + span_len:] print("Model's prediction:", pred_sent) ``` ## Training data OSCAR (Ortiz, 2019) Turkish section (27 GB text, 77 million sentences).
tau/tavbert-ar
96b589cd0539801fb7680d11837a65deefc9b0e8
2022-04-09T13:27:47.000Z
[ "pytorch", "roberta", "fill-mask", "ar", "dataset:oscar", "transformers", "language model", "autotrain_compatible" ]
fill-mask
false
tau
null
tau/tavbert-ar
1
null
transformers
31,159
--- language: ar tags: - roberta - language model datasets: - oscar --- # TavBERT base model An Arabic BERT-style masked language model operating over characters, pre-trained by masking spans of characters, similarly to SpanBERT (Joshi et al., 2020). ### How to use ```python import numpy as np import torch from transformers import AutoModelForMaskedLM, AutoTokenizer model = AutoModelForMaskedLM.from_pretrained("tau/tavbert-ar") tokenizer = AutoTokenizer.from_pretrained("tau/tavbert-ar") def mask_sentence(sent, span_len=5): start_pos = np.random.randint(0, len(sent) - span_len) masked_sent = sent[:start_pos] + '[MASK]' * span_len + sent[start_pos + span_len:] print("Masked sentence:", masked_sent) output = model(**tokenizer.encode_plus(masked_sent, return_tensors='pt'))['logits'][0][1:-1] preds = [int(x) for x in torch.argmax(torch.softmax(output, axis=1), axis=1)[start_pos:start_pos + span_len]] pred_sent = sent[:start_pos] + ''.join(tokenizer.convert_ids_to_tokens(preds)) + sent[start_pos + span_len:] print("Model's prediction:", pred_sent) ``` ## Training data OSCAR (Ortiz, 2019) Arabic section (32 GB text, 67 million sentences).
masakhane/afrimbart_bam_fr_news
30c034a9bdf8fcb161f17f387b426b024dde4f99
2022-04-11T13:18:47.000Z
[ "pytorch", "mbart", "text2text-generation", "transformers", "license:afl-3.0", "autotrain_compatible" ]
text2text-generation
false
masakhane
null
masakhane/afrimbart_bam_fr_news
1
null
transformers
31,160
--- license: afl-3.0 ---
masakhane/afrimbart_fr_bam_news
5d7cc3646cb00468a56e331a11ad8b577addaf0e
2022-04-11T13:20:11.000Z
[ "pytorch", "mbart", "text2text-generation", "transformers", "license:afl-3.0", "autotrain_compatible" ]
text2text-generation
false
masakhane
null
masakhane/afrimbart_fr_bam_news
1
null
transformers
31,161
--- license: afl-3.0 ---
masakhane/afrimt5_bam_fr_news
71ed3412f7c5245ba308a9faee38fb6d9257a48f
2022-04-11T13:27:50.000Z
[ "pytorch", "mt5", "text2text-generation", "transformers", "license:afl-3.0", "autotrain_compatible" ]
text2text-generation
false
masakhane
null
masakhane/afrimt5_bam_fr_news
1
null
transformers
31,162
--- license: afl-3.0 ---
masakhane/afrimt5_fr_bam_news
237547917e1534d1d76b8b301bce30416d8dfd66
2022-04-11T13:27:55.000Z
[ "pytorch", "mt5", "text2text-generation", "transformers", "license:afl-3.0", "autotrain_compatible" ]
text2text-generation
false
masakhane
null
masakhane/afrimt5_fr_bam_news
1
null
transformers
31,163
--- license: afl-3.0 ---
masakhane/afribyt5_fr_bam_news
2d686120482b50aa44623bcfb51ceddd610ff057
2022-04-11T13:34:08.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "license:afl-3.0", "autotrain_compatible" ]
text2text-generation
false
masakhane
null
masakhane/afribyt5_fr_bam_news
1
null
transformers
31,164
--- license: afl-3.0 ---
masakhane/byt5_fr_bam_news
7e9db3c1dff8a40ba10e8187bb645bf4485b9449
2022-04-11T13:41:42.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "license:afl-3.0", "autotrain_compatible" ]
text2text-generation
false
masakhane
null
masakhane/byt5_fr_bam_news
1
null
transformers
31,165
--- license: afl-3.0 ---
gemasphi/laprador-query-encoder
a6f20bd4d1426059939be467bd9520d4288d0201
2022-04-09T18:28:12.000Z
[ "pytorch", "bert", "feature-extraction", "sentence-transformers", "sentence-similarity", "transformers" ]
sentence-similarity
false
gemasphi
null
gemasphi/laprador-query-encoder
1
null
sentence-transformers
31,166
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 64, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
masakhane/m2m100_418M_bam_fr_rel_news_ft
bc982be439d141352a317b7e672df752a61f4486
2022-04-11T15:12:35.000Z
[ "pytorch", "m2m_100", "text2text-generation", "transformers", "license:afl-3.0", "autotrain_compatible" ]
text2text-generation
false
masakhane
null
masakhane/m2m100_418M_bam_fr_rel_news_ft
1
null
transformers
31,167
--- license: afl-3.0 ---
masakhane/m2m100_418M_fr_bam_news
96eca872cd0d4750fbe83908c4c8f27dd5197d73
2022-04-11T14:31:00.000Z
[ "pytorch", "m2m_100", "text2text-generation", "transformers", "license:afl-3.0", "autotrain_compatible" ]
text2text-generation
false
masakhane
null
masakhane/m2m100_418M_fr_bam_news
1
null
transformers
31,168
--- license: afl-3.0 ---
masakhane/m2m100_418M_bam_fr_news
28666b71f8480b1819acdfdd9ecbbce7992407b2
2022-04-11T14:30:55.000Z
[ "pytorch", "m2m_100", "text2text-generation", "transformers", "license:afl-3.0", "autotrain_compatible" ]
text2text-generation
false
masakhane
null
masakhane/m2m100_418M_bam_fr_news
1
null
transformers
31,169
--- license: afl-3.0 ---
masakhane/m2m100_418M_fr_bam_rel
56249c7552b4077af74741bb810f042cf2294179
2022-04-11T15:21:09.000Z
[ "pytorch", "m2m_100", "text2text-generation", "transformers", "license:afl-3.0", "autotrain_compatible" ]
text2text-generation
false
masakhane
null
masakhane/m2m100_418M_fr_bam_rel
1
null
transformers
31,170
--- license: afl-3.0 ---
masakhane/m2m100_418M_bam_fr_rel_ft
e57c20fb6673fa66327d194ab1d16f41c6948452
2022-04-11T16:34:16.000Z
[ "pytorch", "m2m_100", "text2text-generation", "transformers", "license:afl-3.0", "autotrain_compatible" ]
text2text-generation
false
masakhane
null
masakhane/m2m100_418M_bam_fr_rel_ft
1
null
transformers
31,171
--- license: afl-3.0 ---
masakhane/mbart50_fr_bam_news
97ed4b3c7233b63a53f9c50f75ac1e28b179bc81
2022-04-11T14:22:35.000Z
[ "pytorch", "mbart", "text2text-generation", "transformers", "license:afl-3.0", "autotrain_compatible" ]
text2text-generation
false
masakhane
null
masakhane/mbart50_fr_bam_news
1
null
transformers
31,172
--- license: afl-3.0 ---
masakhane/mt5_bam_fr_news
35329817cb58a179ac855c72d0a81afcb29a92f1
2022-04-11T13:53:53.000Z
[ "pytorch", "mt5", "text2text-generation", "transformers", "license:afl-3.0", "autotrain_compatible" ]
text2text-generation
false
masakhane
null
masakhane/mt5_bam_fr_news
1
null
transformers
31,173
--- license: afl-3.0 ---
Chikashi/t5-small-finetuned-wikihow_3epoch_b4_lr3e-5
2b5b759528e7769d1f5c42aa4b16f0dd764bc746
2022-04-10T23:42:14.000Z
[ "pytorch", "tensorboard", "t5", "text2text-generation", "dataset:wikihow", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
text2text-generation
false
Chikashi
null
Chikashi/t5-small-finetuned-wikihow_3epoch_b4_lr3e-5
1
null
transformers
31,174
--- license: apache-2.0 tags: - generated_from_trainer datasets: - wikihow metrics: - rouge model-index: - name: t5-small-finetuned-wikihow_3epoch_b4_lr3e-5 results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: wikihow type: wikihow args: all metrics: - name: Rouge1 type: rouge value: 26.1071 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-wikihow_3epoch_b4_lr3e-5 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wikihow dataset. It achieves the following results on the evaluation set: - Loss: 2.4351 - Rouge1: 26.1071 - Rouge2: 9.3627 - Rougel: 22.0825 - Rougelsum: 25.4514 - Gen Len: 18.474 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:------:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 2.9216 | 0.13 | 5000 | 2.6385 | 23.8039 | 7.8863 | 20.0109 | 23.0802 | 18.3481 | | 2.8158 | 0.25 | 10000 | 2.5884 | 24.2567 | 8.2003 | 20.438 | 23.5325 | 18.3833 | | 2.7743 | 0.38 | 15000 | 2.5623 | 24.8471 | 8.3768 | 20.8711 | 24.1114 | 18.2901 | | 2.7598 | 0.51 | 20000 | 2.5368 | 25.1566 | 8.6721 | 21.1896 | 24.4558 | 18.3561 | | 2.7192 | 0.64 | 25000 | 2.5220 | 25.3477 | 8.8106 | 21.3799 | 24.6742 | 18.3108 | | 2.7207 | 0.76 | 30000 | 2.5114 | 25.5912 | 8.998 | 21.5508 | 24.9344 | 18.3445 | | 2.7041 | 0.89 | 35000 | 2.4993 | 25.457 | 8.8644 | 21.4516 | 24.7965 | 18.4354 | | 2.687 | 1.02 | 40000 | 2.4879 | 25.5886 | 8.9766 | 21.6794 | 24.9512 | 18.4035 | | 2.6652 | 1.14 | 45000 | 2.4848 | 25.7367 | 9.078 | 21.7096 | 25.0924 | 18.4328 | | 2.6536 | 1.27 | 50000 | 2.4761 | 25.7368 | 9.1609 | 21.729 | 25.0866 | 18.3117 | | 2.6589 | 1.4 | 55000 | 2.4702 | 25.7738 | 9.1413 | 21.7492 | 25.114 | 18.4862 | | 2.6384 | 1.53 | 60000 | 2.4620 | 25.7433 | 9.1356 | 21.8198 | 25.0896 | 18.489 | | 2.6337 | 1.65 | 65000 | 2.4595 | 26.0919 | 9.2605 | 21.9447 | 25.4065 | 18.4083 | | 2.6375 | 1.78 | 70000 | 2.4557 | 26.0912 | 9.3469 | 22.0182 | 25.4428 | 18.4133 | | 2.6441 | 1.91 | 75000 | 2.4502 | 26.1366 | 9.3143 | 22.058 | 25.4673 | 18.4972 | | 2.6276 | 2.03 | 80000 | 2.4478 | 25.9929 | 9.2464 | 21.9271 | 25.3263 | 18.469 | | 2.6062 | 2.16 | 85000 | 2.4467 | 26.0465 | 9.3166 | 22.0342 | 25.3998 | 18.3777 | | 2.6126 | 2.29 | 90000 | 2.4407 | 26.1953 | 9.3848 | 22.1148 | 25.5161 | 18.467 | | 2.6182 | 2.42 | 95000 | 2.4397 | 26.1331 | 9.3626 | 22.1076 | 25.4627 | 18.4413 | | 2.6041 | 2.54 | 100000 | 2.4375 | 26.1301 | 9.3567 | 22.0869 | 25.465 | 18.4929 | | 2.5996 | 2.67 | 105000 | 2.4367 | 26.0956 | 9.3314 | 22.063 | 25.4242 | 18.5074 | | 2.6144 | 2.8 | 110000 | 2.4355 | 26.1764 | 9.4157 | 22.1231 | 25.5175 | 18.4729 | | 2.608 | 2.93 | 115000 | 2.4351 | 26.1071 | 9.3627 | 22.0825 | 25.4514 | 18.474 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
cbgbcbcg/DialoGPT-small-joshua
d09854cee8296ee7800f7c93822b796deb5bc30e
2022-04-10T01:42:29.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
cbgbcbcg
null
cbgbcbcg/DialoGPT-small-joshua
1
null
transformers
31,175
Test
ivalig94/Robertweet-large
31161fe9bc995e9a399b5e565e5baf6c65f3ee35
2022-05-05T19:47:05.000Z
[ "pytorch", "roberta", "transformers", "license:afl-3.0" ]
null
false
ivalig94
null
ivalig94/Robertweet-large
1
null
transformers
31,176
--- license: afl-3.0 --- from transformers import AutoTokenizer, ROBERTAClassifier tokenizer = AutoTokenizer.from_pretrained("ivalig94/Robertweet-large") model = ROBERTAClassifier.from_pretrained("ivalig94/Robertweet-large")
Wizounovziki/t5-base-devices-sum-ver2
d2408c2341b8ba37b46f2804efb498570795a0f8
2022-04-10T02:32:23.000Z
[ "pytorch", "tensorboard", "t5", "text2text-generation", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
text2text-generation
false
Wizounovziki
null
Wizounovziki/t5-base-devices-sum-ver2
1
null
transformers
31,177
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: t5-base-devices-sum-ver2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-base-devices-sum-ver2 This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1919 - Rouge1: 95.2959 - Rouge2: 72.5788 - Rougel: 95.292 - Rougelsum: 95.3437 - Gen Len: 4.5992 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 91 | 0.4308 | 87.5009 | 61.4165 | 87.6082 | 87.6628 | 4.3897 | | No log | 2.0 | 182 | 0.2945 | 91.7111 | 66.9023 | 91.706 | 91.7348 | 4.4965 | | No log | 3.0 | 273 | 0.2515 | 93.0416 | 68.8046 | 93.063 | 93.0907 | 4.516 | | No log | 4.0 | 364 | 0.2259 | 94.2097 | 70.862 | 94.2438 | 94.2767 | 4.6283 | | No log | 5.0 | 455 | 0.2148 | 94.7732 | 71.4693 | 94.78 | 94.8274 | 4.5936 | | 0.4603 | 6.0 | 546 | 0.2030 | 95.0207 | 71.7789 | 95.0212 | 95.0887 | 4.5798 | | 0.4603 | 7.0 | 637 | 0.1964 | 95.1482 | 72.3333 | 95.1651 | 95.202 | 4.6227 | | 0.4603 | 8.0 | 728 | 0.1929 | 95.3279 | 72.551 | 95.3459 | 95.3972 | 4.5825 | | 0.4603 | 9.0 | 819 | 0.1935 | 95.2413 | 72.5801 | 95.2372 | 95.3121 | 4.5992 | | 0.4603 | 10.0 | 910 | 0.1919 | 95.2959 | 72.5788 | 95.292 | 95.3437 | 4.5992 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
Splend1dchan/t5-small-squad
9abfef429f6010755bfc512399e03f2d4549c2a3
2022-04-10T07:14:00.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
Splend1dchan
null
Splend1dchan/t5-small-squad
1
null
transformers
31,178
Entry not found
V3RX2000/xlm-roberta-base-finetuned-panx-de
3efb3aacfb4bf59870a2c83591537b7f5ed3d02f
2022-04-10T15:13:04.000Z
[ "pytorch", "tensorboard", "xlm-roberta", "token-classification", "dataset:xtreme", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
V3RX2000
null
V3RX2000/xlm-roberta-base-finetuned-panx-de
1
null
transformers
31,179
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8590909090909091 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1380 - F1: 0.8591 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2642 | 1.0 | 525 | 0.1624 | 0.8251 | | 0.1315 | 2.0 | 1050 | 0.1445 | 0.8508 | | 0.0832 | 3.0 | 1575 | 0.1380 | 0.8591 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
linyi/chirowm
f4c45b547a26b70ff294fd2085658e55e8bd87a4
2022-04-11T03:56:49.000Z
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
linyi
null
linyi/chirowm
1
null
transformers
31,180
Entry not found
krinal214/bert-all-squad_ben_tel_context
64cbb44696f7f6dd111f95cd6cc6dff63bc937e3
2022-04-10T15:06:18.000Z
[ "pytorch", "tensorboard", "bert", "question-answering", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
question-answering
false
krinal214
null
krinal214/bert-all-squad_ben_tel_context
1
null
transformers
31,181
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-all-squad_ben_tel_context results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-all-squad_ben_tel_context This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5393 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.996 | 1.0 | 12676 | 0.5393 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.9.1 - Datasets 1.18.4 - Tokenizers 0.11.6
V3RX2000/xlm-roberta-base-finetuned-panx-de-fr
33eab6b1472fff3584f12bbad5ff6d7302f958fc
2022-04-10T15:31:08.000Z
[ "pytorch", "xlm-roberta", "token-classification", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
V3RX2000
null
V3RX2000/xlm-roberta-base-finetuned-panx-de-fr
1
null
transformers
31,182
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1667 - F1: 0.8582 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2885 | 1.0 | 715 | 0.1817 | 0.8287 | | 0.1497 | 2.0 | 1430 | 0.1618 | 0.8442 | | 0.0944 | 3.0 | 2145 | 0.1667 | 0.8582 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
V3RX2000/xlm-roberta-base-finetuned-panx-it
ed2c1ea485348a0265dfed973f135be7b2f9f3b8
2022-04-10T15:46:48.000Z
[ "pytorch", "xlm-roberta", "token-classification", "dataset:xtreme", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
V3RX2000
null
V3RX2000/xlm-roberta-base-finetuned-panx-it
1
null
transformers
31,183
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-it results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.it metrics: - name: F1 type: f1 value: 0.822805578342904 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-it This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2323 - F1: 0.8228 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.8126 | 1.0 | 70 | 0.3361 | 0.7231 | | 0.2995 | 2.0 | 140 | 0.2526 | 0.8079 | | 0.1865 | 3.0 | 210 | 0.2323 | 0.8228 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
V3RX2000/xlm-roberta-base-finetuned-panx-en
df16f72245d69c537487e494cbbc475edfe70e0e
2022-04-10T15:53:36.000Z
[ "pytorch", "xlm-roberta", "token-classification", "dataset:xtreme", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
V3RX2000
null
V3RX2000/xlm-roberta-base-finetuned-panx-en
1
null
transformers
31,184
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-en results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.en metrics: - name: F1 type: f1 value: 0.7075365579302588 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.3925 - F1: 0.7075 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1493 | 1.0 | 50 | 0.5884 | 0.4748 | | 0.5135 | 2.0 | 100 | 0.4088 | 0.6623 | | 0.3558 | 3.0 | 150 | 0.3925 | 0.7075 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
danhsf/xlm-roberta-base-finetuned-panx-de-fr
0d554c12dfbad4306898f0bd1bae092841918dbe
2022-04-10T18:21:26.000Z
[ "pytorch", "xlm-roberta", "token-classification", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
danhsf
null
danhsf/xlm-roberta-base-finetuned-panx-de-fr
1
null
transformers
31,185
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1667 - F1: 0.8582 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2885 | 1.0 | 715 | 0.1817 | 0.8287 | | 0.1497 | 2.0 | 1430 | 0.1618 | 0.8442 | | 0.0944 | 3.0 | 2145 | 0.1667 | 0.8582 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
tonyalves/local_dataset
2ac3dcd83b26dbac0c4d94a9e99d984a8a1bcaa9
2022-04-10T22:23:55.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "pt", "transformers", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
tonyalves
null
tonyalves/local_dataset
1
null
transformers
31,186
--- language: - pt license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_8_0 - generated_from_trainer model-index: - name: local_dataset results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # local_dataset This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - PT dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 50.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.17.0 - Pytorch 1.9.1+cu102 - Datasets 2.0.0 - Tokenizers 0.11.6
Chikashi/t5-small-finetuned-wikihow_3epoch_b8_lr3e-3
d0f70b4b9a08c69a23626df6f2b87f163843f6ff
2022-04-11T08:17:07.000Z
[ "pytorch", "tensorboard", "t5", "text2text-generation", "dataset:wikihow", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
text2text-generation
false
Chikashi
null
Chikashi/t5-small-finetuned-wikihow_3epoch_b8_lr3e-3
1
null
transformers
31,187
--- license: apache-2.0 tags: - generated_from_trainer datasets: - wikihow metrics: - rouge model-index: - name: t5-small-finetuned-wikihow_3epoch_b8_lr3e-3 results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: wikihow type: wikihow args: all metrics: - name: Rouge1 type: rouge value: 27.1711 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-wikihow_3epoch_b8_lr3e-3 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wikihow dataset. It achieves the following results on the evaluation set: - Loss: 2.3163 - Rouge1: 27.1711 - Rouge2: 10.6296 - Rougel: 23.206 - Rougelsum: 26.4801 - Gen Len: 18.5433 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 3.0734 | 0.25 | 5000 | 2.7884 | 22.4825 | 7.2492 | 19.243 | 21.9167 | 18.0616 | | 2.9201 | 0.51 | 10000 | 2.7089 | 24.0869 | 8.0348 | 20.4814 | 23.4541 | 18.5994 | | 2.8403 | 0.76 | 15000 | 2.6390 | 24.62 | 8.3776 | 20.8736 | 23.9784 | 18.4676 | | 2.7764 | 1.02 | 20000 | 2.5943 | 24.1504 | 8.3933 | 20.8271 | 23.5382 | 18.4078 | | 2.6641 | 1.27 | 25000 | 2.5428 | 25.6574 | 9.2371 | 21.8576 | 24.9558 | 18.4249 | | 2.6369 | 1.53 | 30000 | 2.5042 | 25.5208 | 9.254 | 21.6673 | 24.8589 | 18.6467 | | 2.6 | 1.78 | 35000 | 2.4637 | 26.094 | 9.7003 | 22.3097 | 25.4695 | 18.5065 | | 2.5562 | 2.03 | 40000 | 2.4285 | 26.5374 | 9.9222 | 22.5291 | 25.8836 | 18.5553 | | 2.4322 | 2.29 | 45000 | 2.3858 | 26.939 | 10.3555 | 23.0211 | 26.2834 | 18.5614 | | 2.4106 | 2.54 | 50000 | 2.3537 | 26.7423 | 10.2816 | 22.7986 | 26.083 | 18.5792 | | 2.3731 | 2.8 | 55000 | 2.3163 | 27.1711 | 10.6296 | 23.206 | 26.4801 | 18.5433 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
FabsCool/autotrain-T5Base1_1-728922203
713a20a79f05a412221db9a629dae712f031d5cf
2022-04-11T10:31:58.000Z
[ "pytorch", "t5", "text2text-generation", "unk", "dataset:FabsCool/autotrain-data-T5Base1_1", "transformers", "autotrain", "co2_eq_emissions", "autotrain_compatible" ]
text2text-generation
false
FabsCool
null
FabsCool/autotrain-T5Base1_1-728922203
1
null
transformers
31,188
--- tags: autotrain language: unk widget: - text: "I love AutoTrain 🤗" datasets: - FabsCool/autotrain-data-T5Base1_1 co2_eq_emissions: 583.728921803621 --- # Model Trained Using AutoTrain - Problem type: Summarization - Model ID: 728922203 - CO2 Emissions (in grams): 583.728921803621 ## Validation Metrics - Loss: 1.2922444343566895 - Rouge1: 54.3928 - Rouge2: 31.666 - RougeL: 50.3552 - RougeLsum: 50.3694 - Gen Len: 13.3425 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/FabsCool/autotrain-T5Base1_1-728922203 ```
Yingda/dummy-model
2fa5a10f8b17e971332163773a303ec56e3b9d2c
2022-04-11T07:10:47.000Z
[ "pytorch", "camembert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
Yingda
null
Yingda/dummy-model
1
null
transformers
31,189
Entry not found
benjaminbeilharz/baseline
d33d31d14b9e81a6cbb678647def0817251bb696
2022-04-11T08:23:16.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
benjaminbeilharz
null
benjaminbeilharz/baseline
1
null
transformers
31,190
Entry not found
Chikashi/t5-small-finetuned-wikihow_3epoch_b8_lr3e-4
3701d4cee6882a06af0d40a125d69b8d9360f82d
2022-04-11T17:20:49.000Z
[ "pytorch", "tensorboard", "t5", "text2text-generation", "dataset:wikihow", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
text2text-generation
false
Chikashi
null
Chikashi/t5-small-finetuned-wikihow_3epoch_b8_lr3e-4
1
null
transformers
31,191
--- license: apache-2.0 tags: - generated_from_trainer datasets: - wikihow metrics: - rouge model-index: - name: t5-small-finetuned-wikihow_3epoch_b8_lr3e-4 results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: wikihow type: wikihow args: all metrics: - name: Rouge1 type: rouge value: 27.3718 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-wikihow_3epoch_b8_lr3e-4 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wikihow dataset. It achieves the following results on the evaluation set: - Loss: 2.3136 - Rouge1: 27.3718 - Rouge2: 10.6235 - Rougel: 23.3396 - Rougelsum: 26.6889 - Gen Len: 18.5194 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 2.8029 | 0.25 | 5000 | 2.5368 | 25.2267 | 8.9048 | 21.2588 | 24.5804 | 18.4303 | | 2.6924 | 0.51 | 10000 | 2.4725 | 25.6553 | 9.1904 | 21.7633 | 24.9807 | 18.5549 | | 2.6369 | 0.76 | 15000 | 2.4332 | 26.2895 | 9.7203 | 22.3286 | 25.6009 | 18.4185 | | 2.5994 | 1.02 | 20000 | 2.4051 | 26.1779 | 9.5708 | 22.3531 | 25.5357 | 18.561 | | 2.521 | 1.27 | 25000 | 2.3805 | 26.7558 | 10.0411 | 22.7252 | 26.0476 | 18.304 | | 2.5091 | 1.53 | 30000 | 2.3625 | 26.6439 | 10.0698 | 22.6662 | 25.9537 | 18.5437 | | 2.4941 | 1.78 | 35000 | 2.3498 | 26.9322 | 10.2817 | 23.0002 | 26.2604 | 18.4953 | | 2.4848 | 2.03 | 40000 | 2.3424 | 27.0381 | 10.3452 | 22.9749 | 26.3407 | 18.5749 | | 2.4268 | 2.29 | 45000 | 2.3272 | 27.2386 | 10.4595 | 23.1866 | 26.5541 | 18.4954 | | 2.4263 | 2.54 | 50000 | 2.3226 | 27.1489 | 10.532 | 23.1428 | 26.4657 | 18.5583 | | 2.4161 | 2.8 | 55000 | 2.3136 | 27.3718 | 10.6235 | 23.3396 | 26.6889 | 18.5194 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
Danastos/qacombination_bert_el
8c42a4474e69452a67f9dbf72b8bfc0ba0466be2
2022-04-11T17:55:52.000Z
[ "pytorch", "tensorboard", "bert", "question-answering", "dataset:Danastos/qacombination_el_custom", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
question-answering
false
Danastos
null
Danastos/qacombination_bert_el
1
null
transformers
31,192
---
tags:
- generated_from_trainer
datasets:
- Danastos/qacombination_el_custom
model-index:
- name: qacombination_bert_el
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# qacombination_bert_el

This model is a fine-tuned version of [nlpaueb/bert-base-greek-uncased-v1](https://huggingface.co/nlpaueb/bert-base-greek-uncased-v1) on the Danastos/qacombination_el_custom dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

### Framework versions

- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
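Illustrative usage sketch (not part of the original card): a Greek extractive-QA checkpoint like this one can be run through the `question-answering` pipeline. The question and context below are invented for demonstration and do not come from the Danastos/qacombination_el_custom dataset.

```python
# Illustrative sketch for Danastos/qacombination_bert_el; the Greek question and context
# are made-up examples, not taken from the training dataset.
from transformers import pipeline

qa = pipeline("question-answering", model="Danastos/qacombination_bert_el")

result = qa(
    question="Ποια είναι η πρωτεύουσα της Ελλάδας;",   # "What is the capital of Greece?"
    context="Η Αθήνα είναι η πρωτεύουσα της Ελλάδας.",  # "Athens is the capital of Greece."
)
print(result["answer"], result["score"])  # predicted answer span and its confidence score
```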
theojolliffe/opus-mt-en-ro-finetuned-en-to-ro
d4a8c3a177d57a80c48a0b41bb3e672d8f0448d6
2022-04-11T18:45:14.000Z
[ "pytorch", "tensorboard", "marian", "text2text-generation", "dataset:wmt16", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
text2text-generation
false
theojolliffe
null
theojolliffe/opus-mt-en-ro-finetuned-en-to-ro
1
null
transformers
31,193
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: opus-mt-en-ro-finetuned-en-to-ro
  results:
  - task:
      name: Sequence-to-sequence Language Modeling
      type: text2text-generation
    dataset:
      name: wmt16
      type: wmt16
      args: ro-en
    metrics:
    - name: Bleu
      type: bleu
      value: 27.9273
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# opus-mt-en-ro-finetuned-en-to-ro

This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2915
- Bleu: 27.9273
- Gen Len: 34.0935

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.7448 | 1.0 | 38145 | 1.2915 | 27.9273 | 34.0935 |

### Framework versions

- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
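Illustrative usage sketch (not part of the original card): as a Marian-based seq2seq model, this checkpoint can be driven with the plain tokenize-generate-decode path shown below. The sample English sentence is invented; everything else follows standard `transformers` seq2seq usage.

```python
# Minimal sketch (not from the card) for theojolliffe/opus-mt-en-ro-finetuned-en-to-ro.
# The input sentence is an invented example; translation quality is not asserted here.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "theojolliffe/opus-mt-en-ro-finetuned-en-to-ro"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

batch = tokenizer(["The weather is nice today."], return_tensors="pt", padding=True)
generated = model.generate(**batch, max_length=64)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))  # Romanian output
```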
irenelizihui/MarianMT_UFAL_en_fr
ddcb6d1fa27a995d64d1e52d0b8726138a926d2f
2022-04-11T23:03:52.000Z
[ "pytorch", "marian", "text2text-generation", "transformers", "license:other", "autotrain_compatible" ]
text2text-generation
false
irenelizihui
null
irenelizihui/MarianMT_UFAL_en_fr
1
1
transformers
31,194
---
license: other
---

UFAL English to French Machine Translation Model based on MarianMT model.
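Illustrative usage sketch (not part of the original card): one plausible way to run this English-to-French MarianMT checkpoint is through the generic `translation` pipeline, as sketched below. The task choice and the sample sentence are assumptions based on standard MarianMT usage, not details confirmed by the card.

```python
# Hypothetical usage sketch for irenelizihui/MarianMT_UFAL_en_fr; the pipeline task and
# example sentence are assumptions, not taken from the card.
from transformers import pipeline

translator = pipeline("translation", model="irenelizihui/MarianMT_UFAL_en_fr")
print(translator("The patient was discharged after two days.", max_length=64))
```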
mT0/mt0_xl_default_mixture_ckpt_1012500
e54eda54cc373ffa15b83bf7e823b5b2ee2c9216
2022-04-11T19:43:52.000Z
[ "pytorch", "mt5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
mT0
null
mT0/mt0_xl_default_mixture_ckpt_1012500
1
null
transformers
31,195
Entry not found
tonyalves/ft-pt-br-local-2
c0fa039872c4eb110b7a5f1c1a3a2ef7ed18d69a
2022-04-11T20:57:03.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
tonyalves
null
tonyalves/ft-pt-br-local-2
1
null
transformers
31,196
---
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
model-index:
- name: ft-pt-br-local-2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# ft-pt-br-local-2

This model is a fine-tuned version of [tonyalves/output](https://huggingface.co/tonyalves/output) on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 100

### Training results

### Framework versions

- Transformers 4.18.0
- Pytorch 1.9.1+cu102
- Datasets 1.18.4
- Tokenizers 0.11.6
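Illustrative usage sketch (not part of the original card): this is a wav2vec2 CTC fine-tune, so the `automatic-speech-recognition` pipeline is the usual entry point. The audio file name below is a placeholder, and 16 kHz mono input is assumed, as is typical for wav2vec2 models.

```python
# Illustrative sketch for tonyalves/ft-pt-br-local-2 (wav2vec2 CTC fine-tune).
# "sample_pt_br.wav" is a hypothetical local file; 16 kHz mono audio is assumed.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="tonyalves/ft-pt-br-local-2")
transcription = asr("sample_pt_br.wav")
print(transcription["text"])  # recognized Brazilian Portuguese text
```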
BigSalmon/MediumInformalToFormalLincoln3
552bea2c447a6c03237c6503a4630bf6070359d4
2022-04-11T20:58:29.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
BigSalmon
null
BigSalmon/MediumInformalToFormalLincoln3
1
null
transformers
31,197
``` from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("BigSalmon/MediumInformalToFormalLincoln3") model = AutoModelForCausalLM.from_pretrained("BigSalmon/MediumInformalToFormalLincoln3") ``` ``` - moviepass to return - this summer - swooped up by - original co-founder stacy spikes text: the re-launch of moviepass is set to transpire this summer, ( rescued at the hands of / under the stewardship of / spearheaded by ) its founding father, stacy spikes. *** - middle schools do not have recess - should get back to doing it - amazing for communication - and getting kids to move around text: a casualty of the education reform craze, recess has been excised from middle schools. this is tragic, for it is instrumental in honing children's communication skills and encouraging physical activity. *** - ``` ``` How To Make Prompt: informal english: i am very ready to do that just that. Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end. Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task. *** informal english: space is huge and needs to be explored. Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless. Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration. *** informal english: corn fields are all across illinois, visible once you leave chicago. Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago. informal english: ``` ``` infill: chrome extensions [MASK] accomplish everyday tasks. Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks. infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. infill: ``` ``` Essay Intro (Warriors vs. Rockets in Game 7): text: eagerly anticipated by fans, game 7's are the highlight of the post-season. text: ever-building in suspense, game 7's have the crowd captivated. *** Essay Intro (South Korean TV Is Becoming Popular): text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ). text: increasingly held in critical esteem, south korean television continues to impress. text: at the forefront of quality content, south korea is quickly achieving celebrity status. *** Essay Intro ( ``` ``` Search: What is the definition of Checks and Balances? https://en.wikipedia.org/wiki/Checks_and_balances Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate. 
https://www.harvard.edu/glossary/Checks_and_Balances Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power https://www.law.cornell.edu/library/constitution/Checks_and_Balances Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power. *** Search: What is the definition of Separation of Powers? https://en.wikipedia.org/wiki/Separation_of_powers The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power. https://www.yale.edu/tcf/Separation_of_Powers.html Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined. *** Search: What is the definition of Connection of Powers? https://en.wikipedia.org/wiki/Connection_of_powers Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches. https://simple.wikipedia.org/wiki/Connection_of_powers The term Connection of Powers describes a system of government in which there is overlap between different parts of the government. *** Search: What is the definition of ``` ``` Search: What are phrase synonyms for "second-guess"? https://www.powerthesaurus.org/second-guess/synonyms Shortest to Longest: - feel dubious about - raise an eyebrow at - wrinkle their noses at - cast a jaundiced eye at - teeter on the fence about *** Search: What are phrase synonyms for "mean to newbies"? https://www.powerthesaurus.org/mean_to_newbies/synonyms Shortest to Longest: - readiness to balk at rookies - absence of tolerance for novices - hostile attitude toward newcomers *** Search: What are phrase synonyms for "make use of"? https://www.powerthesaurus.org/make_use_of/synonyms Shortest to Longest: - call upon - glean value from - reap benefits from - derive utility from - seize on the merits of - draw on the strength of - tap into the potential of *** Search: What are phrase synonyms for "hurting itself"? https://www.powerthesaurus.org/hurting_itself/synonyms Shortest to Longest: - erring - slighting itself - forfeiting its integrity - doing itself a disservice - evincing a lack of backbone *** Search: What are phrase synonyms for " ``` ``` - declining viewership facing the nba. - does not have to be this way. - in fact, many solutions exist. - the four point line would surely draw in eyes. text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership. *** - ``` ``` original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick. infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick. 
*** original: ``` ``` wordy: classical music is becoming less popular more and more. Translate into Concise Text: interest in classic music is fading. *** wordy: ``` ``` sweet: savvy voters ousted him. longer: voters who were informed delivered his defeat. *** sweet: ``` ``` 1: commercial space company spacex plans to launch a whopping 52 flights in 2022. 2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022. 3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights. 4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company. 5: a commercial space company, spacex aims to conduct 52 flights in 2022. *** 1: ``` ``` from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln35") model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln35") ``` ``` - moviepass to return - this summer - swooped up by - original co-founder stacy spikes text: the re-launch of moviepass is set to transpire this summer, ( rescued at the hands of / under the stewardship of / spearheaded by ) its founding father, stacy spikes. *** - middle schools do not have recess - should get back to doing it - amazing for communication - and getting kids to move around text: a casualty of the education reform craze, recess has been excised from middle schools. this is tragic, for it is instrumental in honing children's communication skills and encouraging physical activity. *** - ``` ``` How To Make Prompt: informal english: i am very ready to do that just that. Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end. Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task. *** informal english: space is huge and needs to be explored. Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless. Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration. *** informal english: corn fields are all across illinois, visible once you leave chicago. Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago. informal english: ``` ``` infill: chrome extensions [MASK] accomplish everyday tasks. Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks. infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. infill: ``` ``` Essay Intro (Warriors vs. Rockets in Game 7): text: eagerly anticipated by fans, game 7's are the highlight of the post-season. text: ever-building in suspense, game 7's have the crowd captivated. 
*** Essay Intro (South Korean TV Is Becoming Popular): text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ). text: increasingly held in critical esteem, south korean television continues to impress. text: at the forefront of quality content, south korea is quickly achieving celebrity status. *** Essay Intro ( ``` ``` Search: What is the definition of Checks and Balances? https://en.wikipedia.org/wiki/Checks_and_balances Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate. https://www.harvard.edu/glossary/Checks_and_Balances Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power https://www.law.cornell.edu/library/constitution/Checks_and_Balances Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power. *** Search: What is the definition of Separation of Powers? https://en.wikipedia.org/wiki/Separation_of_powers The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power. https://www.yale.edu/tcf/Separation_of_Powers.html Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined. *** Search: What is the definition of Connection of Powers? https://en.wikipedia.org/wiki/Connection_of_powers Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches. https://simple.wikipedia.org/wiki/Connection_of_powers The term Connection of Powers describes a system of government in which there is overlap between different parts of the government. *** Search: What is the definition of ``` ``` Search: What are phrase synonyms for "second-guess"? https://www.powerthesaurus.org/second-guess/synonyms Shortest to Longest: - feel dubious about - raise an eyebrow at - wrinkle their noses at - cast a jaundiced eye at - teeter on the fence about *** Search: What are phrase synonyms for "mean to newbies"? https://www.powerthesaurus.org/mean_to_newbies/synonyms Shortest to Longest: - readiness to balk at rookies - absence of tolerance for novices - hostile attitude toward newcomers *** Search: What are phrase synonyms for "make use of"? https://www.powerthesaurus.org/make_use_of/synonyms Shortest to Longest: - call upon - glean value from - reap benefits from - derive utility from - seize on the merits of - draw on the strength of - tap into the potential of *** Search: What are phrase synonyms for "hurting itself"? https://www.powerthesaurus.org/hurting_itself/synonyms Shortest to Longest: - erring - slighting itself - forfeiting its integrity - doing itself a disservice - evincing a lack of backbone *** Search: What are phrase synonyms for " ``` ``` - declining viewership facing the nba. - does not have to be this way. - in fact, many solutions exist. - the four point line would surely draw in eyes. 
text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership. *** - ``` ``` original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick. infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick. *** original: ``` ``` wordy: classical music is becoming less popular more and more. Translate into Concise Text: interest in classic music is fading. *** wordy: ``` ``` sweet: savvy voters ousted him. longer: voters who were informed delivered his defeat. *** sweet: ``` ``` 1: commercial space company spacex plans to launch a whopping 52 flights in 2022. 2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022. 3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights. 4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company. 5: a commercial space company, spacex aims to conduct 52 flights in 2022. *** 1: ``` (makes two sentences, one sentence) (probably will not work all that well) ``` initial: phone books used to be everywhere. they have been replaced by the internet. combined: once ubiquitous, phone books have been supplanted by the internet. *** initial: ``` ``` what are the drawbacks of living near an airbnb? □ noise □ parking □ traffic □ security □ strangers *** ``` Keywords to sentences or sentence.
dapang/gpt2-medium
611e9cd93f135ff673a5d35d5fe121d4c2632cbf
2022-04-12T03:52:05.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
dapang
null
dapang/gpt2-medium
1
null
transformers
31,198
Entry not found
taile/xlm-roberta-large-finetuned-conll03-english
4d1f95e7eec1065bc04e53f1bd0b08ff6c291c56
2022-04-12T03:58:03.000Z
[ "pytorch", "rust", "xlm-roberta", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
taile
null
taile/xlm-roberta-large-finetuned-conll03-english
1
null
transformers
31,199
Entry not found