| Column | Type | Range / values |
|:--|:--|:--|
| `modelId` | stringlengths | 4–112 |
| `sha` | stringlengths | 40–40 |
| `lastModified` | stringlengths | 24–24 |
| `tags` | list | |
| `pipeline_tag` | stringclasses | 29 values |
| `private` | bool | 1 class |
| `author` | stringlengths | 2–38 |
| `config` | null | |
| `id` | stringlengths | 4–112 |
| `downloads` | float64 | 0–36.8M |
| `likes` | float64 | 0–712 |
| `library_name` | stringclasses | 17 values |
| `__index_level_0__` | int64 | 0–38.5k |
| `readme` | stringlengths | 0–186k |
abhinav-kumar-thakur/distilbert-base-uncased-finetuned-mrpc
44523229fd50fb09c92c556a6ebaf3faa4b96654
2022-07-01T11:01:01.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:glue", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
abhinav-kumar-thakur
null
abhinav-kumar-thakur/distilbert-base-uncased-finetuned-mrpc
5
null
transformers
17,500
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-mrpc results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: mrpc metrics: - name: Accuracy type: accuracy value: 0.8578431372549019 - name: F1 type: f1 value: 0.9006849315068494 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-mrpc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.5556 - Accuracy: 0.8578 - F1: 0.9007 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 230 | 0.3937 | 0.8113 | 0.8670 | | No log | 2.0 | 460 | 0.3660 | 0.8480 | 0.8967 | | 0.4387 | 3.0 | 690 | 0.4298 | 0.8529 | 0.8973 | | 0.4387 | 4.0 | 920 | 0.5573 | 0.8529 | 0.8990 | | 0.1832 | 5.0 | 1150 | 0.5556 | 0.8578 | 0.9007 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
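The card above reports MRPC metrics and hyperparameters but includes no inference code. Below is a minimal sketch of sentence-pair inference with the standard transformers API; the example sentences and the label-order comment are illustrative assumptions, not from the card.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "abhinav-kumar-thakur/distilbert-base-uncased-finetuned-mrpc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# MRPC is a sentence-pair task, so both sentences are encoded together.
inputs = tokenizer(
    "The company posted record profits this quarter.",
    "Quarterly profits at the company hit a record high.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # assumed GLUE/MRPC label order: [not_equivalent, equivalent]
```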
mousaazari/t5-test2sql
fed74e13719275d4315b8a298133a0b6286bc771
2022-07-01T12:14:46.000Z
[ "pytorch", "tensorboard", "t5", "text2text-generation", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
text2text-generation
false
mousaazari
null
mousaazari/t5-test2sql
5
null
transformers
17,501
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: t5-test2sql results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-test2sql This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1207 - Rouge2 Precision: 0.9214 - Rouge2 Recall: 0.4259 - Rouge2 Fmeasure: 0.5578 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure | |:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:| | No log | 1.0 | 11 | 2.7293 | 0.1012 | 0.0305 | 0.0453 | | No log | 2.0 | 22 | 1.9009 | 0.0937 | 0.0292 | 0.0427 | | No log | 3.0 | 33 | 1.3525 | 0.1002 | 0.0349 | 0.0502 | | No log | 4.0 | 44 | 0.8837 | 0.1462 | 0.0529 | 0.0744 | | No log | 5.0 | 55 | 0.6460 | 0.5546 | 0.2531 | 0.3371 | | No log | 6.0 | 66 | 0.5050 | 0.729 | 0.3571 | 0.4631 | | No log | 7.0 | 77 | 0.4239 | 0.6944 | 0.3048 | 0.4088 | | No log | 8.0 | 88 | 0.3799 | 0.7868 | 0.3674 | 0.4807 | | No log | 9.0 | 99 | 0.3405 | 0.7266 | 0.3126 | 0.4213 | | No log | 10.0 | 110 | 0.3055 | 0.8447 | 0.3876 | 0.5104 | | No log | 11.0 | 121 | 0.2741 | 0.8546 | 0.3955 | 0.5201 | | No log | 12.0 | 132 | 0.2605 | 0.8676 | 0.4049 | 0.5308 | | No log | 13.0 | 143 | 0.2446 | 0.8424 | 0.3814 | 0.5047 | | No log | 14.0 | 154 | 0.2287 | 0.8659 | 0.3945 | 0.5238 | | No log | 15.0 | 165 | 0.2209 | 0.9064 | 0.4273 | 0.556 | | No log | 16.0 | 176 | 0.1990 | 0.888 | 0.409 | 0.5383 | | No log | 17.0 | 187 | 0.1941 | 0.9118 | 0.4305 | 0.5602 | | No log | 18.0 | 198 | 0.1785 | 0.9118 | 0.4305 | 0.5602 | | No log | 19.0 | 209 | 0.1669 | 0.919 | 0.4324 | 0.5636 | | No log | 20.0 | 220 | 0.1749 | 0.9138 | 0.4289 | 0.5608 | | No log | 21.0 | 231 | 0.1598 | 0.9047 | 0.4248 | 0.556 | | No log | 22.0 | 242 | 0.1501 | 0.9098 | 0.4294 | 0.5596 | | No log | 23.0 | 253 | 0.1456 | 0.9138 | 0.4307 | 0.5618 | | No log | 24.0 | 264 | 0.1419 | 0.893 | 0.4185 | 0.5467 | | No log | 25.0 | 275 | 0.1359 | 0.9005 | 0.4212 | 0.55 | | No log | 26.0 | 286 | 0.1338 | 0.8979 | 0.4212 | 0.5494 | | No log | 27.0 | 297 | 0.1319 | 0.9005 | 0.4212 | 0.55 | | No log | 28.0 | 308 | 0.1325 | 0.9005 | 0.4212 | 0.55 | | No log | 29.0 | 319 | 0.1335 | 0.9093 | 0.4231 | 0.5529 | | No log | 30.0 | 330 | 0.1240 | 0.9093 | 0.4231 | 0.5529 | | No log | 31.0 | 341 | 0.1222 | 0.9053 | 0.4231 | 0.5527 | | No log | 32.0 | 352 | 0.1265 | 0.9214 | 0.4259 | 0.5578 | | No log | 33.0 | 363 | 0.1286 | 0.9214 | 0.4259 | 0.5578 | | No log | 34.0 | 374 | 0.1283 | 0.9214 | 0.4259 | 0.5578 | | No log | 35.0 | 385 | 0.1279 | 0.9214 | 0.4259 | 0.5578 | | No log | 36.0 | 396 | 0.1285 | 0.9214 | 0.4259 | 0.5578 | | No log | 37.0 | 407 | 0.1291 | 0.9093 | 0.4231 | 0.5529 | | No log | 38.0 | 418 | 0.1270 | 0.9093 | 0.4231 | 0.5529 | | No log | 39.0 | 429 | 0.1225 | 0.9093 | 0.4231 | 0.5529 | | No log | 40.0 | 440 | 0.1205 | 0.9093 | 0.4231 | 0.5529 | | No log | 41.0 | 451 | 0.1210 | 0.9093 | 0.4231 | 0.5529 | | No log | 42.0 | 462 | 0.1230 | 0.9093 | 0.4231 | 0.5529 | | No log | 43.0 | 473 | 0.1250 | 0.9093 | 0.4231 | 0.5529 | | No log | 44.0 | 484 | 0.1223 | 0.9214 | 0.4259 | 0.5578 | | No log | 45.0 | 495 | 0.1226 | 0.9214 | 0.4259 | 0.5578 | | 0.5006 | 46.0 | 506 | 0.1213 | 0.9214 | 0.4259 | 0.5578 | | 0.5006 | 47.0 | 517 | 0.1205 | 0.9214 | 0.4259 | 0.5578 | | 0.5006 | 48.0 | 528 | 0.1203 | 0.9214 | 0.4259 | 0.5578 | | 0.5006 | 49.0 | 539 | 0.1206 | 0.9214 | 0.4259 | 0.5578 | | 0.5006 | 50.0 | 550 | 0.1207 | 0.9214 | 0.4259 | 0.5578 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
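The card above reports ROUGE-2 scores but does not document the expected input format or show inference code. A hedged sketch with the generic seq2seq API follows; the question string and generation settings are placeholders, not documented behavior.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "mousaazari/t5-test2sql"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# The expected prompt format is undocumented; a plain natural-language
# question is used here purely as an illustration.
inputs = tokenizer("How many employees work in each department?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```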
dminiotas05/distilbert-base-uncased-finetuned-ft500_4
7e7259f8aea9fad06cc63707b0997a8ccff3ccf8
2022-07-01T12:20:28.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
dminiotas05
null
dminiotas05/distilbert-base-uncased-finetuned-ft500_4
5
null
transformers
17,502
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-ft500_4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ft500_4 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1118 - Accuracy: 0.4807 - F1: 0.4638 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 1.1931 | 1.0 | 188 | 1.1525 | 0.4513 | 0.4333 | | 1.0982 | 2.0 | 376 | 1.1118 | 0.4807 | 0.4638 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
annS/roberta-base-prop-16-train-set
270012863afe003498416bc22bce9a437f857050
2022-07-01T18:39:43.000Z
[ "pytorch", "tensorboard", "roberta", "text-classification", "transformers", "generated_from_trainer", "license:mit", "model-index" ]
text-classification
false
annS
null
annS/roberta-base-prop-16-train-set
5
null
transformers
17,503
--- license: mit tags: - generated_from_trainer model-index: - name: roberta-base-prop-16-train-set results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-prop-16-train-set This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
clevrly/xlnet-base-cased-finetuned-hotpot_qa
218644e4c5a4763b8689d1d15948ef09fb5b7a53
2022-07-01T19:47:44.000Z
[ "pytorch", "tensorboard", "xlnet", "question-answering", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
question-answering
false
clevrly
null
clevrly/xlnet-base-cased-finetuned-hotpot_qa
5
null
transformers
17,504
--- license: mit tags: - generated_from_trainer model-index: - name: xlnet-base-cased-finetuned-hotpot_qa results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlnet-base-cased-finetuned-hotpot_qa This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9574 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.027 | 1.0 | 923 | 1.0340 | | 0.8758 | 2.0 | 1846 | 0.9574 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
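The question-answering card above reports only losses. Below is a minimal extractive-QA sketch with the transformers pipeline; the question and context are invented examples.

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="clevrly/xlnet-base-cased-finetuned-hotpot_qa",
)
result = qa(
    question="When was the bridge completed?",
    context="The Sydney Harbour Bridge was completed in 1932.",
)
print(result["answer"], result["score"])  # extracted span plus confidence
```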
Sayan01/tiny-bert-mnli-mm-distilled
f732dc7647df1a12b1000d99d3c423973c463664
2022-07-02T14:44:37.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Sayan01
null
Sayan01/tiny-bert-mnli-mm-distilled
5
null
transformers
17,505
Entry not found
ghadeermobasher/BioRed-Dis-Modified-PubMedBERT-512-5
b4f3ab8b665e5a157924b18ff9db4e6ba95a438a
2022-07-04T10:03:46.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/BioRed-Dis-Modified-PubMedBERT-512-5
5
null
transformers
17,506
Entry not found
ghadeermobasher/BioRed-Chem-Modified-PubMedBERT-512-5
d59d91fe027b6eead72242629cb7ba32b1688aff
2022-07-04T10:03:54.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/BioRed-Chem-Modified-PubMedBERT-512-5
5
null
transformers
17,507
Entry not found
ghadeermobasher/BioRed-Dis-Original-PubMedBERT-512-5
69ffb1e67ec2c1eda2555c24b65be70d7f72d0a7
2022-07-04T10:13:51.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/BioRed-Dis-Original-PubMedBERT-512-5
5
null
transformers
17,508
Entry not found
ghadeermobasher/BioRed-Chem-Original-PubMedBERT-512-5
911d46f039c13d24edc6112f7e1447ca13cf62e9
2022-07-04T10:15:44.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/BioRed-Chem-Original-PubMedBERT-512-5
5
null
transformers
17,509
Entry not found
ghadeermobasher/BioRed-Chem-Modified-PubMedBERT-256-5
a8de116163b3185b342ae31ea08f916ec1d01cfe
2022-07-04T10:10:21.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/BioRed-Chem-Modified-PubMedBERT-256-5
5
null
transformers
17,510
Entry not found
ghadeermobasher/BioRed-Chem-Modified-PubMedBERT-256-13
156e568bcb1483ec5601096be68dadcc906377a8
2022-07-04T10:34:27.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/BioRed-Chem-Modified-PubMedBERT-256-13
5
null
transformers
17,511
Entry not found
ghadeermobasher/BioRed-Chem-Original-PubMedBERT-256-5
4ffd7cb689b23e2da81671ac611debb11b187aaa
2022-07-04T10:27:21.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/BioRed-Chem-Original-PubMedBERT-256-5
5
null
transformers
17,512
Entry not found
ghadeermobasher/BioRed-Chem-Original-PubMedBERT-256-13
825a084193f7b331fc8f07f77816580fcec71fd6
2022-07-04T10:41:42.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/BioRed-Chem-Original-PubMedBERT-256-13
5
null
transformers
17,513
Entry not found
ghadeermobasher/BioRed-Chem-Original-PubMedBERT-384-8
8b374099692d9d87cecff6562556b21a31cf6d81
2022-07-04T11:26:19.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/BioRed-Chem-Original-PubMedBERT-384-8
5
null
transformers
17,514
Entry not found
ghadeermobasher/BioRed-Chem-Modified-PubMedBERT-384-8
46111ed1e6d7df2b71995becf542e02cd0d53af2
2022-07-04T11:29:05.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/BioRed-Chem-Modified-PubMedBERT-384-8
5
null
transformers
17,515
Entry not found
ghadeermobasher/BioRed-Chem-Original-PubMedBERT-384-5
f658cd47364ee34a2ce6e9ed22600d62132dde6f
2022-07-04T11:54:15.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/BioRed-Chem-Original-PubMedBERT-384-5
5
null
transformers
17,516
Entry not found
ghadeermobasher/BioRed-Dis-Original-PubMedBERT-384-5
79610d1fd8a13db1d4c12d305b6c4957e6ba29c2
2022-07-04T11:54:33.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/BioRed-Dis-Original-PubMedBERT-384-5
5
null
transformers
17,517
Entry not found
ghadeermobasher/BioRed-Dis-Modified-PubMedBERT-384-5
ea3dd0f14b4b86d80ac1927475bb2de03816a3ff
2022-07-04T11:55:17.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/BioRed-Dis-Modified-PubMedBERT-384-5
5
null
transformers
17,518
Entry not found
ghadeermobasher/BioRed-Chem-Modified-PubMedBERT-384-5
136fb6d27098dd59b20eaa37941d7ccf90bb86ef
2022-07-04T11:55:17.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/BioRed-Chem-Modified-PubMedBERT-384-5
5
null
transformers
17,519
Entry not found
kuttersn/dailydialog-distilgpt2
49300501f8c8c10454a85804deed2ff0e8aa6082
2022-07-04T11:38:10.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
kuttersn
null
kuttersn/dailydialog-distilgpt2
5
null
transformers
17,520
Entry not found
ghadeermobasher/BioRed-Dis-Original-PubMedBERT-320-8
854fd433cd51df830450aca1304e46f371fd2464
2022-07-04T13:17:30.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/BioRed-Dis-Original-PubMedBERT-320-8
5
null
transformers
17,521
Entry not found
ghadeermobasher/BioRed-Dis-Modified-PubMedBERT-128-32
a92adbb9bb6be99a0365319bcefd167f440ff765
2022-07-04T13:31:15.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/BioRed-Dis-Modified-PubMedBERT-128-32
5
null
transformers
17,522
Entry not found
ghadeermobasher/BioRed-Chem-Original-PubMedBERT-128-20
cfcb54b1ccfb4af89a1ef45ac140f08c539869b9
2022-07-04T14:46:31.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/BioRed-Chem-Original-PubMedBERT-128-20
5
null
transformers
17,523
Entry not found
ghadeermobasher/BioRed-Chem-Modified-PubMedBERT-128-20
1839355dcc5737893963efcbef4717d0c6bb1626
2022-07-04T14:34:24.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/BioRed-Chem-Modified-PubMedBERT-128-20
5
null
transformers
17,524
Entry not found
ghadeermobasher/BioRed-Chem-Original-PubMedBERT-128-5
39fb542b513712578ff6e8c3fc5576788ca28659
2022-07-04T14:35:17.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/BioRed-Chem-Original-PubMedBERT-128-5
5
null
transformers
17,525
Entry not found
ghadeermobasher/BioRed-Chem-Modified-PubMedBERT-128-5
26627c4c458bd5d76ba87fcb2431d2da01b971e0
2022-07-04T14:35:34.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/BioRed-Chem-Modified-PubMedBERT-128-5
5
null
transformers
17,526
Entry not found
ghadeermobasher/BioRed-Dis-Modified-PubMedBERT-320-8-10
d75a027b907ccdee29c6d9fcf1b25058b8d7f9bf
2022-07-04T16:52:07.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/BioRed-Dis-Modified-PubMedBERT-320-8-10
5
null
transformers
17,527
Entry not found
ghadeermobasher/BioRed-Dis-Modified-PubMedBERT-384-8-10
2528094af2e7c7b1927d8d90e53d2c2528fafb0b
2022-07-04T17:06:12.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/BioRed-Dis-Modified-PubMedBERT-384-8-10
5
null
transformers
17,528
Entry not found
ghadeermobasher/BioRed-Chem-Modified-PubMedBERT-320-8-10
962ab2f5e095c87e8801a510dd94920fd6d3c664
2022-07-04T16:54:58.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/BioRed-Chem-Modified-PubMedBERT-320-8-10
5
null
transformers
17,529
Entry not found
romainlhardy/distilbart-cnn-12-6-booksum
a3c4f2eb62c93785dbe1b307e176cfb3989b11ae
2022-07-05T01:12:59.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
romainlhardy
null
romainlhardy/distilbart-cnn-12-6-booksum
5
null
transformers
17,530
Entry not found
Samlit/rare-puppers2
b4d284b77c99f14accb9179da0ad411b070fea78
2022-07-05T06:14:13.000Z
[ "pytorch", "tensorboard", "vit", "image-classification", "transformers", "huggingpics", "model-index" ]
image-classification
false
Samlit
null
Samlit/rare-puppers2
5
null
transformers
17,531
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: rare-puppers2 results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.6222222447395325 --- # rare-puppers2 Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### La Goulue Toulouse-Lautrec ![La Goulue Toulouse-Lautrec](images/La_Goulue_Toulouse-Lautrec.jpg) #### Marcelle Lender Bolero ![Marcelle Lender Bolero](images/Marcelle_Lender_Bolero.jpg) #### aristide bruant Lautrec ![aristide bruant Lautrec](images/aristide_bruant_Lautrec.jpg) #### la goulue Toulouse-Lautrec ![la goulue Toulouse-Lautrec](images/la_goulue_Toulouse-Lautrec.jpg)
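The HuggingPics card above shows example images but no inference code. A minimal sketch with the image-classification pipeline follows; "poster.jpg" is a placeholder path.

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Samlit/rare-puppers2")
# Accepts a local path, URL, or PIL image; "poster.jpg" is a placeholder.
for pred in classifier("poster.jpg"):
    print(pred["label"], round(pred["score"], 3))
```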
slabschonoren/bert-encoding-finetuned-try1
c5a65ef523f0f7f8446666db5dd99931d19710ab
2022-07-05T12:49:32.000Z
[ "pytorch", "bert", "transformers" ]
null
false
slabschonoren
null
slabschonoren/bert-encoding-finetuned-try1
5
null
transformers
17,532
Entry not found
Aktsvigun/bart-base_xsum_4837
b705fb397d95571e12c0691d22d231d2a2ae1ecf
2022-07-07T14:36:06.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
Aktsvigun
null
Aktsvigun/bart-base_xsum_4837
5
null
transformers
17,533
Entry not found
Eleven/xlm-roberta-base-finetuned-panx-all
0f8a61a27045a28522b66a8ba8ce9996a0773686
2022-07-05T17:33:02.000Z
[ "pytorch", "xlm-roberta", "token-classification", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
Eleven
null
Eleven/xlm-roberta-base-finetuned-panx-all
5
null
transformers
17,534
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-all results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-all This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1752 - F1: 0.8557 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.3 | 1.0 | 835 | 0.1862 | 0.8114 | | 0.1552 | 2.0 | 1670 | 0.1758 | 0.8426 | | 0.1002 | 3.0 | 2505 | 0.1752 | 0.8557 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
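A usage sketch for the NER checkpoint above via the token-classification pipeline; the German sentence is an invented example (the PAN-X "all" split covers multiple languages).

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Eleven/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",  # merge subword pieces into entity spans
)
print(ner("Angela Merkel besuchte im Juli eine Fabrik in Lyon."))
```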
AnonymousSub/fpdm_roberta_pert_sent_0.01_squad2.0
e15b33c45304772ee997ce69e1c32394d2187792
2022-07-06T01:08:35.000Z
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
false
AnonymousSub
null
AnonymousSub/fpdm_roberta_pert_sent_0.01_squad2.0
5
null
transformers
17,535
Entry not found
nawta/wav2vec2-wtimit-finetune
6022b639fca77e5dcf4a1846232cd4435b98481b
2022-07-06T16:07:23.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
nawta
null
nawta/wav2vec2-wtimit-finetune
5
null
transformers
17,536
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-wtimit-finetune results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-wtimit-finetune This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0383 - Wer: 0.0160 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 64 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 5.3743 | 2.82 | 500 | 2.9567 | 1.0 | | 1.866 | 5.65 | 1000 | 0.2856 | 0.2580 | | 0.2005 | 8.47 | 1500 | 0.0979 | 0.0669 | | 0.08 | 11.3 | 2000 | 0.0617 | 0.0325 | | 0.0497 | 14.12 | 2500 | 0.0578 | 0.0284 | | 0.0348 | 16.95 | 3000 | 0.0557 | 0.0239 | | 0.0269 | 19.77 | 3500 | 0.0447 | 0.0212 | | 0.0198 | 22.6 | 4000 | 0.0437 | 0.0177 | | 0.016 | 25.42 | 4500 | 0.0407 | 0.0164 | | 0.014 | 28.25 | 5000 | 0.0383 | 0.0160 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.1+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
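The card above gives the WER but no inference example. A minimal ASR sketch follows; the audio path is a placeholder, and the input is assumed to be 16 kHz mono, as is usual for wav2vec 2.0 checkpoints.

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="nawta/wav2vec2-wtimit-finetune")
# "sample.wav" is a placeholder; wav2vec 2.0 models expect 16 kHz mono audio.
print(asr("sample.wav")["text"])
```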
lepowl01/dummy-model
3cfc3de60e760a66edd90b927b8abe346e3affa7
2022-07-06T13:31:09.000Z
[ "pytorch", "camembert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
lepowl01
null
lepowl01/dummy-model
5
null
transformers
17,537
Entry not found
Evelyn18/distilbert-base-uncased-becasv2-3
329f1cdaa661c5f75acb0270e8be3bd88630bf6a
2022-07-07T04:00:45.000Z
[ "pytorch", "tensorboard", "distilbert", "question-answering", "dataset:becasv2", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
question-answering
false
Evelyn18
null
Evelyn18/distilbert-base-uncased-becasv2-3
5
null
transformers
17,538
--- license: apache-2.0 tags: - generated_from_trainer datasets: - becasv2 model-index: - name: distilbert-base-uncased-becasv2-3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-becasv2-3 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the becasv2 dataset. It achieves the following results on the evaluation set: - Loss: 3.1218 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 9 | 4.6377 | | No log | 2.0 | 18 | 3.8511 | | No log | 3.0 | 27 | 3.3758 | | No log | 4.0 | 36 | 3.1910 | | No log | 5.0 | 45 | 3.1187 | | No log | 6.0 | 54 | 3.1009 | | No log | 7.0 | 63 | 3.1131 | | No log | 8.0 | 72 | 3.1218 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
Evelyn18/distilbert-base-uncased-becasv2-4
f089fe1e99ca89f7782340c16eb4d45574ee50c9
2022-07-07T04:16:06.000Z
[ "pytorch", "tensorboard", "distilbert", "question-answering", "dataset:becasv2", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
question-answering
false
Evelyn18
null
Evelyn18/distilbert-base-uncased-becasv2-4
5
null
transformers
17,539
--- license: apache-2.0 tags: - generated_from_trainer datasets: - becasv2 model-index: - name: distilbert-base-uncased-becasv2-4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-becasv2-4 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the becasv2 dataset. It achieves the following results on the evaluation set: - Loss: 3.4637 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 6 | 5.3677 | | No log | 2.0 | 12 | 4.6741 | | No log | 3.0 | 18 | 4.2978 | | No log | 4.0 | 24 | 3.9963 | | No log | 5.0 | 30 | 3.7544 | | No log | 6.0 | 36 | 3.5810 | | No log | 7.0 | 42 | 3.4932 | | No log | 8.0 | 48 | 3.4637 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
Evelyn18/distilbert-base-uncased-becasv2-6
d39a721ab67453b8c6bf1f229534d5fec1fce4aa
2022-07-07T04:44:16.000Z
[ "pytorch", "tensorboard", "distilbert", "question-answering", "dataset:becasv2", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
question-answering
false
Evelyn18
null
Evelyn18/distilbert-base-uncased-becasv2-6
5
null
transformers
17,540
--- license: apache-2.0 tags: - generated_from_trainer datasets: - becasv2 model-index: - name: distilbert-base-uncased-becasv2-6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-becasv2-6 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the becasv2 dataset. It achieves the following results on the evaluation set: - Loss: 3.8936 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 9 | 4.0542 | | No log | 2.0 | 18 | 3.0865 | | No log | 3.0 | 27 | 2.8069 | | No log | 4.0 | 36 | 3.3330 | | No log | 5.0 | 45 | 3.4108 | | No log | 6.0 | 54 | 3.5562 | | No log | 7.0 | 63 | 3.8846 | | No log | 8.0 | 72 | 3.8936 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
ScarlettSun9/autotrain-ZuoZhuan-1100540141
d963448817fd8ae4baa5bc14d3b1f2e05e283312
2022-07-07T07:08:04.000Z
[ "pytorch", "roberta", "token-classification", "unk", "dataset:ScarlettSun9/autotrain-data-ZuoZhuan", "transformers", "autotrain", "co2_eq_emissions", "autotrain_compatible" ]
token-classification
false
ScarlettSun9
null
ScarlettSun9/autotrain-ZuoZhuan-1100540141
5
null
transformers
17,541
--- tags: autotrain language: unk widget: - text: "I love AutoTrain 🤗" datasets: - ScarlettSun9/autotrain-data-ZuoZhuan co2_eq_emissions: 8.343592303925112 --- # Model Trained Using AutoTrain - Problem type: Entity Extraction - Model ID: 1100540141 - CO2 Emissions (in grams): 8.343592303925112 ## Validation Metrics - Loss: 0.38094884157180786 - Accuracy: 0.8795777325860159 - Precision: 0.8171375141922127 - Recall: 0.8417033571821684 - F1: 0.8292385373953709 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/ScarlettSun9/autotrain-ZuoZhuan-1100540141 ``` Or Python API: ``` from transformers import AutoModelForTokenClassification, AutoTokenizer model = AutoModelForTokenClassification.from_pretrained("ScarlettSun9/autotrain-ZuoZhuan-1100540141", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("ScarlettSun9/autotrain-ZuoZhuan-1100540141", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
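The card's Python snippet stops at the raw model outputs. A hedged continuation that maps logits to label names is sketched below; it reuses `inputs`, `outputs`, `model`, and `tokenizer` from the card's snippet and assumes the standard token-classification head with an `id2label` mapping.

```python
# Continues the card's Python snippet (`inputs`, `outputs`, `model`,
# `tokenizer` as defined there); assumes the usual id2label mapping.
label_ids = outputs.logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
labels = [model.config.id2label[int(i)] for i in label_ids]
print(list(zip(tokens, labels)))  # per-token entity tags
```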
tanapatentlm/patentdeberta_large_spec_128_pwi
83d060b1258d6e2ffc696ed0d48b5c3c66c99651
2022-07-13T22:13:56.000Z
[ "pytorch", "tensorboard", "deberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
tanapatentlm
null
tanapatentlm/patentdeberta_large_spec_128_pwi
5
null
transformers
17,542
Entry not found
huggingtweets/mcconaughey
cbc3263f2edb6bc22194784941ddb827a36cb0f0
2022-07-07T19:10:58.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/mcconaughey
5
null
transformers
17,543
--- language: en thumbnail: http://www.huggingtweets.com/mcconaughey/1657221054082/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1191381171164237824/jdS95Rtm_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Matthew McConaughey</div> <div style="text-align: center; font-size: 14px;">@mcconaughey</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Matthew McConaughey. | Data | Matthew McConaughey | | --- | --- | | Tweets downloaded | 2519 | | Retweets | 595 | | Short tweets | 264 | | Tweets kept | 1660 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3cksy9wk/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mcconaughey's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3hgi91kg) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3hgi91kg/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/mcconaughey') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
sdotmac/SimeBot
5615f0c350dc318a49242d68042f00b49a9c60e6
2022-07-08T05:38:42.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "license:osl-3.0" ]
text-generation
false
sdotmac
null
sdotmac/SimeBot
5
null
transformers
17,544
--- license: osl-3.0 ---
swtx/ernie-gram-chinese
cd16040bb41feee1999da8c5302ea38934cc0589
2022-07-08T09:44:33.000Z
[ "pytorch", "bert", "feature-extraction", "chinese", "arxiv:2010.12148", "transformers" ]
feature-extraction
false
swtx
null
swtx/ernie-gram-chinese
5
null
transformers
17,545
--- language: chinese --- # ERNIE-Gram-chinese ## Introduction ERNIE-Gram: Pre-Training with Explicitly N-Gram Masked Language Modeling for Natural Language Understanding More detail: https://arxiv.org/abs/2010.12148 ## Released Model Info |Model Name|Language|Model Structure| |:---:|:---:|:---:| |ernie-gram-chinese| Chinese |Layer:12, Hidden:768, Heads:12| This released Pytorch model is converted from the officially released PaddlePaddle ERNIE model and a series of experiments have been conducted to check the accuracy of the conversion. - Official PaddlePaddle ERNIE repo: https://github.com/PaddlePaddle/ERNIE - Pytorch Conversion repo: https://github.com/nghuyong/ERNIE-Pytorch ## How to use ```Python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("swtx/ernie-gram-chinese") model = AutoModel.from_pretrained("swtx/ernie-gram-chinese") ```
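The card's snippet loads the tokenizer and model but stops before producing features. A short hedged continuation showing feature extraction follows, assuming the standard BERT-style output object; the Chinese sentence is an arbitrary example.

```python
import torch

# Continues the card's snippet: `tokenizer` and `model` as loaded above.
inputs = tokenizer("百度是一家高科技公司", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # shape: (1, seq_len, 768)
print(hidden.shape)
```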
jonatasgrosman/exp_w2v2t_th_wav2vec2_s664
d4c920202f4fcdc7ceb4e3fc4a6ffc1d874c2ac8
2022-07-08T10:06:53.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "th", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_th_wav2vec2_s664
5
null
transformers
17,546
--- language: - th license: apache-2.0 tags: - automatic-speech-recognition - th datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_th_wav2vec2_s664 Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition on Thai using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
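The card states the model was fine-tuned with the HuggingSound tool but shows no code. A minimal transcription sketch with that library follows; the audio paths are placeholders, and inputs must be sampled at 16 kHz as the card requires.

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_th_wav2vec2_s664")
# Paths are placeholders; inputs must be 16 kHz audio per the card.
transcriptions = model.transcribe(["audio_1.wav", "audio_2.wav"])
print(transcriptions[0]["transcription"])
```

The same pattern should apply to the other `exp_w2v2t_*` HuggingSound checkpoints listed below, swapping in the respective model ID.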
jonatasgrosman/exp_w2v2t_th_unispeech-sat_s772
74a2ebf6f0d64a735d6be2947fe2b3cd83d8535e
2022-07-08T15:04:41.000Z
[ "pytorch", "unispeech-sat", "automatic-speech-recognition", "th", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_th_unispeech-sat_s772
5
null
transformers
17,547
--- language: - th license: apache-2.0 tags: - automatic-speech-recognition - th datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_th_unispeech-sat_s772 Fine-tuned [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) for speech recognition using the train split of [Common Voice 7.0 (th)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
sl82/swin-tiny-patch4-window7-224-finetuned-eurosat
38e0cafd26c34d8f8c6b67a7cb60c76f34917a69
2022-07-09T03:36:40.000Z
[ "pytorch", "tensorboard", "swin", "image-classification", "dataset:imagefolder", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
image-classification
false
sl82
null
sl82/swin-tiny-patch4-window7-224-finetuned-eurosat
5
null
transformers
17,548
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: swin-tiny-patch4-window7-224-finetuned-eurosat results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder args: default metrics: - name: Accuracy type: accuracy value: 0.9837037037037037 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0581 - Accuracy: 0.9837 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2666 | 1.0 | 190 | 0.1364 | 0.9541 | | 0.1735 | 2.0 | 380 | 0.0970 | 0.9663 | | 0.126 | 3.0 | 570 | 0.0581 | 0.9837 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
Aktsvigun/bart-base_xsum_6585777
8d9c2ae8a034ffba451825366861a767056c3d0e
2022-07-10T10:21:52.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
Aktsvigun
null
Aktsvigun/bart-base_xsum_6585777
5
null
transformers
17,549
Entry not found
jonatasgrosman/exp_w2v2t_it_wavlm_s662
1c525a018f3d06fffa97298c6a1adfe85c4290ff
2022-07-08T20:06:11.000Z
[ "pytorch", "wavlm", "automatic-speech-recognition", "it", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_it_wavlm_s662
5
null
transformers
17,550
--- language: - it license: apache-2.0 tags: - automatic-speech-recognition - it datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_it_wavlm_s662 Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition using the train split of [Common Voice 7.0 (it)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_fr_wav2vec2_s227
6b3071c2d439da665487cee0de4f200e39fe4eea
2022-07-08T22:58:37.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "fr", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_fr_wav2vec2_s227
5
null
transformers
17,551
--- language: - fr license: apache-2.0 tags: - automatic-speech-recognition - fr datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_fr_wav2vec2_s227 Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_fr_wav2vec2_s809
0823307e7a2e227e6ae821fc1c63a1ab80146617
2022-07-08T23:04:08.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "fr", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_fr_wav2vec2_s809
5
null
transformers
17,552
--- language: - fr license: apache-2.0 tags: - automatic-speech-recognition - fr datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_fr_wav2vec2_s809 Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_fr_wav2vec2_s870
f1d67dab853edc1b94c6f56479e8b74a831fe010
2022-07-08T23:07:27.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "fr", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_fr_wav2vec2_s870
5
null
transformers
17,553
--- language: - fr license: apache-2.0 tags: - automatic-speech-recognition - fr datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_fr_wav2vec2_s870 Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_fr_vp-100k_s688
10465c71251c7c909dc27c378dc8426b45581dea
2022-07-08T23:12:06.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "fr", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_fr_vp-100k_s688
5
null
transformers
17,554
--- language: - fr license: apache-2.0 tags: - automatic-speech-recognition - fr datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_fr_vp-100k_s688 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_fr_vp-100k_s509
83cc60f0f3e2a239ec702335e5dc7e3251718f50
2022-07-08T23:17:07.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "fr", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_fr_vp-100k_s509
5
null
transformers
17,555
--- language: - fr license: apache-2.0 tags: - automatic-speech-recognition - fr datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_fr_vp-100k_s509 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_fr_vp-100k_s973
3f808e7330156f96d462488d3808abf479f5e6a8
2022-07-08T23:21:17.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "fr", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_fr_vp-100k_s973
5
null
transformers
17,556
--- language: - fr license: apache-2.0 tags: - automatic-speech-recognition - fr datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_fr_vp-100k_s973 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_fr_xlsr-53_s286
081e0585eb1d04cf3d5d9dd10052aaa99ae45f91
2022-07-08T23:25:06.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "fr", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_fr_xlsr-53_s286
5
null
transformers
17,557
--- language: - fr license: apache-2.0 tags: - automatic-speech-recognition - fr datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_fr_xlsr-53_s286 Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_fr_xlsr-53_s800
5c802a890571d383a19d3e1bd4a3f0d9850ad6bd
2022-07-08T23:28:33.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "fr", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_fr_xlsr-53_s800
5
null
transformers
17,558
--- language: - fr license: apache-2.0 tags: - automatic-speech-recognition - fr datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_fr_xlsr-53_s800 Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_fr_xlsr-53_s539
92147e9e845822ef1f664e521a0c6ee3096e4594
2022-07-08T23:32:25.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "fr", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_fr_xlsr-53_s539
5
null
transformers
17,559
--- language: - fr license: apache-2.0 tags: - automatic-speech-recognition - fr datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_fr_xlsr-53_s539 Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_fr_vp-sv_s875
a37b6c45536ecf1920d53e80fde8309faa32e5c8
2022-07-09T00:01:55.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "fr", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_fr_vp-sv_s875
5
null
transformers
17,560
--- language: - fr license: apache-2.0 tags: - automatic-speech-recognition - fr datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_fr_vp-sv_s875 Fine-tuned [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_fr_vp-sv_s596
814d060f8a89a5e8c127fcaabf2df136eacc9bd7
2022-07-09T00:05:16.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "fr", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_fr_vp-sv_s596
5
null
transformers
17,561
--- language: - fr license: apache-2.0 tags: - automatic-speech-recognition - fr datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_fr_vp-sv_s596 Fine-tuned [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_fr_vp-sv_s877
58dbbffad038ceb9b32bf12d7dce4092a190e9ee
2022-07-09T00:08:48.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "fr", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_fr_vp-sv_s877
5
null
transformers
17,562
--- language: - fr license: apache-2.0 tags: - automatic-speech-recognition - fr datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_fr_vp-sv_s877 Fine-tuned [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_fr_no-pretraining_s766
3fc36e65ba875b669cd00227736dcac110628291
2022-07-09T00:12:36.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "fr", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_fr_no-pretraining_s766
5
null
transformers
17,563
---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_fr_no-pretraining_s766

Fine-tuned randomly initialized wav2vec2 model for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_fr_no-pretraining_s929
cb21c029b6ef52d99e9b22690391d1417252a247
2022-07-09T00:17:54.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "fr", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_fr_no-pretraining_s929
5
null
transformers
17,564
---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_fr_no-pretraining_s929

Fine-tuned randomly initialized wav2vec2 model for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_fr_no-pretraining_s208
b4c50730e4746dca49c6e18a99da1a115176a3db
2022-07-09T00:24:47.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "fr", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_fr_no-pretraining_s208
5
null
transformers
17,565
---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_fr_no-pretraining_s208

Fine-tuned randomly initialized wav2vec2 model for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
Aktsvigun/bart-base_xsum_919213
82b76739fc20eb443639dc6e426ca1ad94d37162
2022-07-10T10:19:12.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
Aktsvigun
null
Aktsvigun/bart-base_xsum_919213
5
null
transformers
17,566
Entry not found
Aktsvigun/bart-base_xsum_5537116
2d71515663db5732ff4f680e4501410d316846bc
2022-07-10T10:16:19.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
Aktsvigun
null
Aktsvigun/bart-base_xsum_5537116
5
null
transformers
17,567
Entry not found
dingusagar/vit-base-movie-scenes-v1
28c56d8a2f6bad19e6b534fbd97268ccfc0b3f69
2022-07-09T14:34:10.000Z
[ "pytorch", "tensorboard", "vit", "image-classification", "dataset:imagefolder", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
image-classification
false
dingusagar
null
dingusagar/vit-base-movie-scenes-v1
5
null
transformers
17,568
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: vit-base-movie-scenes-v1
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-movie-scenes-v1

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
Fine-tuned on movie scene images from Batman and Harry Potter.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
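A minimal inference sketch (added for illustration, not part of the original card; the image path is a placeholder and the label set is whatever the imagefolder dataset defined):

```python
from transformers import pipeline

# Load the fine-tuned ViT checkpoint for image classification
classifier = pipeline("image-classification", model="dingusagar/vit-base-movie-scenes-v1")

# Placeholder path; labels come from the (undocumented) imagefolder class names
print(classifier("movie_scene.jpg"))
```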
huggingtweets/bro_b619
05d2f9003ff3958a96b3958b5aa464683d871c44
2022-07-09T15:47:23.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/bro_b619
5
null
transformers
17,569
---
language: en
thumbnail: http://www.huggingtweets.com/bro_b619/1657381637888/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
    <div class="flex">
        <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1475310547805425664/2vnSS9WL_400x400.jpg&#39;)">
        </div>
        <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
        </div>
        <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
        </div>
    </div>
    <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
    <div style="text-align: center; font-size: 16px; font-weight: 800">Brutha B 🧀🌐</div>
    <div style="text-align: center; font-size: 14px;">@bro_b619</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on tweets from Brutha B 🧀🌐.

| Data | Brutha B 🧀🌐 |
| --- | --- |
| Tweets downloaded | 1922 |
| Retweets | 302 |
| Short tweets | 345 |
| Tweets kept | 1275 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2lb73vwt/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bro_b619's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/xm49vj8a) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/xm49vj8a/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/bro_b619')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huangjia/xlm-roberta-base-finetuned-panx-en
750cfa06e750c7c41ab189bc425baed45e616137
2022-07-09T16:12:45.000Z
[ "pytorch", "xlm-roberta", "token-classification", "dataset:xtreme", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
huangjia
null
huangjia/xlm-roberta-base-finetuned-panx-en
5
null
transformers
17,570
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: xtreme
      type: xtreme
      args: PAN-X.en
    metrics:
    - name: F1
      type: f1
      value: 0.618063112078346
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-base-finetuned-panx-en

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4603
- F1: 0.6181

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log        | 1.0   | 25   | 0.8577          | 0.3917 |
| 1.0821        | 2.0   | 50   | 0.5391          | 0.5466 |
| 1.0821        | 3.0   | 75   | 0.4603          | 0.6181 |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.10.2
- Datasets 1.18.4
- Tokenizers 0.10.3
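A minimal inference sketch (added for illustration, not part of the original card; the example sentence is an assumption). The same pattern applies to the other fine-tuned token-classification checkpoints in this section:

```python
from transformers import pipeline

# PAN-X.en is an English NER dataset, so English input is assumed here
ner = pipeline(
    "token-classification",
    model="huangjia/xlm-roberta-base-finetuned-panx-en",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Ada Lovelace was born in London."))
```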
jonatasgrosman/exp_w2v2t_fa_unispeech_s364
50009b1bf2be05e0031f937524b24c55067e4500
2022-07-09T20:26:09.000Z
[ "pytorch", "unispeech", "automatic-speech-recognition", "fa", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_fa_unispeech_s364
5
null
transformers
17,571
---
language:
- fa
license: apache-2.0
tags:
- automatic-speech-recognition
- fa
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_fa_unispeech_s364

Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (fa)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_fa_hubert_s801
33148b3a6ea02f29f2ed0a5350ca96837735b30b
2022-07-09T20:29:40.000Z
[ "pytorch", "hubert", "automatic-speech-recognition", "fa", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_fa_hubert_s801
5
null
transformers
17,572
---
language:
- fa
license: apache-2.0
tags:
- automatic-speech-recognition
- fa
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_fa_hubert_s801

Fine-tuned [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) for speech recognition using the train split of [Common Voice 7.0 (fa)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_fa_vp-it_s18
ec5c69992c5a150d46c6aae1edb8c0e959091e3d
2022-07-09T23:59:58.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "fa", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_fa_vp-it_s18
5
null
transformers
17,573
---
language:
- fa
license: apache-2.0
tags:
- automatic-speech-recognition
- fa
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_fa_vp-it_s18

Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (fa)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
Cleyden/roberta-base-prop-16-train-set
6e2bb4e6a6e5e955eceea76a05d8f469139833bf
2022-07-10T03:20:39.000Z
[ "pytorch", "tensorboard", "roberta", "text-classification", "transformers", "generated_from_trainer", "license:mit", "model-index" ]
text-classification
false
Cleyden
null
Cleyden/roberta-base-prop-16-train-set
5
null
transformers
17,574
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: roberta-base-prop-16-train-set
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# roberta-base-prop-16-train-set

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

### Framework versions

- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
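Since the card does not document the label set or training data, only a hypothetical loading sketch can be given (the example sentence and the meaning of the returned labels are assumptions):

```python
from transformers import pipeline

# Label names/meanings are not documented on this card; treat the output labels as opaque
classifier = pipeline("text-classification", model="Cleyden/roberta-base-prop-16-train-set")
print(classifier("An example sentence to classify."))
```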
jonatasgrosman/exp_w2v2t_uk_vp-es_s692
f6ca6674d776812f241349fb69978a0d3857d1c2
2022-07-10T14:38:57.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "uk", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_uk_vp-es_s692
5
null
transformers
17,575
---
language:
- uk
license: apache-2.0
tags:
- automatic-speech-recognition
- uk
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_uk_vp-es_s692

Fine-tuned [facebook/wav2vec2-large-es-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-es-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (uk)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
nestoralvaro/distilbert-base-uncased-finetuned-ner
0c572a2485e67945381c34c0e436ebfbc5d7690a
2022-07-10T21:28:55.000Z
[ "pytorch", "tensorboard", "distilbert", "token-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
token-classification
false
nestoralvaro
null
nestoralvaro/distilbert-base-uncased-finetuned-ner
5
null
transformers
17,576
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-ner

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4253
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.9226

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1  | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log        | 1.0   | 15   | 0.4677          | 0.0       | 0.0    | 0.0 | 0.9226   |
| No log        | 2.0   | 30   | 0.4303          | 0.0       | 0.0    | 0.0 | 0.9226   |
| No log        | 3.0   | 45   | 0.4253          | 0.0       | 0.0    | 0.0 | 0.9226   |

### Framework versions

- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
jonatasgrosman/exp_w2v2t_pl_unispeech_s622
e01783bc2efccdf21bf0ad36515e8cf6ec23d03a
2022-07-10T18:50:03.000Z
[ "pytorch", "unispeech", "automatic-speech-recognition", "pl", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_pl_unispeech_s622
5
null
transformers
17,577
---
language:
- pl
license: apache-2.0
tags:
- automatic-speech-recognition
- pl
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_pl_unispeech_s622

Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (pl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
tner/bert-large-tweetner-2020-2021-continuous
a9d36a5143df0f800a8ba0744b2081c86b57e3e8
2022-07-12T09:28:50.000Z
[ "pytorch", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
tner
null
tner/bert-large-tweetner-2020-2021-continuous
5
null
transformers
17,578
Entry not found
jonatasgrosman/exp_w2v2t_es_unispeech_s767
82fc37726d7d5c9fa905f324570ac647ce07d2f1
2022-07-11T10:46:34.000Z
[ "pytorch", "unispeech", "automatic-speech-recognition", "es", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_es_unispeech_s767
5
null
transformers
17,579
---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_es_unispeech_s767

Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
tner/twitter-roberta-base-2019-90m-tweetner-random
7a5278e5b0f102bd94e85562b3c438b4e448b7d8
2022-07-11T11:21:09.000Z
[ "pytorch", "roberta", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
tner
null
tner/twitter-roberta-base-2019-90m-tweetner-random
5
null
transformers
17,580
Entry not found
tner/bertweet-base-tweetner-random
3cc6ff7d4c8815b3eadc397caea571989ea1fe66
2022-07-11T16:05:38.000Z
[ "pytorch", "roberta", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
tner
null
tner/bertweet-base-tweetner-random
5
null
transformers
17,581
Entry not found
ManqingLiu/pegasus-samsum
3068c04c734baa02d57d76874811e9b5e4667e2b
2022-07-11T22:33:51.000Z
[ "pytorch", "tensorboard", "pegasus", "text2text-generation", "dataset:samsum", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
text2text-generation
false
ManqingLiu
null
ManqingLiu/pegasus-samsum
5
null
transformers
17,582
---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# pegasus-samsum

This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4858

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7236        | 0.54  | 500  | 1.4858          |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.10.3
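A minimal summarization sketch (illustrative only, not from the original card; the dialogue below is a made-up placeholder in the SAMSum chat style):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="ManqingLiu/pegasus-samsum")

# SAMSum-style chat dialogue (placeholder)
dialogue = "Anna: Are we still on for lunch?\nBen: Yes, 12:30 at the usual place.\nAnna: Great, see you there!"
print(summarizer(dialogue)[0]["summary_text"])
```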
Evelyn18/legalectra-small-spanish-becasv3-1
8d1e390c0cd0deff6b641034d4337e352bbbadad
2022-07-12T03:54:49.000Z
[ "pytorch", "tensorboard", "electra", "question-answering", "dataset:becasv2", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
question-answering
false
Evelyn18
null
Evelyn18/legalectra-small-spanish-becasv3-1
5
null
transformers
17,583
---
tags:
- generated_from_trainer
datasets:
- becasv2
model-index:
- name: legalectra-small-spanish-becasv3-1
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# legalectra-small-spanish-becasv3-1

This model is a fine-tuned version of [mrm8488/legalectra-small-spanish](https://huggingface.co/mrm8488/legalectra-small-spanish) on the becasv2 dataset.
It achieves the following results on the evaluation set:
- Loss: 5.5694

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 8    | 5.8980          |
| No log        | 2.0   | 16   | 5.8136          |
| No log        | 3.0   | 24   | 5.7452          |
| No log        | 4.0   | 32   | 5.6940          |
| No log        | 5.0   | 40   | 5.6554          |
| No log        | 6.0   | 48   | 5.6241          |
| No log        | 7.0   | 56   | 5.5997          |
| No log        | 8.0   | 64   | 5.5830          |
| No log        | 9.0   | 72   | 5.5730          |
| No log        | 10.0  | 80   | 5.5694          |

### Framework versions

- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
ghadeermobasher/Modified-BiomedNLP-PubMedBERT-base-uncased-abstract-BioRED-Dis-512-5-30
e8041f58b37dc2b81490aad0dfca1ed5ec46d862
2022-07-12T11:29:53.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/Modified-BiomedNLP-PubMedBERT-base-uncased-abstract-BioRED-Dis-512-5-30
5
null
transformers
17,584
Entry not found
andreaschandra/xlm-roberta-base-finetuned-panx-de
c45a0bdca885932d5d37fb1fd3d7a5125706a668
2022-07-12T13:52:44.000Z
[ "pytorch", "xlm-roberta", "token-classification", "dataset:xtreme", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
andreaschandra
null
andreaschandra/xlm-roberta-base-finetuned-panx-de
5
null
transformers
17,585
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: xtreme
      type: xtreme
      args: PAN-X.de
    metrics:
    - name: F1
      type: f1
      value: 0.8620945214069894
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-base-finetuned-panx-de

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1372
- F1: 0.8621

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2575        | 1.0   | 525  | 0.1621          | 0.8292 |
| 0.1287        | 2.0   | 1050 | 0.1378          | 0.8526 |
| 0.0831        | 3.0   | 1575 | 0.1372          | 0.8621 |

### Framework versions

- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
andreaschandra/xlm-roberta-base-finetuned-panx-it
408f78625ea702492939d5100146042210b5bca2
2022-07-12T15:34:53.000Z
[ "pytorch", "xlm-roberta", "token-classification", "dataset:xtreme", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
andreaschandra
null
andreaschandra/xlm-roberta-base-finetuned-panx-it
5
null
transformers
17,586
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: xtreme
      type: xtreme
      args: PAN-X.it
    metrics:
    - name: F1
      type: f1
      value: 0.8288879770209273
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-base-finetuned-panx-it

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2380
- F1: 0.8289

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7058        | 1.0   | 70   | 0.3183          | 0.7480 |
| 0.2808        | 2.0   | 140  | 0.2647          | 0.8070 |
| 0.1865        | 3.0   | 210  | 0.2380          | 0.8289 |

### Framework versions

- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
ilmariky/bert-base-finnish-cased-squad1-fi
997f36aaa049ea452aa8c87b7873ddd01e059c00
2022-07-12T19:09:57.000Z
[ "pytorch", "bert", "question-answering", "fi", "dataset:SQuAD_v2_fi + Finnish partition of TyDi-QA", "transformers", "license:gpl-3.0", "autotrain_compatible" ]
question-answering
false
ilmariky
null
ilmariky/bert-base-finnish-cased-squad1-fi
5
null
transformers
17,587
---
language: fi
datasets:
- SQuAD_v2_fi + Finnish partition of TyDi-QA
license: gpl-3.0
---

# bert-base-finnish-cased-v1 for QA

This is the [bert-base-finnish-cased-v1](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1) model, fine-tuned using an automatically translated [Finnish version of the SQuAD2.0 dataset](https://huggingface.co/datasets/ilmariky/SQuAD_v2_fi) in combination with the Finnish partition of the [TyDi-QA](https://github.com/google-research-datasets/tydiqa) dataset.
It has been trained on question-answer pairs, **excluding unanswerable questions**, for the task of question answering.

Another QA model that has also been fine-tuned on unanswerable questions is available: [bert-base-finnish-cased-squad2-fi](https://huggingface.co/ilmariky/bert-base-finnish-cased-squad2-fi).

## Overview

**Language model:** bert-base-finnish-cased-v1
**Language:** Finnish
**Downstream-task:** Extractive QA
**Training data:** Answerable questions from [Finnish SQuAD 2.0](https://huggingface.co/datasets/ilmariky/SQuAD_v2_fi) + Finnish partition of TyDi-QA
**Eval data:** Answerable questions from [Finnish SQuAD 2.0](https://huggingface.co/datasets/ilmariky/SQuAD_v2_fi) + Finnish partition of TyDi-QA

## Usage

### In Transformers

```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_name = "ilmariky/bert-base-finnish-cased-squad1-fi"

# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
    'question': 'Mikä tämä on?',
    'context': 'Tämä on testi.'
}
res = nlp(QA_input)

# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```

## Performance

Evaluated with a slightly modified version of the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).

```
{
  "exact": 58.00497718788884,
  "f1": 69.90891092523077,
  "total": 4822,
  "HasAns_exact": 58.00497718788884,
  "HasAns_f1": 69.90891092523077,
  "HasAns_total": 4822
}
```
huggingtweets/majigglydoobers
060a3d7270bffc8fb0b1e24188bab03c9b7eef8e
2022-07-13T02:58:05.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/majigglydoobers
5
null
transformers
17,588
---
language: en
thumbnail: http://www.huggingtweets.com/majigglydoobers/1657681081092/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
    <div class="flex">
        <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1542204712455241729/6E7rxSrt_400x400.jpg&#39;)">
        </div>
        <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
        </div>
        <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
        </div>
    </div>
    <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
    <div style="text-align: center; font-size: 16px; font-weight: 800">doobers 👻❤️‍🩹</div>
    <div style="text-align: center; font-size: 14px;">@majigglydoobers</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on tweets from doobers 👻❤️‍🩹.

| Data | doobers 👻❤️‍🩹 |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 2046 |
| Short tweets | 199 |
| Tweets kept | 1004 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/36h6xok5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @majigglydoobers's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/emkivtny) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/emkivtny/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/majigglydoobers')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
ahadda5/bart_wikikp_kp20k
6a5b2cbf01e6a28fa204307c991cb461bdcf1a01
2022-07-13T12:30:37.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
ahadda5
null
ahadda5/bart_wikikp_kp20k
5
null
transformers
17,589
BART trained on wikikp, then fine-tuned on midas/kp20k.
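A hypothetical usage sketch (not from the original card; the input abstract is a placeholder, and the format of the generated keyphrases depends on how the training targets were serialized, which is undocumented):

```python
from transformers import pipeline

# Standard BART seq2seq interface is assumed here
generator = pipeline("text2text-generation", model="ahadda5/bart_wikikp_kp20k")

abstract = "We study neural models for keyphrase generation from scientific abstracts."  # placeholder
print(generator(abstract, max_length=64))
```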
jordyvl/udpos28-sm-first-POS
5fedbcc0468feae273198688406116251642eb1d
2022-07-13T12:53:00.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "dataset:udpos28", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
token-classification
false
jordyvl
null
jordyvl/udpos28-sm-first-POS
5
null
transformers
17,590
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- udpos28
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: udpos28-sm-first-POS
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: udpos28
      type: udpos28
      args: en
    metrics:
    - name: Precision
      type: precision
      value: 0.9511089206505667
    - name: Recall
      type: recall
      value: 0.9546093116207286
    - name: F1
      type: f1
      value: 0.9528559014062253
    - name: Accuracy
      type: accuracy
      value: 0.9559133601686793
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# udpos28-sm-first-POS

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the udpos28 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1896
- Precision: 0.9511
- Recall: 0.9546
- F1: 0.9529
- Accuracy: 0.9559

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1696        | 1.0   | 4978  | 0.1700          | 0.9440    | 0.9464 | 0.9452 | 0.9472   |
| 0.0973        | 2.0   | 9956  | 0.1705          | 0.9487    | 0.9533 | 0.9510 | 0.9543   |
| 0.0508        | 3.0   | 14934 | 0.1896          | 0.9511    | 0.9546 | 0.9529 | 0.9559   |

### Framework versions

- Transformers 4.18.0
- Pytorch 1.10.2+cu102
- Datasets 2.2.2
- Tokenizers 0.12.1
RJ3vans/ElectraCCVspanTagger
cd5563d42e2c7cee72413a446f6fd12ca47ed8ce
2022-07-13T16:11:08.000Z
[ "pytorch", "electra", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
RJ3vans
null
RJ3vans/ElectraCCVspanTagger
5
null
transformers
17,591
Entry not found
kuttersn/test-clm
0ffb306943a919c706f0e491aeb0fd8e710b42f8
2022-07-15T02:04:32.000Z
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers", "generated_from_trainer", "license:mit", "model-index" ]
text-generation
false
kuttersn
null
kuttersn/test-clm
5
null
transformers
17,592
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: test-clm
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# test-clm

This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5311
- Accuracy: 0.3946

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

### Framework versions

- Transformers 4.21.0.dev0
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
ghadeermobasher/Modifiedbiobert-v1.1-BioRED-CD-128-32-30
7dbee0b8a1f295283a18c92e2657bbd65526aca6
2022-07-13T17:48:37.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/Modifiedbiobert-v1.1-BioRED-CD-128-32-30
5
null
transformers
17,593
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: Modifiedbiobert-v1.1-BioRED-CD-128-32-30
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Modifiedbiobert-v1.1-BioRED-CD-128-32-30

This model is a fine-tuned version of [dmis-lab/biobert-v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Precision: 1.0
- Recall: 1.0
- F1: 1.0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30.0

### Training results

### Framework versions

- Transformers 4.11.3
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.10.3
ghadeermobasher/Modifiedbiobert-v1.1-BioRED-CD-256-16-5
bd37b8ebc9b2b0f809d6b706ff5db398c339a60a
2022-07-13T19:49:11.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/Modifiedbiobert-v1.1-BioRED-CD-256-16-5
5
null
transformers
17,594
Entry not found
Evelyn18/distilbert-base-uncased-prueba2
89428ebb280adac9ac0af0458972ba6d63945449
2022-07-13T21:14:13.000Z
[ "pytorch", "tensorboard", "distilbert", "question-answering", "dataset:becasv2", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
question-answering
false
Evelyn18
null
Evelyn18/distilbert-base-uncased-prueba2
5
null
transformers
17,595
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- becasv2
model-index:
- name: distilbert-base-uncased-prueba2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-prueba2

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the becasv2 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6356

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 9    | 3.9054          |
| No log        | 2.0   | 18   | 3.1893          |
| No log        | 3.0   | 27   | 2.9748          |
| No log        | 4.0   | 36   | 3.1541          |
| No log        | 5.0   | 45   | 3.2887          |
| No log        | 6.0   | 54   | 3.5055          |
| No log        | 7.0   | 63   | 3.5902          |
| No log        | 8.0   | 72   | 3.6356          |

### Framework versions

- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
RJ3vans/DeBERTaSSCCVspanTagger
9d847ea9c5ec97f9366ca5a2e40f0957f24febb3
2022-07-14T15:23:22.000Z
[ "pytorch", "deberta-v2", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
RJ3vans
null
RJ3vans/DeBERTaSSCCVspanTagger
5
null
transformers
17,596
Entry not found
RJ3vans/DeBERTaCCVspanTagger
7a0de436ae510f3894f913575d3dc8fd4ab141eb
2022-07-14T16:31:09.000Z
[ "pytorch", "deberta-v2", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
RJ3vans
null
RJ3vans/DeBERTaCCVspanTagger
5
null
transformers
17,597
Entry not found
Sayan01/tiny-bert-qnli-128-distilled
8488aca135ad04a50958e90b84bdce3868ef2414
2022-07-15T04:08:54.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Sayan01
null
Sayan01/tiny-bert-qnli-128-distilled
5
null
transformers
17,598
Entry not found
CennetOguz/bert-large-uncased-finetuned-youcook_2
7bc559a4132a473efbbda939e2c2c34cf2c5ad20
2022-07-15T00:16:54.000Z
[ "pytorch", "tensorboard", "bert", "fill-mask", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
fill-mask
false
CennetOguz
null
CennetOguz/bert-large-uncased-finetuned-youcook_2
5
null
transformers
17,599
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-large-uncased-finetuned-youcook_2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-large-uncased-finetuned-youcook_2

This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9929

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3915        | 1.0   | 206  | 2.1036          |
| 2.0412        | 2.0   | 412  | 2.2207          |
| 1.9062        | 3.0   | 618  | 1.7281          |

### Framework versions

- Transformers 4.20.1
- Pytorch 1.11.0a0+17540c5
- Datasets 2.3.2
- Tokenizers 0.12.1
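A minimal masked-LM sketch (illustrative, not from the original card; the cooking-style prompt is an assumption based on the YouCook fine-tuning data):

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="CennetOguz/bert-large-uncased-finetuned-youcook_2")

# bert-large-uncased uses the [MASK] token; the prompt is a made-up cooking instruction
for pred in unmasker("Chop the onions and [MASK] them in a pan."):
    print(pred["token_str"], pred["score"])
```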