Dataset columns (type and observed range):

| Column | Type | Range / Values |
|---|---|---|
| `modelId` | string | lengths 4–112 |
| `sha` | string | lengths 40–40 |
| `lastModified` | string | lengths 24–24 |
| `tags` | list | - |
| `pipeline_tag` | string | 29 classes |
| `private` | bool | 1 class |
| `author` | string | lengths 2–38 |
| `config` | null | - |
| `id` | string | lengths 4–112 |
| `downloads` | float64 | 0–36.8M |
| `likes` | float64 | 0–712 |
| `library_name` | string | 17 classes |
| `__index_level_0__` | int64 | 0–38.5k |
| `readme` | string | lengths 0–186k |
taskydata/DeBERTa-v3-128
3402f20625a53e37e841464f36c4d2518c801486
2022-05-30T08:40:05.000Z
[ "pytorch", "deberta-v2", "text-classification", "transformers", "license:apache-2.0" ]
text-classification
false
taskydata
null
taskydata/DeBERTa-v3-128
5
null
transformers
17,200
---
license: apache-2.0
---

**Hyperparameters:**

- learning_rate: 2e-5
- weight_decay: 0.01
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- gradient_accumulation_steps: 1
- eval_steps: 5000
- max_length: 128
- num_epochs: 3

**Dataset version:**

- "craffel/tasky_or_not", "10xp3_10xc4", "15f88c8"

**Checkpoint:**

- 10000 steps

**Results on validation set:**

| Step | Training Loss | Validation Loss | Accuracy | Precision | Recall | F1 |
|-------|---------------|-----------------|----------|-----------|----------|----------|
| 5000 | 0.036400 | 0.266518 | 0.926913 | 0.999662 | 0.916934 | 0.956513 |
| 10000 | 0.022500 | 0.222881 | 0.952443 | 0.999494 | 0.946227 | 0.972132 |
| 15000 | 0.016600 | 0.634102 | 0.882638 | 0.999789 | 0.866301 | 0.928270 |
| 20000 | 0.011300 | 1.138026 | 0.849013 | 0.999796 | 0.827928 | 0.905781 |
| 25000 | 0.010300 | 0.623522 | 0.895619 | 0.999728 | 0.881166 | 0.936710 |
| 30000 | 0.006300 | 0.776632 | 0.879492 | 0.999804 | 0.862697 | 0.926204 |
| 35000 | 0.000500 | 0.704599 | 0.899149 | 0.999698 | 0.885220 | 0.938982 |
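A minimal inference sketch for this checkpoint (not part of the original card; it assumes standard `transformers` AutoModel loading, and the example text is illustrative):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "taskydata/DeBERTa-v3-128"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# max_length=128 matches the max_length used during fine-tuning.
inputs = tokenizer("Summarize the following article in two sentences.",
                   truncation=True, max_length=128, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```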
Nurr/wav2vec2-base-finetuned-ks
8ad4231f56a31c05278959335573972855300dcb
2022-05-13T04:03:38.000Z
[ "pytorch", "tensorboard", "wav2vec2", "audio-classification", "dataset:superb", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
audio-classification
false
Nurr
null
Nurr/wav2vec2-base-finetuned-ks
5
null
transformers
17,201
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- superb
model-index:
- name: wav2vec2-base-finetuned-ks
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-base-finetuned-ks

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5

### Framework versions

- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.14.0
- Tokenizers 0.10.3
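A quick keyword-spotting sketch (not from the original card; it assumes the standard `transformers` audio-classification pipeline, and `sample.wav` is a placeholder 16 kHz mono recording):

```python
from transformers import pipeline

# Classify a short audio clip against the keyword-spotting labels.
classifier = pipeline("audio-classification", model="Nurr/wav2vec2-base-finetuned-ks")
print(classifier("sample.wav", top_k=3))
```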
yogeshchandrasekharuni/bart-paraphrase-finetuned-xsum
28f79c65c93ac520470b1873f3cc278b0924451d
2022-05-13T11:12:28.000Z
[ "pytorch", "tensorboard", "bart", "text2text-generation", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
text2text-generation
false
yogeshchandrasekharuni
null
yogeshchandrasekharuni/bart-paraphrase-finetuned-xsum
5
null
transformers
17,202
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-paraphrase-finetuned-xsum
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bart-paraphrase-finetuned-xsum

This model is a fine-tuned version of [eugenesiow/bart-paraphrase](https://huggingface.co/eugenesiow/bart-paraphrase) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 61 | 1.1215 | 70.9729 | 60.41 | 70.2648 | 70.2724 | 12.2295 |

### Framework versions

- Transformers 4.19.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
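A hedged usage sketch (not from the card; it assumes the standard `transformers` text2text-generation pipeline, and the input sentence is illustrative):

```python
from transformers import pipeline

# Generate a paraphrase with the fine-tuned BART checkpoint.
paraphraser = pipeline("text2text-generation",
                       model="yogeshchandrasekharuni/bart-paraphrase-finetuned-xsum")
print(paraphraser("The quick brown fox jumps over the lazy dog.", max_length=60))
```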
ali-issa/wav2vec2-Arabizi-gpu-colab-similar-to-german-param-more-dataset-more-epochs
b3ebb2d94567579bcaf14cfc61d7e4bcbd92e7cb
2022-05-13T19:50:44.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
ali-issa
null
ali-issa/wav2vec2-Arabizi-gpu-colab-similar-to-german-param-more-dataset-more-epochs
5
null
transformers
17,203
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-Arabizi-gpu-colab-similar-to-german-param-more-dataset-more-epochs
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-Arabizi-gpu-colab-similar-to-german-param-more-dataset-more-epochs

This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7230
- Wer: 0.4010

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 6
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.7329 | 2.41 | 400 | 2.9049 | 1.0 |
| 1.5315 | 4.82 | 800 | 0.6905 | 0.6211 |
| 0.664 | 7.23 | 1200 | 0.5894 | 0.5038 |
| 0.4908 | 9.64 | 1600 | 0.5510 | 0.4650 |
| 0.4032 | 12.05 | 2000 | 0.5679 | 0.4435 |
| 0.3406 | 14.46 | 2400 | 0.5652 | 0.4273 |
| 0.3049 | 16.86 | 2800 | 0.5747 | 0.4252 |
| 0.2682 | 19.28 | 3200 | 0.5942 | 0.4187 |
| 0.2454 | 21.68 | 3600 | 0.5892 | 0.4171 |
| 0.23 | 24.1 | 4000 | 0.6241 | 0.4160 |
| 0.2113 | 26.5 | 4400 | 0.6336 | 0.4150 |
| 0.1988 | 28.91 | 4800 | 0.6689 | 0.4117 |
| 0.1816 | 31.32 | 5200 | 0.6750 | 0.4117 |
| 0.1779 | 33.73 | 5600 | 0.6783 | 0.3983 |
| 0.1682 | 36.14 | 6000 | 0.6797 | 0.3988 |
| 0.1638 | 38.55 | 6400 | 0.7061 | 0.3988 |
| 0.1548 | 40.96 | 6800 | 0.7083 | 0.3961 |
| 0.152 | 43.37 | 7200 | 0.7151 | 0.4069 |
| 0.1509 | 45.78 | 7600 | 0.7083 | 0.4058 |
| 0.1414 | 48.19 | 8000 | 0.7230 | 0.4010 |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
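A transcription sketch (not from the card; it assumes the standard `transformers` automatic-speech-recognition pipeline, and `utterance.wav` is a placeholder 16 kHz recording):

```python
from transformers import pipeline

# Transcribe an Arabizi utterance with the fine-tuned XLSR checkpoint.
asr = pipeline("automatic-speech-recognition",
               model="ali-issa/wav2vec2-Arabizi-gpu-colab-similar-to-german-param-more-dataset-more-epochs")
print(asr("utterance.wav")["text"])
```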
renjithks/distilbert-expense-ner
4dcaf3a3379b5d795b6f4d615e2c92105df1fe68
2022-05-26T05:50:54.000Z
[ "pytorch", "tensorboard", "distilbert", "token-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
token-classification
false
renjithks
null
renjithks/distilbert-expense-ner
5
null
transformers
17,204
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-expense-ner
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-expense-ner

This model is a fine-tuned version of [renjithks/distilbert-cord-ner](https://huggingface.co/renjithks/distilbert-cord-ner) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2930
- Precision: 0.5096
- Recall: 0.4852
- F1: 0.4971
- Accuracy: 0.9275

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 22 | 0.3635 | 0.2888 | 0.0945 | 0.1424 | 0.8866 |
| No log | 2.0 | 44 | 0.2795 | 0.3213 | 0.3018 | 0.3113 | 0.8982 |
| No log | 3.0 | 66 | 0.2432 | 0.4243 | 0.4034 | 0.4136 | 0.9161 |
| No log | 4.0 | 88 | 0.2446 | 0.4615 | 0.4654 | 0.4635 | 0.9193 |
| No log | 5.0 | 110 | 0.2410 | 0.5143 | 0.4810 | 0.4971 | 0.9293 |
| No log | 6.0 | 132 | 0.2598 | 0.5283 | 0.4612 | 0.4925 | 0.9305 |
| No log | 7.0 | 154 | 0.2963 | 0.5230 | 0.4485 | 0.4829 | 0.9268 |
| No log | 8.0 | 176 | 0.2753 | 0.4928 | 0.4838 | 0.4883 | 0.9283 |
| No log | 9.0 | 198 | 0.2897 | 0.5194 | 0.4725 | 0.4948 | 0.9295 |
| No log | 10.0 | 220 | 0.2930 | 0.5096 | 0.4852 | 0.4971 | 0.9275 |

### Framework versions

- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
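An entity-extraction sketch (not from the card; it assumes the standard `transformers` token-classification pipeline, and the example receipt text is invented):

```python
from transformers import pipeline

# aggregation_strategy="simple" merges word-piece tokens back into entity spans.
ner = pipeline("token-classification",
               model="renjithks/distilbert-expense-ner",
               aggregation_strategy="simple")
print(ner("Dinner at Joe's Diner, $42.50 on 2022-05-26."))
```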
Jeevesh8/6ep_bert_ft_cola-5
d4c3c853e42af6bfc7819bb48fbd94b38f127da9
2022-05-14T11:41:15.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/6ep_bert_ft_cola-5
5
null
transformers
17,205
Entry not found
Jeevesh8/6ep_bert_ft_cola-23
8c830030485f119dd949f94fe71c4ac03dd7ef15
2022-05-14T12:36:33.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/6ep_bert_ft_cola-23
5
null
transformers
17,206
Entry not found
Jeevesh8/6ep_bert_ft_cola-28
4a22c0fcbfcfaedcbdf04b92068d22346281eaf5
2022-05-14T12:44:56.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/6ep_bert_ft_cola-28
5
null
transformers
17,207
Entry not found
Jeevesh8/6ep_bert_ft_cola-52
3d95b516d1de3fa8ccf8a48c259571eb1270bf4c
2022-05-14T13:25:40.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/6ep_bert_ft_cola-52
5
null
transformers
17,208
Entry not found
Jeevesh8/6ep_bert_ft_cola-54
22ee9e994badef956f2d829b632d13a367d8e8ad
2022-05-14T13:29:01.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/6ep_bert_ft_cola-54
5
null
transformers
17,209
Entry not found
Jeevesh8/6ep_bert_ft_cola-55
32a4567119d02d8fabd79018600f2caeb52d4c03
2022-05-14T13:30:40.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/6ep_bert_ft_cola-55
5
null
transformers
17,210
Entry not found
Jeevesh8/6ep_bert_ft_cola-61
5d158607889396dd217cb6ed7ae44d9a6ed66835
2022-05-14T13:40:40.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/6ep_bert_ft_cola-61
5
null
transformers
17,211
Entry not found
Jeevesh8/6ep_bert_ft_cola-62
ded455f6dce2350ee480a036237db5ccd81fe8e1
2022-05-14T13:42:20.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/6ep_bert_ft_cola-62
5
null
transformers
17,212
Entry not found
Jeevesh8/6ep_bert_ft_cola-63
b5e6de8e63ee8072de1e80339689cf15a0e7433f
2022-05-14T13:43:59.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/6ep_bert_ft_cola-63
5
null
transformers
17,213
Entry not found
Jeevesh8/6ep_bert_ft_cola-69
8ff359a94514ccc7e8e8d8b1f9e0df24ba455071
2022-05-14T13:53:59.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/6ep_bert_ft_cola-69
5
null
transformers
17,214
Entry not found
Jeevesh8/6ep_bert_ft_cola-75
9052450577e0f2c59fcc517677ee76efb73ada16
2022-05-14T14:04:01.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/6ep_bert_ft_cola-75
5
null
transformers
17,215
Entry not found
Jeevesh8/6ep_bert_ft_cola-76
6e37f5d5af4afdb00a53a70add00efeda9ead055
2022-05-14T14:05:44.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/6ep_bert_ft_cola-76
5
null
transformers
17,216
Entry not found
Jeevesh8/6ep_bert_ft_cola-77
50f18eadcb1276b560de31f6d52e56d7f3678c07
2022-05-14T14:07:26.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/6ep_bert_ft_cola-77
5
null
transformers
17,217
Entry not found
Jeevesh8/6ep_bert_ft_cola-80
f6fb9282f95ab0330d6596b353a8b31343543b8d
2022-05-14T14:12:26.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/6ep_bert_ft_cola-80
5
null
transformers
17,218
Entry not found
Jeevesh8/6ep_bert_ft_cola-81
50f5f4504f7afd34c710ef5774a1d2d654dc6f70
2022-05-14T14:14:06.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/6ep_bert_ft_cola-81
5
null
transformers
17,219
Entry not found
Jeevesh8/6ep_bert_ft_cola-87
e3c0432b304af5e833e880758478c5042426fd5f
2022-05-14T14:24:07.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/6ep_bert_ft_cola-87
5
null
transformers
17,220
Entry not found
akreal/mbart-large-50-finetuned-slurp
0ea0a1a086f3416db016bc134b698088ffed3e5c
2022-05-14T16:36:01.000Z
[ "pytorch", "mbart", "text2text-generation", "en", "dataset:SLURP", "transformers", "mbart-50", "license:apache-2.0", "autotrain_compatible" ]
text2text-generation
false
akreal
null
akreal/mbart-large-50-finetuned-slurp
5
null
transformers
17,221
---
language:
- en
tags:
- mbart-50
license: apache-2.0
datasets:
- SLURP
metrics:
- accuracy
- slu-f1
---

This model is the `mbart-large-50-many-to-many-mmt` model fine-tuned on the text part of the [SLURP](https://github.com/pswietojanski/slurp) spoken language understanding dataset. The scores on the test set are 85.68% intent accuracy and 79.00% SLU-F1.
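A hedged generation sketch (not from the card; the tokenizer class, `src_lang` setting, and example utterance are assumptions, and the card does not document the target output format, which for SLURP would be intents and slots):

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model_id = "akreal/mbart-large-50-finetuned-slurp"
tok = MBart50TokenizerFast.from_pretrained(model_id, src_lang="en_XX")
model = MBartForConditionalGeneration.from_pretrained(model_id)

# Feed a SLURP-style utterance and decode whatever structured string the model emits.
inputs = tok("wake me up at seven in the morning", return_tensors="pt")
out = model.generate(**inputs, max_length=64)
print(tok.batch_decode(out, skip_special_tokens=True)[0])
```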
danieleV9H/hubert-base-libri-clean-ft100h
0d9392fb07685e8b5a459601ae9a394d0b85974a
2022-05-15T05:47:23.000Z
[ "pytorch", "tensorboard", "hubert", "automatic-speech-recognition", "dataset:librispeech_asr", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
danieleV9H
null
danieleV9H/hubert-base-libri-clean-ft100h
5
null
transformers
17,222
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: hubert-base-libri-clean-ft100h
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hubert-base-libri-clean-ft100h

This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1324
- Wer: 0.1597

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.14 | 250 | 4.1508 | 1.0000 |
| 4.4345 | 0.28 | 500 | 3.8766 | 1.0000 |
| 4.4345 | 0.42 | 750 | 3.4376 | 1.0000 |
| 2.8475 | 0.56 | 1000 | 2.7380 | 1.0 |
| 2.8475 | 0.7 | 1250 | 0.8803 | 0.6766 |
| 1.1877 | 0.84 | 1500 | 0.5671 | 0.5102 |
| 1.1877 | 0.98 | 1750 | 0.4537 | 0.4388 |
| 0.5802 | 1.12 | 2000 | 0.3566 | 0.3740 |
| 0.5802 | 1.26 | 2250 | 0.2925 | 0.3209 |
| 0.4301 | 1.4 | 2500 | 0.2613 | 0.2952 |
| 0.4301 | 1.54 | 2750 | 0.2363 | 0.2715 |
| 0.3591 | 1.68 | 3000 | 0.2155 | 0.2552 |
| 0.3591 | 1.82 | 3250 | 0.2062 | 0.2418 |
| 0.3015 | 1.96 | 3500 | 0.1951 | 0.2308 |
| 0.3015 | 2.1 | 3750 | 0.1842 | 0.2207 |
| 0.2698 | 2.24 | 4000 | 0.1900 | 0.2112 |
| 0.2698 | 2.38 | 4250 | 0.1745 | 0.2048 |
| 0.2561 | 2.52 | 4500 | 0.1718 | 0.2040 |
| 0.2561 | 2.66 | 4750 | 0.1625 | 0.1939 |
| 0.2348 | 2.8 | 5000 | 0.1568 | 0.1867 |
| 0.2348 | 2.94 | 5250 | 0.1517 | 0.1855 |
| 0.2278 | 3.08 | 5500 | 0.1501 | 0.1807 |
| 0.2278 | 3.22 | 5750 | 0.1445 | 0.1772 |
| 0.2166 | 3.36 | 6000 | 0.1422 | 0.1752 |
| 0.2166 | 3.5 | 6250 | 0.1418 | 0.1741 |
| 0.2017 | 3.64 | 6500 | 0.1404 | 0.1695 |
| 0.2017 | 3.78 | 6750 | 0.1356 | 0.1674 |
| 0.1922 | 3.92 | 7000 | 0.1350 | 0.1688 |
| 0.1922 | 4.06 | 7250 | 0.1346 | 0.1638 |
| 0.1979 | 4.2 | 7500 | 0.1359 | 0.1638 |
| 0.1979 | 4.34 | 7750 | 0.1336 | 0.1612 |
| 0.1836 | 4.48 | 8000 | 0.1324 | 0.1613 |
| 0.1836 | 4.62 | 8250 | 0.1320 | 0.1606 |
| 0.1891 | 4.76 | 8500 | 0.1325 | 0.1598 |
| 0.1891 | 4.9 | 8750 | 0.1324 | 0.1597 |

### Framework versions

- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
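The card reports word error rate (WER) at each checkpoint; WER counts word-level substitutions, insertions, and deletions against the reference transcript. A small sketch using the `jiwer` library (an assumption; the card does not say how its WER was computed):

```python
import jiwer

reference = "the cat sat on the mat"
hypothesis = "the cat sat on a mat"
# One substitution out of six reference words -> WER of about 0.167.
print(jiwer.wer(reference, hypothesis))
```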
anuj55/distilbert-base-uncased-finetuned-mrpc
b4704d564ba0f2d476fade16af9904ead98ae498
2022-05-15T10:45:54.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:glue", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
anuj55
null
anuj55/distilbert-base-uncased-finetuned-mrpc
5
null
transformers
17,223
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-mrpc
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: glue
      type: glue
      args: mrpc
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8480392156862745
    - name: F1
      type: f1
      value: 0.8945578231292517
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-mrpc

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6236
- Accuracy: 0.8480
- F1: 0.8946

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 230 | 0.4371 | 0.8137 | 0.8746 |
| No log | 2.0 | 460 | 0.4117 | 0.8431 | 0.8940 |
| 0.4509 | 3.0 | 690 | 0.3943 | 0.8431 | 0.8908 |
| 0.4509 | 4.0 | 920 | 0.5686 | 0.8382 | 0.8893 |
| 0.1915 | 5.0 | 1150 | 0.6236 | 0.8480 | 0.8946 |

### Framework versions

- Transformers 4.19.1
- Pytorch 1.8.1+cu102
- Datasets 1.18.4
- Tokenizers 0.12.1
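MRPC is a paraphrase task, so the two sentences go into the tokenizer as a pair. A minimal sketch (not from the card; the example sentences are invented):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "anuj55/distilbert-base-uncased-finetuned-mrpc"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Encode the sentence pair jointly, the way GLUE/MRPC examples are fed.
inputs = tok("The storm hit the coast overnight.",
             "Overnight, the coast was hit by the storm.",
             return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(-1).item()
print(model.config.id2label[pred])
```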
aliosm/sha3bor-generator-aragpt2-base
3f2864cebe1c7fd8b4bf9f23f5dcfce38346b9d3
2022-05-28T09:17:13.000Z
[ "pytorch", "gpt2", "text-generation", "ar", "transformers", "license:mit" ]
text-generation
false
aliosm
null
aliosm/sha3bor-generator-aragpt2-base
5
null
transformers
17,224
---
language: ar
license: mit
widget:
- text: "حبيبي"
  example_title: "حبيبي"
- text: "يا"
  example_title: "يا"
- text: "رسول الله"
  example_title: "رسول الله"
---
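A generation sketch using one of the widget prompts above (the pipeline call itself is an assumption; the card documents only the widget examples):

```python
from transformers import pipeline

# Generate Arabic poetry from a short prompt.
generator = pipeline("text-generation", model="aliosm/sha3bor-generator-aragpt2-base")
print(generator("حبيبي", max_new_tokens=40)[0]["generated_text"])
```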
anuj55/deberta-v3-base-finetuned-polifact
10d90263444a9aed04c577afb0ec2bf39f5d9d4a
2022-05-15T17:47:32.000Z
[ "pytorch", "tensorboard", "deberta-v2", "text-classification", "transformers" ]
text-classification
false
anuj55
null
anuj55/deberta-v3-base-finetuned-polifact
5
null
transformers
17,225
Entry not found
aliosm/sha3bor-metre-detector-arabertv2-base
caf3e348c74cfe4eb79588457bc419c421dafcdd
2022-05-28T09:33:59.000Z
[ "pytorch", "bert", "text-classification", "ar", "transformers", "license:mit" ]
text-classification
false
aliosm
null
aliosm/sha3bor-metre-detector-arabertv2-base
5
null
transformers
17,226
---
language: ar
license: mit
widget:
- text: "إن العيون التي في طرفها حور [شطر] قتلننا ثم لم يحيين قتلانا"
- text: "إذا ما فعلت الخير ضوعف شرهم [شطر] وكل إناء بالذي فيه ينضح"
- text: "واحر قلباه ممن قلبه شبم [شطر] ومن بجسمي وحالي عنده سقم"
---
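Following the widget examples above, the two hemistichs of a verse are joined by the "[شطر]" separator. A classification sketch (the pipeline call is an assumption, not documented in the card):

```python
from transformers import pipeline

# Detect the metre of a verse; input format mirrors the widget examples.
detector = pipeline("text-classification",
                    model="aliosm/sha3bor-metre-detector-arabertv2-base")
print(detector("إن العيون التي في طرفها حور [شطر] قتلننا ثم لم يحيين قتلانا"))
```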
IMSyPP/hate_speech_targets_nl
1df0dbd02175c0d93f598af9e0a515f2d82713d2
2022-05-16T04:49:35.000Z
[ "pytorch", "distilbert", "text-classification", "nl", "transformers", "license:mit" ]
text-classification
false
IMSyPP
null
IMSyPP/hate_speech_targets_nl
5
null
transformers
17,227
---
language:
- nl
license: mit
---

# Hate Speech Target Classifier for Social Media Content in Dutch

A monolingual model for hate speech target classification of social media content in Dutch. The model was trained on 20000 social media posts (YouTube, Twitter, Facebook) and tested on an independent test set of 2000 posts. It is based on the pre-trained language model [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased).

## Tokenizer

During training the text was preprocessed using the DistilBERT tokenizer. We suggest using the same tokenizer for inference.

## Model output

The model classifies each input into one of twelve distinct classes:
* 0 - HOMOPHOBIA
* 1 - OTHER
* 2 - RELIGION
* 3 - ANTISEMITISM
* 4 - IDEOLOGY
* 5 - MIGRANTS
* 6 - POLITICS
* 7 - RACISM
* 8 - MEDIA
* 9 - ISLAMOPHOBIA
* 10 - INDIVIDUAL
* 11 - SEXISM
huawei-noah/AutoTinyBERT-KD-S1
b25716ef98cb0ca8bdb2c2885ecda3e56865facb
2022-05-16T15:09:32.000Z
[ "pytorch", "transformers", "license:other" ]
null
false
huawei-noah
null
huawei-noah/AutoTinyBERT-KD-S1
5
null
transformers
17,228
---
license: other
---

Pre-trained language models (PLMs) have achieved great success in natural language processing. Most PLMs follow the default setting of architecture hyper-parameters (e.g., the hidden dimension is a quarter of the intermediate dimension in feed-forward sub-networks) in BERT. In this paper, we adopt one-shot Neural Architecture Search (NAS) to automatically search architecture hyper-parameters for efficient pre-trained language models (at least 6x faster than BERT-base). AutoTinyBERT provides a model zoo that can meet different latency requirements.
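A heavily hedged loading sketch: the card does not document usage, so treating the checkpoint as a BERT-style encoder for feature extraction is an assumption, and whether `AutoModel` resolves this repo's searched architecture config is not confirmed here.

```python
from transformers import AutoModel, AutoTokenizer

model_id = "huawei-noah/AutoTinyBERT-KD-S1"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

# Encode a sentence and inspect the hidden-state shape of the tiny encoder.
outputs = model(**tok("Efficient pre-trained language models.", return_tensors="pt"))
print(outputs.last_hidden_state.shape)
```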
huawei-noah/AutoTinyBERT-KD-S2
45226d3574f75dad94550cb4551ea2421ba2b66c
2022-05-16T15:11:57.000Z
[ "pytorch", "transformers" ]
null
false
huawei-noah
null
huawei-noah/AutoTinyBERT-KD-S2
5
null
transformers
17,229
Pre-trained language models (PLMs) have achieved great success in natural language processing. Most PLMs follow the default setting of architecture hyper-parameters (e.g., the hidden dimension is a quarter of the intermediate dimension in feed-forward sub-networks) in BERT. In this paper, we adopt one-shot Neural Architecture Search (NAS) to automatically search architecture hyper-parameters for efficient pre-trained language models (at least 6x faster than BERT-base). AutoTinyBERT provides a model zoo that can meet different latency requirements.
ali-issa/4-wav2vec2-arabiizi-gpu-colab-more-dataset12
0d78ae92504aaa82bd8f233eb9e0e89873193fb4
2022-05-17T02:28:13.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
ali-issa
null
ali-issa/4-wav2vec2-arabiizi-gpu-colab-more-dataset12
5
null
transformers
17,230
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-arabiizi-gpu-colab-more-dataset12
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-arabiizi-gpu-colab-more-dataset12

This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6032
- Wer: 0.4112

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 6
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 40
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.842 | 2.41 | 400 | 2.8828 | 1.0 |
| 1.5726 | 4.82 | 800 | 0.6972 | 0.6254 |
| 0.7292 | 7.23 | 1200 | 0.5717 | 0.5280 |
| 0.5706 | 9.64 | 1600 | 0.5367 | 0.4795 |
| 0.483 | 12.05 | 2000 | 0.5773 | 0.4677 |
| 0.418 | 14.46 | 2400 | 0.5391 | 0.4537 |
| 0.3823 | 16.86 | 2800 | 0.6134 | 0.4386 |
| 0.3489 | 19.28 | 3200 | 0.5776 | 0.4360 |
| 0.3227 | 21.68 | 3600 | 0.5890 | 0.4489 |
| 0.2999 | 24.1 | 4000 | 0.5882 | 0.4209 |
| 0.2841 | 26.5 | 4400 | 0.5843 | 0.4150 |
| 0.2729 | 28.91 | 4800 | 0.5793 | 0.4279 |
| 0.2603 | 31.32 | 5200 | 0.6003 | 0.4209 |
| 0.2481 | 33.73 | 5600 | 0.6122 | 0.4128 |
| 0.2405 | 36.14 | 6000 | 0.6137 | 0.4177 |
| 0.24 | 38.55 | 6400 | 0.6032 | 0.4112 |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
PSW/cnndm_0.5percent_randomsimdel_seed42
5212e545560fc4315d2a9e468e8a097ab093d550
2022-05-17T03:26:23.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/cnndm_0.5percent_randomsimdel_seed42
5
null
transformers
17,231
Entry not found
PSW/cnndm_0.5percent_minsimins_seed42
b13d030e6f552ab8f51e1f9854d38752485c0680
2022-05-17T06:59:03.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/cnndm_0.5percent_minsimins_seed42
5
null
transformers
17,232
Entry not found
Danni/distilbert-base-uncased-finetuned-dbpedia-0517
c60e5e3865195d4c90c8b65b4d4b66a33a2113ef
2022-05-17T07:39:35.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers" ]
text-classification
false
Danni
null
Danni/distilbert-base-uncased-finetuned-dbpedia-0517
5
null
transformers
17,233
Entry not found
nielsr/groupvit-gcc-yfcc-old
3ffdb7c8166864c2fab6534a3c1a871bacdd7805
2022-06-08T17:53:52.000Z
[ "pytorch", "groupvit", "feature-extraction", "transformers", "license:apache-2.0" ]
feature-extraction
false
nielsr
null
nielsr/groupvit-gcc-yfcc-old
5
null
transformers
17,234
---
license: apache-2.0
---
TrevorAshby/WoW-1hr
aed14bcbe0ccde064fcf7b3f5b0123f3d2dfc24d
2022-05-18T02:31:20.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
TrevorAshby
null
TrevorAshby/WoW-1hr
5
null
transformers
17,235
Entry not found
taskydata/DeBERTa-v3-512
b9fe6e9e3f46377c2a10ecd9b87f373f1d846197
2022-05-30T08:37:18.000Z
[ "pytorch", "deberta-v2", "text-classification", "transformers", "license:apache-2.0" ]
text-classification
false
taskydata
null
taskydata/DeBERTa-v3-512
5
null
transformers
17,236
---
license: apache-2.0
---

**Hyperparameters:**

- learning_rate: 2e-5
- weight_decay: 0.01
- per_device_train_batch_size: 8
- per_device_eval_batch_size: 8
- gradient_accumulation_steps: 1
- eval_steps: 6000
- max_length: 512
- num_epochs: 2

**Dataset version:**

- "craffel/tasky_or_not", "10xp3_10xc4", "15f88c8"

**Checkpoint:**

- 48000 steps

**Results on Validation set:**

| Step | Training Loss | Validation Loss | Accuracy | Precision | Recall | F1 |
|-------|---------------|-----------------|----------|-----------|----------|----------|
| 6000 | 0.031900 | 0.163412 | 0.982194 | 0.999211 | 0.980462 | 0.989748 |
| 12000 | 0.014700 | 0.106132 | 0.976666 | 0.999639 | 0.973733 | 0.986516 |
| 18000 | 0.010700 | 0.043012 | 0.995743 | 0.999223 | 0.995918 | 0.997568 |
| 24000 | 0.007400 | 0.095047 | 0.984724 | 0.999857 | 0.982714 | 0.991211 |
| 30000 | 0.004100 | 0.087274 | 0.990400 | 0.999829 | 0.989217 | 0.994495 |
| 36000 | 0.003100 | 0.162909 | 0.981972 | 1.000000 | 0.979434 | 0.989610 |
| 42000 | 0.002200 | 0.148721 | 0.980454 | 0.999986 | 0.977717 | 0.988726 |
| 48000 | 0.001000 | 0.094455 | 0.990437 | 0.999943 | 0.989147 | 0.994516 |
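The hyperparameter list above maps naturally onto `transformers.TrainingArguments`. A hedged partial reconstruction (model, dataset, and metric code are omitted, and `output_dir` is a placeholder, not the authors' actual setup):

```python
from transformers import TrainingArguments

# Mirror the listed hyperparameters; everything else is left at defaults.
args = TrainingArguments(
    output_dir="deberta-v3-512-tasky",
    learning_rate=2e-5,
    weight_decay=0.01,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=1,
    evaluation_strategy="steps",
    eval_steps=6000,
    num_train_epochs=2,
)
```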
khanhnguyen/wav2vec2-base-librispeech-demo-colab
24e443f6cac7ab3b2ee529d969b1787812924a8d
2022-05-19T03:39:03.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
khanhnguyen
null
khanhnguyen/wav2vec2-base-librispeech-demo-colab
5
null
transformers
17,237
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-librispeech-demo-colab
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-base-librispeech-demo-colab

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.10.3
badrou1/test_rex_model
f879b86153dd4e4138a4de25fd0cbb4e633e8b02
2022-05-18T15:58:28.000Z
[ "pytorch", "bert", "text-classification", "transformers", "license:other" ]
text-classification
false
badrou1
null
badrou1/test_rex_model
5
null
transformers
17,238
---
license: other
---
Jeevesh8/512seq_len_6ep_bert_ft_cola-3
3fa16aa5a11c8f167c5d5ed0bfe81aee09b0ebdf
2022-05-18T18:23:21.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/512seq_len_6ep_bert_ft_cola-3
5
null
transformers
17,239
Entry not found
Jeevesh8/512seq_len_6ep_bert_ft_cola-72
fe56dd21a8736c7c57d2e58565ed76ed42656f13
2022-05-18T18:56:50.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/512seq_len_6ep_bert_ft_cola-72
5
null
transformers
17,240
Entry not found
Jeevesh8/512seq_len_6ep_bert_ft_cola-73
5651f1bb3ff3e26338418e8b77c7a12d1f158b68
2022-05-18T18:58:39.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/512seq_len_6ep_bert_ft_cola-73
5
null
transformers
17,241
Entry not found
Jeevesh8/512seq_len_6ep_bert_ft_cola-74
a072ba1e7ee8755c05e024d4615b2b1f14978941
2022-05-18T19:00:26.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/512seq_len_6ep_bert_ft_cola-74
5
null
transformers
17,242
Entry not found
Jeevesh8/512seq_len_6ep_bert_ft_cola-76
42babe37ad2fe8440c5c2d0a2646fdd3f8b18838
2022-05-18T19:04:06.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/512seq_len_6ep_bert_ft_cola-76
5
null
transformers
17,243
Entry not found
Jeevesh8/512seq_len_6ep_bert_ft_cola-78
979aaf66866963651034a12572c9413c0b8ee37d
2022-05-18T19:07:43.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/512seq_len_6ep_bert_ft_cola-78
5
null
transformers
17,244
Entry not found
Jeevesh8/512seq_len_6ep_bert_ft_cola-79
74479406f48e6ac5f09f92ea570dd5f76996f9f6
2022-05-18T19:09:34.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/512seq_len_6ep_bert_ft_cola-79
5
null
transformers
17,245
Entry not found
Jeevesh8/512seq_len_6ep_bert_ft_cola-80
4a79588c988567aca9ebb567c912b34054906f8d
2022-05-18T19:11:23.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/512seq_len_6ep_bert_ft_cola-80
5
null
transformers
17,246
Entry not found
Jeevesh8/512seq_len_6ep_bert_ft_cola-85
11d7ca0404f55cced6d0c92e204283b81e34c70f
2022-05-18T19:20:26.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/512seq_len_6ep_bert_ft_cola-85
5
null
transformers
17,247
Entry not found
Jeevesh8/512seq_len_6ep_bert_ft_cola-86
af8b827d91a065900c2b3922d22cf699a411d168
2022-05-18T19:22:13.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/512seq_len_6ep_bert_ft_cola-86
5
null
transformers
17,248
Entry not found
Jeevesh8/512seq_len_6ep_bert_ft_cola-99
6824b9615bf5c730857605e7903047c96a93ac49
2022-05-18T19:39:33.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/512seq_len_6ep_bert_ft_cola-99
5
null
transformers
17,249
Entry not found
hungchiayu/distilbert-base-uncased-finetuned-emotion
32ecbb4dc25754a8903ff5c16fbf85928e6402ea
2022-05-19T16:59:14.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers" ]
text-classification
false
hungchiayu
null
hungchiayu/distilbert-base-uncased-finetuned-emotion
5
null
transformers
17,250
Entry not found
PriaPillai/distilbert-base-uncased-finetuned-query
eb035a21bfc832a0a3c351ba788048fc316e0216
2022-06-01T17:44:17.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
PriaPillai
null
PriaPillai/distilbert-base-uncased-finetuned-query
5
null
transformers
17,251
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-query
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-query

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3668
- Accuracy: 0.8936
- F1: 0.8924

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6511 | 1.0 | 30 | 0.5878 | 0.7234 | 0.6985 |
| 0.499 | 2.0 | 60 | 0.4520 | 0.8723 | 0.8683 |
| 0.3169 | 3.0 | 90 | 0.3668 | 0.8936 | 0.8924 |

### Framework versions

- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
mlnotes/ecnet_seed_42
ac14cdfe919eaf8ca36eee82a453632ea5ac0e56
2022-05-19T20:01:05.000Z
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
false
mlnotes
null
mlnotes/ecnet_seed_42
5
null
transformers
17,252
Entry not found
tamarab/bert-emotion
22be9f5e4cca71161b273fc7be79650ee48045a6
2022-05-20T19:12:14.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:tweet_eval", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
tamarab
null
tamarab/bert-emotion
5
null
transformers
17,253
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- precision
- recall
model-index:
- name: bert-emotion
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: tweet_eval
      type: tweet_eval
      args: emotion
    metrics:
    - name: Precision
      type: precision
      value: 0.7462955517135084
    - name: Recall
      type: recall
      value: 0.7095634380533169
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-emotion

This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1347
- Precision: 0.7463
- Recall: 0.7096
- Fscore: 0.7209

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Fscore |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| 0.8385 | 1.0 | 815 | 0.8366 | 0.7865 | 0.5968 | 0.6014 |
| 0.5451 | 2.0 | 1630 | 0.9301 | 0.7301 | 0.6826 | 0.6947 |
| 0.2447 | 3.0 | 2445 | 1.1347 | 0.7463 | 0.7096 | 0.7209 |

### Framework versions

- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
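A sketch classifying a real example from the evaluation dataset (not from the card; it assumes the `datasets` library and the standard text-classification pipeline):

```python
from datasets import load_dataset
from transformers import pipeline

# Pull one tweet from the tweet_eval "emotion" test split and classify it.
ds = load_dataset("tweet_eval", "emotion", split="test")
clf = pipeline("text-classification", model="tamarab/bert-emotion")
print(ds[0]["text"])
print(clf(ds[0]["text"]))
```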
anas-awadalla/albert-xxl-v2-finetuned-squad
ddcda8687ada42f00e4ed1b132f56b06bb381f1e
2022-05-21T08:02:10.000Z
[ "pytorch", "albert", "question-answering", "dataset:squad", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
question-answering
false
anas-awadalla
null
anas-awadalla/albert-xxl-v2-finetuned-squad
5
1
transformers
17,254
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: albert-xxl-v2-finetuned-squad
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# albert-xxl-v2-finetuned-squad

This model is a fine-tuned version of [albert-xxlarge-v2](https://huggingface.co/albert-xxlarge-v2) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0

### Training results

### Framework versions

- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
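An extractive question-answering sketch (not from the card; the question and context are invented, and the standard `transformers` question-answering pipeline is assumed):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="anas-awadalla/albert-xxl-v2-finetuned-squad")
result = qa(question="What dataset was the model fine-tuned on?",
            context="This ALBERT xxlarge-v2 model was fine-tuned on the SQuAD dataset.")
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```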
connectivity/feather_berts_14
21dda72ecce9b507acd48c987cf751f315401788
2022-05-21T14:27:52.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/feather_berts_14
5
null
transformers
17,255
Entry not found
connectivity/feather_berts_17
458e3af6fe8ee0585b5a5c6511be12c96b7ecdbc
2022-05-21T14:27:58.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/feather_berts_17
5
null
transformers
17,256
Entry not found
connectivity/feather_berts_18
8bfae38a21fc6397933c36cc0e08bbbb0b5cfe24
2022-05-21T14:28:00.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/feather_berts_18
5
null
transformers
17,257
Entry not found
connectivity/feather_berts_19
7ccf34a23dc444b99e84072fcb6f3bb5b65b6bb6
2022-05-21T14:28:01.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/feather_berts_19
5
null
transformers
17,258
Entry not found
connectivity/feather_berts_20
9fd6a642e74de5e7bf325251426fa6e0f0fbb07c
2022-05-21T14:28:04.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/feather_berts_20
5
null
transformers
17,259
Entry not found
connectivity/feather_berts_21
6de8226ab99577fd8ea77dbeda4efc6f85606421
2022-05-21T14:28:05.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/feather_berts_21
5
null
transformers
17,260
Entry not found
connectivity/feather_berts_22
2419db36fd64c8c1e181fc2cadecb9fdcf6e7090
2022-05-21T14:28:09.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/feather_berts_22
5
null
transformers
17,261
Entry not found
connectivity/feather_berts_24
e0b6387a5c05496a401b0d7a9dfad6c09edeb3e4
2022-05-21T14:28:15.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/feather_berts_24
5
null
transformers
17,262
Entry not found
connectivity/feather_berts_26
7c6b79baad927c399680ab83347a6e8439cb1211
2022-05-21T14:28:18.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/feather_berts_26
5
null
transformers
17,263
Entry not found
connectivity/feather_berts_27
4bf9fc7a99d9f8530e32832f81af6f6c53622be2
2022-05-21T14:28:21.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/feather_berts_27
5
null
transformers
17,264
Entry not found
connectivity/feather_berts_29
47f3449823a1fe51674ff6b59da408d4889e4b3d
2022-05-21T14:28:25.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/feather_berts_29
5
null
transformers
17,265
Entry not found
connectivity/feather_berts_37
529f4baeb816fe1498135bd3019453df1aff5ba6
2022-05-21T14:28:39.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/feather_berts_37
5
null
transformers
17,266
Entry not found
connectivity/feather_berts_38
959b04bf7851a7d6fed225b72e4bc4273b6eb385
2022-05-21T14:28:41.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/feather_berts_38
5
null
transformers
17,267
Entry not found
connectivity/feather_berts_42
99b0fb3f65443d1998ad075deda8262bae4a13aa
2022-05-21T14:28:50.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/feather_berts_42
5
null
transformers
17,268
Entry not found
connectivity/feather_berts_45
a81693fa5793cdfa528d0d4268b7c3c2790e1f84
2022-05-21T14:28:55.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/feather_berts_45
5
null
transformers
17,269
Entry not found
connectivity/feather_berts_61
3b353ee28648f0805d07dd92e056d265805fc057
2022-05-21T14:29:23.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/feather_berts_61
5
null
transformers
17,270
Entry not found
connectivity/feather_berts_64
11aad5eeef57a57fed7a5c95e453883dbaa7749a
2022-05-21T14:29:28.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/feather_berts_64
5
null
transformers
17,271
Entry not found
connectivity/feather_berts_66
44ea929e39ea206be3f62e488fbb86133e2677cb
2022-05-21T14:29:33.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/feather_berts_66
5
null
transformers
17,272
Entry not found
connectivity/feather_berts_67
003b90967d16ce444493a0aee048577fcf4fdb78
2022-05-21T14:29:35.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/feather_berts_67
5
null
transformers
17,273
Entry not found
connectivity/feather_berts_68
670615dc37bc30f2ab602257b859943f356d8af8
2022-05-21T14:29:38.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/feather_berts_68
5
null
transformers
17,274
Entry not found
connectivity/feather_berts_69
c17ba78568cd8e885db40cf7c81ffaa92c8a4b5f
2022-05-21T14:29:40.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/feather_berts_69
5
null
transformers
17,275
Entry not found
connectivity/feather_berts_70
39c17ffcdfc94e012ead07396e18ca296323510c
2022-05-21T14:29:42.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/feather_berts_70
5
null
transformers
17,276
Entry not found
connectivity/feather_berts_73
e1dc1ed8c82aaa525f06f036ba2184cd29befb4f
2022-05-21T14:29:47.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/feather_berts_73
5
null
transformers
17,277
Entry not found
connectivity/feather_berts_74
e3fffa7e65f591ae56bd0fc33fc33aa0615a3a31
2022-05-21T14:30:10.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/feather_berts_74
5
null
transformers
17,278
Entry not found
connectivity/feather_berts_75
3f11d1c08b8c7350aa2ba7133db8b1561286ac55
2022-05-21T14:30:11.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/feather_berts_75
5
null
transformers
17,279
Entry not found
connectivity/feather_berts_77
5c0f2d52f1866c74e4b5416e59c21bd786d111b9
2022-05-21T14:30:16.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/feather_berts_77
5
null
transformers
17,280
Entry not found
connectivity/feather_berts_78
02359a1671342ec5944a4ecffb9fb603f517279b
2022-05-21T14:30:18.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/feather_berts_78
5
null
transformers
17,281
Entry not found
connectivity/feather_berts_79
00097a832142c869cd1a387aff65a924cf89f2e1
2022-05-21T14:30:19.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/feather_berts_79
5
null
transformers
17,282
Entry not found
connectivity/feather_berts_82
eae7f31d8a4c5b15e582ee1d85a81dceeabbbe9f
2022-05-21T14:30:26.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/feather_berts_82
5
null
transformers
17,283
Entry not found
connectivity/feather_berts_85
adbbb9d835befd5eb7fe8d42b3275d4f80cbf03c
2022-05-21T14:30:35.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/feather_berts_85
5
null
transformers
17,284
Entry not found
connectivity/feather_berts_86
53efbd8d385adcd7e8624944e61fcaf5ed5cbdf9
2022-05-21T14:30:37.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/feather_berts_86
5
null
transformers
17,285
Entry not found
connectivity/feather_berts_92
4fbcee24525a6fe9a106869a676fcbd6fb613f95
2022-05-21T14:30:53.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/feather_berts_92
5
null
transformers
17,286
Entry not found
connectivity/feather_berts_94
84fd220af0574b976c651a49ca82403a1bed4bba
2022-05-21T14:30:56.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/feather_berts_94
5
null
transformers
17,287
Entry not found
connectivity/bert_ft_qqp-8
d48dc43fe738a205bdd2a16324a3062743c16937
2022-05-21T16:31:36.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/bert_ft_qqp-8
5
null
transformers
17,288
Entry not found
connectivity/bert_ft_qqp-12
d8a48e8116604ee8186890934b69c0701cf2ef06
2022-05-21T16:31:54.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/bert_ft_qqp-12
5
null
transformers
17,289
Entry not found
connectivity/bert_ft_qqp-13
d2c9fe28406ffd164843a2f062480bac92eba3fb
2022-05-21T16:32:00.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/bert_ft_qqp-13
5
null
transformers
17,290
Entry not found
connectivity/bert_ft_qqp-14
ebed33b45fb33b6db8723c2eeb16c8a0dd347a4c
2022-05-21T16:32:06.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/bert_ft_qqp-14
5
null
transformers
17,291
Entry not found
connectivity/bert_ft_qqp-15
deeb6c5cb4ea1f7f878697f766ecd7be98a73ccd
2022-05-21T16:32:11.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/bert_ft_qqp-15
5
null
transformers
17,292
Entry not found
connectivity/bert_ft_qqp-23
781cf0e05f989f03213614a470024236eb60d0a5
2022-05-21T16:32:47.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/bert_ft_qqp-23
5
null
transformers
17,293
Entry not found
connectivity/bert_ft_qqp-24
a0e8b13a69e1fd1c668a0ee14306812a7c67d4bb
2022-05-21T16:32:50.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/bert_ft_qqp-24
5
null
transformers
17,294
Entry not found
connectivity/bert_ft_qqp-28
55ea30fe3fd996c86f6a43741b4cf12bd2250dd8
2022-05-21T16:33:10.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/bert_ft_qqp-28
5
null
transformers
17,295
Entry not found
connectivity/bert_ft_qqp-31
5fbc1298cae2e3994d75b7590a1824c6c21c1d02
2022-05-21T16:33:25.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/bert_ft_qqp-31
5
null
transformers
17,296
Entry not found
connectivity/bert_ft_qqp-35
48b3244e97bc2650efd1885f704c0a3a9d5ea73f
2022-05-21T16:33:40.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/bert_ft_qqp-35
5
null
transformers
17,297
Entry not found
laurens88/finetuning-crypto-tweet-sentiment-test2
b8d13f4bbcb03c7e040f003e226c76529a7a99af
2022-05-21T13:19:41.000Z
[ "pytorch", "tensorboard", "roberta", "text-classification", "transformers", "generated_from_trainer", "model-index" ]
text-classification
false
laurens88
null
laurens88/finetuning-crypto-tweet-sentiment-test2
5
null
transformers
17,298
---
tags:
- generated_from_trainer
model-index:
- name: finetuning-crypto-tweet-sentiment-test2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuning-crypto-tweet-sentiment-test2

This model is a fine-tuned version of [finiteautomata/bertweet-base-sentiment-analysis](https://huggingface.co/finiteautomata/bertweet-base-sentiment-analysis) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

### Training results

### Framework versions

- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1
connectivity/bert_ft_qqp-36
3fc3ad4f6eae932352e33f1eb6ab5b3f883a2a5d
2022-05-21T16:33:43.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/bert_ft_qqp-36
5
null
transformers
17,299
Entry not found