Dataset schema (each record below lists these fields, one per line, in this order):

| Column | Type | Range / values |
|---|---|---|
| modelId | string | lengths 4–112 |
| sha | string | length 40 |
| lastModified | string | length 24 |
| tags | sequence | |
| pipeline_tag | string | 29 classes |
| private | bool | 1 class |
| author | string | lengths 2–38 |
| config | null | |
| id | string | lengths 4–112 |
| downloads | float64 | 0–36.8M |
| likes | float64 | 0–712 |
| library_name | string | 17 classes |
| `__index_level_0__` | int64 | 0–38.5k |
| readme | string | lengths 0–186k |
yoshitomo-matsubara/bert-base-uncased-mrpc_from_bert-large-uncased-mrpc
6b609d356394e9dbeaba75cfdd0f368620682565
2021-06-03T05:03:57.000Z
[ "pytorch", "bert", "text-classification", "en", "dataset:mrpc", "transformers", "mrpc", "glue", "kd", "torchdistill", "license:apache-2.0" ]
text-classification
false
yoshitomo-matsubara
null
yoshitomo-matsubara/bert-base-uncased-mrpc_from_bert-large-uncased-mrpc
4
null
transformers
19,000
--- language: en tags: - bert - mrpc - glue - kd - torchdistill license: apache-2.0 datasets: - mrpc metrics: - f1 - accuracy --- `bert-base-uncased` fine-tuned on MRPC dataset, using fine-tuned `bert-large-uncased` as a teacher model, [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_kd_and_submission.ipynb) for knowledge distillation. The training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/mrpc/kd/bert_base_uncased_from_bert_large_uncased.yaml). I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **78.9**.
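The distilled-GLUE cards in this block (MRPC here, then QNLI, QQP, SST-2, and STS-B below) stop at the leaderboard score without a loading example. A minimal usage sketch, not part of the original cards, assuming the checkpoints load with the standard `transformers` sequence-classification classes; the sentence pair is a placeholder and the label order is model-specific:

```python
# Hypothetical usage for the distilled MRPC checkpoint described above.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "yoshitomo-matsubara/bert-base-uncased-mrpc_from_bert-large-uncased-mrpc"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

# MRPC is sentence-pair paraphrase detection, so the two sentences are encoded together.
inputs = tokenizer("The cat sat on the mat.", "A cat was sitting on the mat.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # paraphrase probabilities; check model.config.id2label for order
```

The same pattern should apply to the sibling records that follow; STS-B is presumably a single-logit regression head, so its output would be a similarity score rather than class logits.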
yoshitomo-matsubara/bert-base-uncased-qnli_from_bert-large-uncased-qnli
b96d3f6df6d856a3c4593bd1956f659333230884
2021-06-03T05:05:26.000Z
[ "pytorch", "bert", "text-classification", "en", "dataset:qnli", "transformers", "qnli", "glue", "kd", "torchdistill", "license:apache-2.0" ]
text-classification
false
yoshitomo-matsubara
null
yoshitomo-matsubara/bert-base-uncased-qnli_from_bert-large-uncased-qnli
4
null
transformers
19,001
--- language: en tags: - bert - qnli - glue - kd - torchdistill license: apache-2.0 datasets: - qnli metrics: - accuracy --- `bert-base-uncased` fine-tuned on QNLI dataset, using fine-tuned `bert-large-uncased` as a teacher model, [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_kd_and_submission.ipynb) for knowledge distillation. The training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/qnli/kd/bert_base_uncased_from_bert_large_uncased.yaml). I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **78.9**.
yoshitomo-matsubara/bert-base-uncased-qqp_from_bert-large-uncased-qqp
20cfc23662ac0e3a5706cdba2878ba8f5f1fe195
2021-06-03T05:06:46.000Z
[ "pytorch", "bert", "text-classification", "en", "dataset:qqp", "transformers", "qqp", "glue", "kd", "torchdistill", "license:apache-2.0" ]
text-classification
false
yoshitomo-matsubara
null
yoshitomo-matsubara/bert-base-uncased-qqp_from_bert-large-uncased-qqp
4
null
transformers
19,002
--- language: en tags: - bert - qqp - glue - kd - torchdistill license: apache-2.0 datasets: - qqp metrics: - f1 - accuracy --- `bert-base-uncased` fine-tuned on QQP dataset, using fine-tuned `bert-large-uncased` as a teacher model, [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_kd_and_submission.ipynb) for knowledge distillation. The training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/qqp/kd/bert_base_uncased_from_bert_large_uncased.yaml). I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **78.9**.
yoshitomo-matsubara/bert-base-uncased-sst2_from_bert-large-uncased-sst2
df1c05873e39d591df2dd1040e87487e86eceb70
2021-06-03T05:09:20.000Z
[ "pytorch", "bert", "text-classification", "en", "dataset:sst2", "transformers", "sst2", "glue", "kd", "torchdistill", "license:apache-2.0" ]
text-classification
false
yoshitomo-matsubara
null
yoshitomo-matsubara/bert-base-uncased-sst2_from_bert-large-uncased-sst2
4
null
transformers
19,003
--- language: en tags: - bert - sst2 - glue - kd - torchdistill license: apache-2.0 datasets: - sst2 metrics: - accuracy --- `bert-base-uncased` fine-tuned on SST-2 dataset, using fine-tuned `bert-large-uncased` as a teacher model, [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_kd_and_submission.ipynb) for knowledge distillation. The training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/sst2/kd/bert_base_uncased_from_bert_large_uncased.yaml). I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **78.9**.
yoshitomo-matsubara/bert-base-uncased-stsb_from_bert-large-uncased-stsb
62ab3f0ad56f78e946f88815fbce290cd6e4473d
2021-06-03T05:10:42.000Z
[ "pytorch", "bert", "text-classification", "en", "dataset:stsb", "transformers", "stsb", "glue", "kd", "torchdistill", "license:apache-2.0" ]
text-classification
false
yoshitomo-matsubara
null
yoshitomo-matsubara/bert-base-uncased-stsb_from_bert-large-uncased-stsb
4
null
transformers
19,004
--- language: en tags: - bert - stsb - glue - kd - torchdistill license: apache-2.0 datasets: - stsb metrics: - pearson correlation - spearman correlation --- `bert-base-uncased` fine-tuned on STS-B dataset, using fine-tuned `bert-large-uncased` as a teacher model, [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_kd_and_submission.ipynb) for knowledge distillation. The training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/stsb/kd/bert_base_uncased_from_bert_large_uncased.yaml). I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **78.9**.
yseop/text_smoothing
d2554e7535b0cc04cc6bba2fc487012f4e70e92b
2021-10-27T10:50:57.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
yseop
null
yseop/text_smoothing
4
null
transformers
19,005
Entry not found
ysslang/autonlp-test-459011902
ff56206686e6bfa3ee3cb154c129612c001a5578
2021-12-30T17:05:31.000Z
[ "pytorch", "bert", "text-classification", "zh", "dataset:ysslang/autonlp-data-test", "transformers", "autonlp", "co2_eq_emissions" ]
text-classification
false
ysslang
null
ysslang/autonlp-test-459011902
4
null
transformers
19,006
--- tags: autonlp language: zh widget: - text: "I love AutoNLP 🤗" datasets: - ysslang/autonlp-data-test co2_eq_emissions: 10.9230691350863 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 459011902 - CO2 Emissions (in grams): 10.9230691350863 ## Validation Metrics - Loss: 0.7189690470695496 - Accuracy: 0.7453263867606497 - Macro F1: 0.630810193227066 - Micro F1: 0.7453263867606497 - Weighted F1: 0.7399327942874923 - Macro Precision: 0.656237447101913 - Micro Precision: 0.7453263867606497 - Weighted Precision: 0.7410161412822164 - Macro Recall: 0.6340140718425453 - Micro Recall: 0.7453263867606497 - Weighted Recall: 0.7453263867606497 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/ysslang/autonlp-test-459011902 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("ysslang/autonlp-test-459011902", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("ysslang/autonlp-test-459011902", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
zald/distilbert-base-uncased-finetuned-ner
a78c2e080bd6ec05eef5696f82cc073351b9c600
2021-08-27T16:39:55.000Z
[ "pytorch", "tensorboard", "distilbert", "token-classification", "dataset:conll2003", "transformers", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible" ]
token-classification
false
zald
null
zald/distilbert-base-uncased-finetuned-ner
4
null
transformers
19,007
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model_index: - name: distilbert-base-uncased-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 metric: name: Accuracy type: accuracy value: 0.9835893688340985 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0607 - Precision: 0.9253 - Recall: 0.9350 - F1: 0.9301 - Accuracy: 0.9836 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.237 | 1.0 | 878 | 0.0701 | 0.9131 | 0.9228 | 0.9179 | 0.9809 | | 0.0509 | 2.0 | 1756 | 0.0617 | 0.9182 | 0.9333 | 0.9257 | 0.9826 | | 0.0299 | 3.0 | 2634 | 0.0607 | 0.9253 | 0.9350 | 0.9301 | 0.9836 | ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1+cu111 - Datasets 1.11.0 - Tokenizers 0.10.3
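The NER card above reports metrics and hyperparameters but includes no inference snippet. A hedged sketch, assuming the checkpoint works with the standard token-classification pipeline and carries CoNLL-2003 entity labels; the input sentence is a placeholder:

```python
# Hypothetical usage for the fine-tuned CoNLL-2003 NER model described above.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="zald/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge word-piece predictions into entity spans
)
print(ner("Hugging Face is based in New York City."))
```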
zhangxy-2019/cunlp-gpt2-dialog
ff60f0501727bb3c5ddbcb268385694c3199ee7e
2021-05-23T14:07:17.000Z
[ "pytorch", "jax", "gpt2", "text-generation", "transformers" ]
text-generation
false
zhangxy-2019
null
zhangxy-2019/cunlp-gpt2-dialog
4
null
transformers
19,008
Entry not found
zharry29/intent_fb-en_wh_id_rl
d581213820d29d589b58fe66013f956ee195c1ea
2021-05-20T23:33:07.000Z
[ "pytorch", "jax", "roberta", "multiple-choice", "transformers" ]
multiple-choice
false
zharry29
null
zharry29/intent_fb-en_wh_id_rl
4
null
transformers
19,009
Entry not found
zharry29/step_benchmark_gpt
a44fc69555e7bbf0ad7bdb72e114c4563b6fc9c2
2021-05-23T14:09:43.000Z
[ "pytorch", "gpt2", "transformers" ]
null
false
zharry29
null
zharry29/step_benchmark_gpt
4
null
transformers
19,010
Entry not found
zhc/distilbert-base-uncased-finetuned-mrpc-test
3418215240fc657af3356ae0e95c94bcb11c51f9
2021-09-11T04:10:39.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:glue", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
zhc
null
zhc/distilbert-base-uncased-finetuned-mrpc-test
4
null
transformers
19,011
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-mrpc results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: mrpc metrics: - name: Accuracy type: accuracy value: 0.7034313725490197 - name: F1 type: f1 value: 0.8207407407407408 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-mrpc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.5708 - Accuracy: 0.7034 - F1: 0.8207 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 58 | 0.5708 | 0.7034 | 0.8207 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
zhuqing/roberta-base-uncased-netmums-classification-intersection
5521cfb5028db4d0ba9374c7404e24de3f998588
2021-08-23T14:36:13.000Z
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
false
zhuqing
null
zhuqing/roberta-base-uncased-netmums-classification-intersection
4
null
transformers
19,012
Entry not found
zitterbewegung/DialoGPT-medium-ja
88b65cfedb4458ac3d8cb58ce7122ab320cbec39
2021-05-23T14:11:28.000Z
[ "pytorch", "jax", "gpt2", "text-generation", "transformers" ]
text-generation
false
zitterbewegung
null
zitterbewegung/DialoGPT-medium-ja
4
null
transformers
19,013
Entry not found
zwang199/autonlp-traffic-nlp-451311592
1d419c5fcdb99195071fd48c2aa0797e6c31c2c5
2021-12-27T18:31:57.000Z
[ "pytorch", "bert", "text-classification", "en", "dataset:zwang199/autonlp-data-traffic-nlp", "transformers", "autonlp", "co2_eq_emissions" ]
text-classification
false
zwang199
null
zwang199/autonlp-traffic-nlp-451311592
4
null
transformers
19,014
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - zwang199/autonlp-data-traffic-nlp co2_eq_emissions: 1.8697144296865242 --- # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 451311592 - CO2 Emissions (in grams): 1.8697144296865242 ## Validation Metrics - Loss: 0.4544260799884796 - Accuracy: 0.8042452830188679 - Precision: 0.8331288343558282 - Recall: 0.8573232323232324 - AUC: 0.8759811658249159 - F1: 0.8450528935905414 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/zwang199/autonlp-traffic-nlp-451311592 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("zwang199/autonlp-traffic-nlp-451311592", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("zwang199/autonlp-traffic-nlp-451311592", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
zwang199/autonlp-traffic_nlp_binary-537215209
d86e0d5a1b8398c0aca0d39f6ec22322d712bb73
2022-01-28T19:34:25.000Z
[ "pytorch", "roberta", "text-classification", "en", "dataset:zwang199/autonlp-data-traffic_nlp_binary", "transformers", "autonlp", "co2_eq_emissions" ]
text-classification
false
zwang199
null
zwang199/autonlp-traffic_nlp_binary-537215209
4
null
transformers
19,015
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - zwang199/autonlp-data-traffic_nlp_binary co2_eq_emissions: 1.171798205242445 --- # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 537215209 - CO2 Emissions (in grams): 1.171798205242445 ## Validation Metrics - Loss: 0.3879534602165222 - Accuracy: 0.8597449908925319 - Precision: 0.8318042813455657 - Recall: 0.9251700680272109 - AUC: 0.9230158730158731 - F1: 0.8760064412238325 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/zwang199/autonlp-traffic_nlp_binary-537215209 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("zwang199/autonlp-traffic_nlp_binary-537215209", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("zwang199/autonlp-traffic_nlp_binary-537215209", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
zyl1024/bert-base-cased-finetuned-qqp
6799e7ed1b9d231f8fc685ce203f4971e2b390dd
2022-03-12T17:08:41.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
zyl1024
null
zyl1024/bert-base-cased-finetuned-qqp
4
null
transformers
19,016
Entry not found
wietsedv/xlm-roberta-base-ft-udpos28-be
f1ebda2082a828dcf185c1e585122fa9b3b5f0c0
2022-02-25T09:58:04.000Z
[ "pytorch", "xlm-roberta", "token-classification", "be", "dataset:universal_dependencies", "transformers", "part-of-speech", "license:apache-2.0", "model-index", "autotrain_compatible" ]
token-classification
false
wietsedv
null
wietsedv/xlm-roberta-base-ft-udpos28-be
4
null
transformers
19,017
--- language: - be license: apache-2.0 library_name: transformers tags: - part-of-speech - token-classification datasets: - universal_dependencies metrics: - accuracy model-index: - name: xlm-roberta-base-ft-udpos28-be results: - task: type: token-classification name: Part-of-Speech Tagging dataset: type: universal_dependencies name: Universal Dependencies v2.8 metrics: - type: accuracy name: English Test accuracy value: 77.5 - type: accuracy name: Dutch Test accuracy value: 80.7 - type: accuracy name: German Test accuracy value: 79.4 - type: accuracy name: Italian Test accuracy value: 80.1 - type: accuracy name: French Test accuracy value: 81.2 - type: accuracy name: Spanish Test accuracy value: 83.6 - type: accuracy name: Russian Test accuracy value: 95.3 - type: accuracy name: Swedish Test accuracy value: 85.9 - type: accuracy name: Norwegian Test accuracy value: 80.0 - type: accuracy name: Danish Test accuracy value: 84.3 - type: accuracy name: Low Saxon Test accuracy value: 25.5 - type: accuracy name: Akkadian Test accuracy value: 8.2 - type: accuracy name: Armenian Test accuracy value: 87.4 - type: accuracy name: Welsh Test accuracy value: 64.2 - type: accuracy name: Old East Slavic Test accuracy value: 79.2 - type: accuracy name: Albanian Test accuracy value: 76.2 - type: accuracy name: Slovenian Test accuracy value: 80.1 - type: accuracy name: Guajajara Test accuracy value: 14.3 - type: accuracy name: Kurmanji Test accuracy value: 75.0 - type: accuracy name: Turkish Test accuracy value: 73.0 - type: accuracy name: Finnish Test accuracy value: 83.6 - type: accuracy name: Indonesian Test accuracy value: 75.2 - type: accuracy name: Ukrainian Test accuracy value: 94.4 - type: accuracy name: Polish Test accuracy value: 88.9 - type: accuracy name: Portuguese Test accuracy value: 83.0 - type: accuracy name: Kazakh Test accuracy value: 81.1 - type: accuracy name: Latin Test accuracy value: 75.6 - type: accuracy name: Old French Test accuracy value: 27.0 - type: accuracy name: Buryat Test accuracy value: 61.1 - type: accuracy name: Kaapor Test accuracy value: 2.9 - type: accuracy name: Korean Test accuracy value: 61.8 - type: accuracy name: Estonian Test accuracy value: 83.0 - type: accuracy name: Croatian Test accuracy value: 90.4 - type: accuracy name: Gothic Test accuracy value: 1.7 - type: accuracy name: Swiss German Test accuracy value: 31.4 - type: accuracy name: Assyrian Test accuracy value: 14.6 - type: accuracy name: North Sami Test accuracy value: 19.4 - type: accuracy name: Naija Test accuracy value: 13.0 - type: accuracy name: Latvian Test accuracy value: 89.5 - type: accuracy name: Chinese Test accuracy value: 52.7 - type: accuracy name: Tagalog Test accuracy value: 70.5 - type: accuracy name: Bambara Test accuracy value: 15.9 - type: accuracy name: Lithuanian Test accuracy value: 89.9 - type: accuracy name: Galician Test accuracy value: 84.0 - type: accuracy name: Vietnamese Test accuracy value: 63.9 - type: accuracy name: Greek Test accuracy value: 79.1 - type: accuracy name: Catalan Test accuracy value: 80.5 - type: accuracy name: Czech Test accuracy value: 88.3 - type: accuracy name: Erzya Test accuracy value: 52.7 - type: accuracy name: Bhojpuri Test accuracy value: 51.6 - type: accuracy name: Thai Test accuracy value: 63.4 - type: accuracy name: Marathi Test accuracy value: 85.3 - type: accuracy name: Basque Test accuracy value: 74.9 - type: accuracy name: Slovak Test accuracy value: 88.6 - type: accuracy name: Kiche Test accuracy value: 20.7 - type: accuracy name: Yoruba Test accuracy value: 16.2 - type: accuracy name: Warlpiri Test accuracy value: 20.6 - type: accuracy name: Tamil Test accuracy value: 85.4 - type: accuracy name: Maltese Test accuracy value: 12.1 - type: accuracy name: Ancient Greek Test accuracy value: 66.9 - type: accuracy name: Icelandic Test accuracy value: 81.4 - type: accuracy name: Mbya Guarani Test accuracy value: 22.8 - type: accuracy name: Urdu Test accuracy value: 67.2 - type: accuracy name: Romanian Test accuracy value: 81.7 - type: accuracy name: Persian Test accuracy value: 75.2 - type: accuracy name: Apurina Test accuracy value: 22.7 - type: accuracy name: Japanese Test accuracy value: 38.7 - type: accuracy name: Hungarian Test accuracy value: 76.2 - type: accuracy name: Hindi Test accuracy value: 73.1 - type: accuracy name: Classical Chinese Test accuracy value: 28.0 - type: accuracy name: Komi Permyak Test accuracy value: 49.0 - type: accuracy name: Faroese Test accuracy value: 72.2 - type: accuracy name: Sanskrit Test accuracy value: 9.1 - type: accuracy name: Livvi Test accuracy value: 54.0 - type: accuracy name: Arabic Test accuracy value: 82.8 - type: accuracy name: Wolof Test accuracy value: 13.6 - type: accuracy name: Bulgarian Test accuracy value: 91.6 - type: accuracy name: Akuntsu Test accuracy value: 11.1 - type: accuracy name: Makurap Test accuracy value: 1.4 - type: accuracy name: Kangri Test accuracy value: 51.4 - type: accuracy name: Breton Test accuracy value: 52.2 - type: accuracy name: Telugu Test accuracy value: 83.6 - type: accuracy name: Cantonese Test accuracy value: 51.7 - type: accuracy name: Old Church Slavonic Test accuracy value: 51.7 - type: accuracy name: Karelian Test accuracy value: 67.1 - type: accuracy name: Upper Sorbian Test accuracy value: 67.0 - type: accuracy name: South Levantine Arabic Test accuracy value: 67.4 - type: accuracy name: Komi Zyrian Test accuracy value: 45.8 - type: accuracy name: Irish Test accuracy value: 59.2 - type: accuracy name: Nayini Test accuracy value: 43.6 - type: accuracy name: Munduruku Test accuracy value: 7.6 - type: accuracy name: Manx Test accuracy value: 16.9 - type: accuracy name: Skolt Sami Test accuracy value: 25.6 - type: accuracy name: Afrikaans Test accuracy value: 76.7 - type: accuracy name: Old Turkish Test accuracy value: 37.1 - type: accuracy name: Tupinamba Test accuracy value: 18.3 - type: accuracy name: Belarusian Test accuracy value: 98.3 - type: accuracy name: Serbian Test accuracy value: 91.0 - type: accuracy name: Moksha Test accuracy value: 50.1 - type: accuracy name: Western Armenian Test accuracy value: 78.4 - type: accuracy name: Scottish Gaelic Test accuracy value: 48.6 - type: accuracy name: Khunsari Test accuracy value: 44.6 - type: accuracy name: Hebrew Test accuracy value: 89.6 - type: accuracy name: Uyghur Test accuracy value: 75.8 - type: accuracy name: Chukchi Test accuracy value: 39.4 --- # XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Belarusian This model is part of our paper called: - Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details. ## Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-be") model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-be") ```
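The usage block in the card above only instantiates the tokenizer and model. A short, hypothetical end-to-end continuation, assuming `id2label` holds the UPOS tags the card implies; the Belarusian sentence is a placeholder:

```python
# Hypothetical end-to-end tagging with the Belarusian POS model described above.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

name = "wietsedv/xlm-roberta-base-ft-udpos28-be"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForTokenClassification.from_pretrained(name)

inputs = tokenizer("Гэта просты сказ.", return_tensors="pt")  # "This is a simple sentence."
with torch.no_grad():
    logits = model(**inputs).logits
for token, pred in zip(inputs.tokens(), logits.argmax(-1)[0].tolist()):
    print(token, model.config.id2label[pred])  # subword tokens with predicted UPOS tags
```

The same continuation fits the -bg and -cu records below.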
wietsedv/xlm-roberta-base-ft-udpos28-bg
8ec1399237ffadeb6a98e191612025a6bfd71fe5
2022-02-25T09:58:06.000Z
[ "pytorch", "xlm-roberta", "token-classification", "bg", "dataset:universal_dependencies", "transformers", "part-of-speech", "license:apache-2.0", "model-index", "autotrain_compatible" ]
token-classification
false
wietsedv
null
wietsedv/xlm-roberta-base-ft-udpos28-bg
4
null
transformers
19,018
--- language: - bg license: apache-2.0 library_name: transformers tags: - part-of-speech - token-classification datasets: - universal_dependencies metrics: - accuracy model-index: - name: xlm-roberta-base-ft-udpos28-bg results: - task: type: token-classification name: Part-of-Speech Tagging dataset: type: universal_dependencies name: Universal Dependencies v2.8 metrics: - type: accuracy name: English Test accuracy value: 85.0 - type: accuracy name: Dutch Test accuracy value: 81.6 - type: accuracy name: German Test accuracy value: 82.6 - type: accuracy name: Italian Test accuracy value: 82.5 - type: accuracy name: French Test accuracy value: 83.1 - type: accuracy name: Spanish Test accuracy value: 85.5 - type: accuracy name: Russian Test accuracy value: 92.7 - type: accuracy name: Swedish Test accuracy value: 89.5 - type: accuracy name: Norwegian Test accuracy value: 84.5 - type: accuracy name: Danish Test accuracy value: 88.4 - type: accuracy name: Low Saxon Test accuracy value: 56.5 - type: accuracy name: Akkadian Test accuracy value: 40.5 - type: accuracy name: Armenian Test accuracy value: 86.6 - type: accuracy name: Welsh Test accuracy value: 66.7 - type: accuracy name: Old East Slavic Test accuracy value: 76.7 - type: accuracy name: Albanian Test accuracy value: 78.4 - type: accuracy name: Slovenian Test accuracy value: 84.0 - type: accuracy name: Guajajara Test accuracy value: 36.2 - type: accuracy name: Kurmanji Test accuracy value: 78.5 - type: accuracy name: Turkish Test accuracy value: 77.6 - type: accuracy name: Finnish Test accuracy value: 84.5 - type: accuracy name: Indonesian Test accuracy value: 83.1 - type: accuracy name: Ukrainian Test accuracy value: 93.4 - type: accuracy name: Polish Test accuracy value: 91.8 - type: accuracy name: Portuguese Test accuracy value: 85.8 - type: accuracy name: Kazakh Test accuracy value: 80.9 - type: accuracy name: Latin Test accuracy value: 79.6 - type: accuracy name: Old French Test accuracy value: 60.9 - type: accuracy name: Buryat Test accuracy value: 61.9 - type: accuracy name: Kaapor Test accuracy value: 24.6 - type: accuracy name: Korean Test accuracy value: 61.0 - type: accuracy name: Estonian Test accuracy value: 86.8 - type: accuracy name: Croatian Test accuracy value: 92.8 - type: accuracy name: Gothic Test accuracy value: 28.7 - type: accuracy name: Swiss German Test accuracy value: 49.2 - type: accuracy name: Assyrian Test accuracy value: 16.1 - type: accuracy name: North Sami Test accuracy value: 46.2 - type: accuracy name: Naija Test accuracy value: 43.1 - type: accuracy name: Latvian Test accuracy value: 88.6 - type: accuracy name: Chinese Test accuracy value: 37.9 - type: accuracy name: Tagalog Test accuracy value: 81.6 - type: accuracy name: Bambara Test accuracy value: 30.0 - type: accuracy name: Lithuanian Test accuracy value: 85.7 - type: accuracy name: Galician Test accuracy value: 84.1 - type: accuracy name: Vietnamese Test accuracy value: 65.9 - type: accuracy name: Greek Test accuracy value: 79.9 - type: accuracy name: Catalan Test accuracy value: 83.3 - type: accuracy name: Czech Test accuracy value: 92.3 - type: accuracy name: Erzya Test accuracy value: 53.5 - type: accuracy name: Bhojpuri Test accuracy value: 56.5 - type: accuracy name: Thai Test accuracy value: 57.4 - type: accuracy name: Marathi Test accuracy value: 85.9 - type: accuracy name: Basque Test accuracy value: 75.8 - type: accuracy name: Slovak Test accuracy value: 91.3 - type: accuracy name: Kiche Test accuracy value: 42.5 - type: accuracy name: Yoruba Test accuracy value: 31.4 - type: accuracy name: Warlpiri Test accuracy value: 40.1 - type: accuracy name: Tamil Test accuracy value: 83.3 - type: accuracy name: Maltese Test accuracy value: 34.0 - type: accuracy name: Ancient Greek Test accuracy value: 61.9 - type: accuracy name: Icelandic Test accuracy value: 82.8 - type: accuracy name: Mbya Guarani Test accuracy value: 34.4 - type: accuracy name: Urdu Test accuracy value: 72.5 - type: accuracy name: Romanian Test accuracy value: 84.0 - type: accuracy name: Persian Test accuracy value: 77.7 - type: accuracy name: Apurina Test accuracy value: 48.2 - type: accuracy name: Japanese Test accuracy value: 27.8 - type: accuracy name: Hungarian Test accuracy value: 76.9 - type: accuracy name: Hindi Test accuracy value: 75.0 - type: accuracy name: Classical Chinese Test accuracy value: 31.2 - type: accuracy name: Komi Permyak Test accuracy value: 52.2 - type: accuracy name: Faroese Test accuracy value: 78.8 - type: accuracy name: Sanskrit Test accuracy value: 39.9 - type: accuracy name: Livvi Test accuracy value: 67.5 - type: accuracy name: Arabic Test accuracy value: 84.4 - type: accuracy name: Wolof Test accuracy value: 39.8 - type: accuracy name: Bulgarian Test accuracy value: 99.3 - type: accuracy name: Akuntsu Test accuracy value: 39.6 - type: accuracy name: Makurap Test accuracy value: 28.1 - type: accuracy name: Kangri Test accuracy value: 50.5 - type: accuracy name: Breton Test accuracy value: 60.8 - type: accuracy name: Telugu Test accuracy value: 82.7 - type: accuracy name: Cantonese Test accuracy value: 42.1 - type: accuracy name: Old Church Slavonic Test accuracy value: 47.9 - type: accuracy name: Karelian Test accuracy value: 74.0 - type: accuracy name: Upper Sorbian Test accuracy value: 79.1 - type: accuracy name: South Levantine Arabic Test accuracy value: 69.7 - type: accuracy name: Komi Zyrian Test accuracy value: 45.7 - type: accuracy name: Irish Test accuracy value: 65.6 - type: accuracy name: Nayini Test accuracy value: 44.9 - type: accuracy name: Munduruku Test accuracy value: 28.5 - type: accuracy name: Manx Test accuracy value: 40.1 - type: accuracy name: Skolt Sami Test accuracy value: 41.3 - type: accuracy name: Afrikaans Test accuracy value: 81.5 - type: accuracy name: Old Turkish Test accuracy value: 37.1 - type: accuracy name: Tupinamba Test accuracy value: 44.6 - type: accuracy name: Belarusian Test accuracy value: 92.3 - type: accuracy name: Serbian Test accuracy value: 92.1 - type: accuracy name: Moksha Test accuracy value: 49.2 - type: accuracy name: Western Armenian Test accuracy value: 83.2 - type: accuracy name: Scottish Gaelic Test accuracy value: 58.2 - type: accuracy name: Khunsari Test accuracy value: 47.3 - type: accuracy name: Hebrew Test accuracy value: 88.5 - type: accuracy name: Uyghur Test accuracy value: 75.8 - type: accuracy name: Chukchi Test accuracy value: 39.4 --- # XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Bulgarian This model is part of our paper called: - Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details. ## Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-bg") model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-bg") ```
wietsedv/xlm-roberta-base-ft-udpos28-cu
6d280510ae537cdf7c2bed696a3506acc20319aa
2022-02-25T09:58:11.000Z
[ "pytorch", "xlm-roberta", "token-classification", "cu", "dataset:universal_dependencies", "transformers", "part-of-speech", "license:apache-2.0", "model-index", "autotrain_compatible" ]
token-classification
false
wietsedv
null
wietsedv/xlm-roberta-base-ft-udpos28-cu
4
null
transformers
19,019
--- language: - cu license: apache-2.0 library_name: transformers tags: - part-of-speech - token-classification datasets: - universal_dependencies metrics: - accuracy model-index: - name: xlm-roberta-base-ft-udpos28-cu results: - task: type: token-classification name: Part-of-Speech Tagging dataset: type: universal_dependencies name: Universal Dependencies v2.8 metrics: - type: accuracy name: English Test accuracy value: 60.1 - type: accuracy name: Dutch Test accuracy value: 58.7 - type: accuracy name: German Test accuracy value: 59.9 - type: accuracy name: Italian Test accuracy value: 55.8 - type: accuracy name: French Test accuracy value: 59.5 - type: accuracy name: Spanish Test accuracy value: 58.4 - type: accuracy name: Russian Test accuracy value: 63.2 - type: accuracy name: Swedish Test accuracy value: 64.5 - type: accuracy name: Norwegian Test accuracy value: 58.0 - type: accuracy name: Danish Test accuracy value: 59.9 - type: accuracy name: Low Saxon Test accuracy value: 38.3 - type: accuracy name: Akkadian Test accuracy value: 30.4 - type: accuracy name: Armenian Test accuracy value: 57.3 - type: accuracy name: Welsh Test accuracy value: 50.8 - type: accuracy name: Old East Slavic Test accuracy value: 72.5 - type: accuracy name: Albanian Test accuracy value: 62.5 - type: accuracy name: Slovenian Test accuracy value: 60.4 - type: accuracy name: Guajajara Test accuracy value: 20.5 - type: accuracy name: Kurmanji Test accuracy value: 57.1 - type: accuracy name: Turkish Test accuracy value: 53.3 - type: accuracy name: Finnish Test accuracy value: 61.2 - type: accuracy name: Indonesian Test accuracy value: 56.0 - type: accuracy name: Ukrainian Test accuracy value: 62.2 - type: accuracy name: Polish Test accuracy value: 63.2 - type: accuracy name: Portuguese Test accuracy value: 57.8 - type: accuracy name: Kazakh Test accuracy value: 55.1 - type: accuracy name: Latin Test accuracy value: 63.9 - type: accuracy name: Old French Test accuracy value: 55.9 - type: accuracy name: Buryat Test accuracy value: 36.1 - type: accuracy name: Kaapor Test accuracy value: 26.2 - type: accuracy name: Korean Test accuracy value: 46.0 - type: accuracy name: Estonian Test accuracy value: 62.0 - type: accuracy name: Croatian Test accuracy value: 64.4 - type: accuracy name: Gothic Test accuracy value: 32.4 - type: accuracy name: Swiss German Test accuracy value: 41.3 - type: accuracy name: Assyrian Test accuracy value: 15.0 - type: accuracy name: North Sami Test accuracy value: 22.6 - type: accuracy name: Naija Test accuracy value: 38.0 - type: accuracy name: Latvian Test accuracy value: 61.9 - type: accuracy name: Chinese Test accuracy value: 42.8 - type: accuracy name: Tagalog Test accuracy value: 57.1 - type: accuracy name: Bambara Test accuracy value: 13.4 - type: accuracy name: Lithuanian Test accuracy value: 62.7 - type: accuracy name: Galician Test accuracy value: 59.7 - type: accuracy name: Vietnamese Test accuracy value: 50.0 - type: accuracy name: Greek Test accuracy value: 57.0 - type: accuracy name: Catalan Test accuracy value: 57.0 - type: accuracy name: Czech Test accuracy value: 63.9 - type: accuracy name: Erzya Test accuracy value: 20.4 - type: accuracy name: Bhojpuri Test accuracy value: 36.7 - type: accuracy name: Thai Test accuracy value: 59.0 - type: accuracy name: Marathi Test accuracy value: 46.0 - type: accuracy name: Basque Test accuracy value: 55.1 - type: accuracy name: Slovak Test accuracy value: 64.2 - type: accuracy name: Kiche Test accuracy value: 15.0 - type: accuracy name: Yoruba Test accuracy value: 11.1 - type: accuracy name: Warlpiri Test accuracy value: 25.9 - type: accuracy name: Tamil Test accuracy value: 60.3 - type: accuracy name: Maltese Test accuracy value: 21.7 - type: accuracy name: Ancient Greek Test accuracy value: 59.1 - type: accuracy name: Icelandic Test accuracy value: 62.5 - type: accuracy name: Mbya Guarani Test accuracy value: 13.1 - type: accuracy name: Urdu Test accuracy value: 49.0 - type: accuracy name: Romanian Test accuracy value: 63.4 - type: accuracy name: Persian Test accuracy value: 60.8 - type: accuracy name: Apurina Test accuracy value: 14.1 - type: accuracy name: Japanese Test accuracy value: 34.1 - type: accuracy name: Hungarian Test accuracy value: 53.0 - type: accuracy name: Hindi Test accuracy value: 51.5 - type: accuracy name: Classical Chinese Test accuracy value: 40.4 - type: accuracy name: Komi Permyak Test accuracy value: 25.7 - type: accuracy name: Faroese Test accuracy value: 56.9 - type: accuracy name: Sanskrit Test accuracy value: 37.6 - type: accuracy name: Livvi Test accuracy value: 41.5 - type: accuracy name: Arabic Test accuracy value: 65.4 - type: accuracy name: Wolof Test accuracy value: 22.1 - type: accuracy name: Bulgarian Test accuracy value: 65.4 - type: accuracy name: Akuntsu Test accuracy value: 16.2 - type: accuracy name: Makurap Test accuracy value: 6.8 - type: accuracy name: Kangri Test accuracy value: 33.9 - type: accuracy name: Breton Test accuracy value: 43.5 - type: accuracy name: Telugu Test accuracy value: 52.1 - type: accuracy name: Cantonese Test accuracy value: 37.9 - type: accuracy name: Old Church Slavonic Test accuracy value: 94.9 - type: accuracy name: Karelian Test accuracy value: 49.1 - type: accuracy name: Upper Sorbian Test accuracy value: 51.8 - type: accuracy name: South Levantine Arabic Test accuracy value: 51.5 - type: accuracy name: Komi Zyrian Test accuracy value: 21.8 - type: accuracy name: Irish Test accuracy value: 41.6 - type: accuracy name: Nayini Test accuracy value: 39.7 - type: accuracy name: Munduruku Test accuracy value: 11.8 - type: accuracy name: Manx Test accuracy value: 19.8 - type: accuracy name: Skolt Sami Test accuracy value: 20.7 - type: accuracy name: Afrikaans Test accuracy value: 62.4 - type: accuracy name: Old Turkish Test accuracy value: 4.5 - type: accuracy name: Tupinamba Test accuracy value: 18.2 - type: accuracy name: Belarusian Test accuracy value: 66.3 - type: accuracy name: Serbian Test accuracy value: 65.4 - type: accuracy name: Moksha Test accuracy value: 23.3 - type: accuracy name: Western Armenian Test accuracy value: 58.5 - type: accuracy name: Scottish Gaelic Test accuracy value: 41.3 - type: accuracy name: Khunsari Test accuracy value: 35.1 - type: accuracy name: Hebrew Test accuracy value: 58.3 - type: accuracy name: Uyghur Test accuracy value: 52.8 - type: accuracy name: Chukchi Test accuracy value: 12.5 --- # XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Old Church Slavonic This model is part of our paper called: - Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details. ## Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-cu") model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-cu") ```
DoyyingFace/bert-asian-hate-tweets-self-unclean
362e4f4a31d3e877f383139a8d0100acf8cc4f5d
2022-02-24T10:25:37.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
DoyyingFace
null
DoyyingFace/bert-asian-hate-tweets-self-unclean
4
null
transformers
19,020
Entry not found
DoyyingFace/bert-asian-hate-tweets-asonam-unclean
4f16941e2706fb883e4010b5495fe2013de43135
2022-02-24T13:04:35.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
DoyyingFace
null
DoyyingFace/bert-asian-hate-tweets-asonam-unclean
4
null
transformers
19,021
Entry not found
DoyyingFace/bert-asian-hate-tweets-asonam-clean
355503a28854563e5b6f42ef991becce2d8839ea
2022-02-24T13:06:48.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
DoyyingFace
null
DoyyingFace/bert-asian-hate-tweets-asonam-clean
4
null
transformers
19,022
Entry not found
DoyyingFace/bert-asian-hate-tweets-concat-unclean
7bdc74994a376ba4ed9b7312dde5e683554f2ded
2022-02-24T13:46:22.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
DoyyingFace
null
DoyyingFace/bert-asian-hate-tweets-concat-unclean
4
null
transformers
19,023
Entry not found
DoyyingFace/bert-asian-hate-tweets-concat-clean
b6568690192aff74030344c6273f510ab07c8b7d
2022-02-24T13:57:51.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
DoyyingFace
null
DoyyingFace/bert-asian-hate-tweets-concat-clean
4
null
transformers
19,024
Entry not found
DoyyingFace/bert-asian-hate-tweets-concat-unclean-with-clean-valid
84ee076969b3e72dd14f649603f1e2eaf7853582
2022-02-24T15:14:37.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
DoyyingFace
null
DoyyingFace/bert-asian-hate-tweets-concat-unclean-with-clean-valid
4
null
transformers
19,025
Entry not found
DoyyingFace/bert-asian-hate-tweets-self-clean-with-unclean-valid
52a34f154629b509c2d6e040663ff594fafdf593
2022-02-24T15:42:38.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
DoyyingFace
null
DoyyingFace/bert-asian-hate-tweets-self-clean-with-unclean-valid
4
null
transformers
19,026
Entry not found
DoyyingFace/bert-asian-hate-tweets-asian-clean-with-unclean-valid
96218d3451941b55888d22f7c6b32bd703ef4e04
2022-02-24T16:06:36.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
DoyyingFace
null
DoyyingFace/bert-asian-hate-tweets-asian-clean-with-unclean-valid
4
null
transformers
19,027
Entry not found
DoyyingFace/bert-asian-hate-tweets-asian-unclean-freeze-8
b27c52182a494bc921dc65ecce6e6cf2c2d808bc
2022-02-24T16:44:13.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
DoyyingFace
null
DoyyingFace/bert-asian-hate-tweets-asian-unclean-freeze-8
4
null
transformers
19,028
Entry not found
anas-awadalla/bert-base-uncased-few-shot-k-32-finetuned-squad-seed-0
b360fd97b8128187c6d8f33c50a3459f981db370
2022-02-24T21:54:26.000Z
[ "pytorch", "bert", "question-answering", "dataset:squad", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
question-answering
false
anas-awadalla
null
anas-awadalla/bert-base-uncased-few-shot-k-32-finetuned-squad-seed-0
4
null
transformers
19,029
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: bert-base-uncased-few-shot-k-32-finetuned-squad-seed-0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-few-shot-k-32-finetuned-squad-seed-0 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 200 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
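The auto-generated SQuAD cards in this block list only training hyperparameters. A minimal sketch, assuming the checkpoint works with the standard question-answering pipeline; the question and context strings are placeholders:

```python
# Hypothetical usage for the few-shot SQuAD model described above.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="anas-awadalla/bert-base-uncased-few-shot-k-32-finetuned-squad-seed-0",
)
result = qa(
    question="What was the model fine-tuned on?",
    context="This model is a fine-tuned version of bert-base-uncased on the squad dataset.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```

The same call should work for the SpanBERT few-shot records further down.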
anas-awadalla/bert-base-uncased-few-shot-k-128-finetuned-squad-seed-4
92658827d041acc3e40ee56e80eebc89a807f8f7
2022-02-25T01:28:25.000Z
[ "pytorch", "bert", "question-answering", "dataset:squad", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
question-answering
false
anas-awadalla
null
anas-awadalla/bert-base-uncased-few-shot-k-128-finetuned-squad-seed-4
4
null
transformers
19,030
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: bert-base-uncased-few-shot-k-128-finetuned-squad-seed-4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-few-shot-k-128-finetuned-squad-seed-4 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 200 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
DoyyingFace/bert-asian-hate-tweets-self-unclean-freeze-4
c1b96710a7ea66c388f901df12378614f6841bff
2022-02-25T03:09:00.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
DoyyingFace
null
DoyyingFace/bert-asian-hate-tweets-self-unclean-freeze-4
4
null
transformers
19,031
Entry not found
DoyyingFace/bert-asian-hate-tweets-self-unclean-freeze-12
edad78f72b8800057a691cc3d8456e1fe3771f0e
2022-02-25T03:21:57.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
DoyyingFace
null
DoyyingFace/bert-asian-hate-tweets-self-unclean-freeze-12
4
null
transformers
19,032
Entry not found
anantoj/wav2vec2-xls-r-300m-adult-child-cls
044719fc86338eda09c2eb107a629a7c46e5fdfb
2022-02-25T07:47:57.000Z
[ "pytorch", "tensorboard", "wav2vec2", "audio-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
audio-classification
false
anantoj
null
anantoj/wav2vec2-xls-r-300m-adult-child-cls
4
null
transformers
19,033
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: wav2vec2-xls-r-300m-adult-child-cls results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-300m-adult-child-cls This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1770 - Accuracy: 0.9404 - F1: 0.9440 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.25 | 1.0 | 383 | 0.2516 | 0.9077 | 0.9106 | | 0.2052 | 2.0 | 766 | 0.2138 | 0.9321 | 0.9353 | | 0.1901 | 3.0 | 1149 | 0.1770 | 0.9404 | 0.9440 | | 0.2255 | 4.0 | 1532 | 0.1794 | 0.9404 | 0.9440 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
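The audio classifier card above gives training details only. A hedged usage sketch, assuming the standard audio-classification pipeline and a 16 kHz mono recording; `speech_sample.wav` is a placeholder path:

```python
# Hypothetical usage for the adult/child speech classifier described above.
from transformers import pipeline

clf = pipeline(
    "audio-classification",
    model="anantoj/wav2vec2-xls-r-300m-adult-child-cls",
)
print(clf("speech_sample.wav"))  # ranked labels with scores for the recording
```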
DoyyingFace/bert-asian-hate-tweets-asian-unclean-warmup-75
f8743b1e0e5593966d84fbe1544a1cbd15e07a8a
2022-02-25T04:40:13.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
DoyyingFace
null
DoyyingFace/bert-asian-hate-tweets-asian-unclean-warmup-75
4
null
transformers
19,034
Entry not found
BigSalmon/GPTNeo350MInformalToFormalLincoln4
09b11fd2f707d4901fa06788c80d0e652c5e7355
2022-02-25T05:04:06.000Z
[ "pytorch", "gpt_neo", "text-generation", "transformers" ]
text-generation
false
BigSalmon
null
BigSalmon/GPTNeo350MInformalToFormalLincoln4
4
null
transformers
19,035
Trained on this model: https://huggingface.co/xhyi/PT_GPTNEO350_ATG/tree/main ``` from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("BigSalmon/GPTNeo350MInformalToFormalLincoln4") model = AutoModelForCausalLM.from_pretrained("BigSalmon/GPTNeo350MInformalToFormalLincoln4") ``` ``` How To Make Prompt: informal english: i am very ready to do that just that. Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end. Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task. *** informal english: space is huge and needs to be explored. Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless. Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration. *** informal english: corn fields are all across illinois, visible once you leave chicago. Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago. informal english: ``` ``` - declining viewership facing the nba. - does not have to be this way. - in fact, many solutions exist. - the four point line would surely draw in eyes. Text: failing to draw in the masses, the NBA has fallen into disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap solutions could revive the league. the addition of the much-hyped four-point line would surely juice viewership. *** - ``` ``` infill: chrome extensions [MASK] accomplish everyday tasks. Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks. infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. infill: ``` ``` Essay Intro (California High-Speed Rail): built with an eye on the future, california's high-speed rail service resolves to change the face of travel. Essay Intro (YIMBY's Need To Win): home to the most expensive housing market in the united states, san francisco is the city in which the yimby and anti-yimby hordes wage an eternal battle. Essay Intro ( ```
Davlan/xlm-roberta-base-finetuned-lingala
6350a63ff26b3ef1aa45298a4dba208622f206b3
2022-02-25T15:36:10.000Z
[ "pytorch", "xlm-roberta", "fill-mask", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
Davlan
null
Davlan/xlm-roberta-base-finetuned-lingala
4
1
transformers
19,036
--- license: apache-2.0 ---
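The card above carries only a license header. A minimal fill-mask sketch, assuming the checkpoint keeps XLM-R's `<mask>` token; the Lingala prompt is an illustrative placeholder:

```python
# Hypothetical usage for the Lingala-adapted masked language model described above.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="Davlan/xlm-roberta-base-finetuned-lingala")
print(unmasker("Mbote, nkombo na ngai ezali <mask>."))  # Lingala: "Hello, my name is <mask>."
```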
anas-awadalla/spanbert-base-cased-few-shot-k-16-finetuned-squad-seed-0
c2b6e21b8e82c4502597508b592bdfdd04dbec0a
2022-02-25T19:16:24.000Z
[ "pytorch", "bert", "question-answering", "dataset:squad", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
question-answering
false
anas-awadalla
null
anas-awadalla/spanbert-base-cased-few-shot-k-16-finetuned-squad-seed-0
4
null
transformers
19,037
--- tags: - generated_from_trainer datasets: - squad model-index: - name: spanbert-base-cased-few-shot-k-16-finetuned-squad-seed-0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # spanbert-base-cased-few-shot-k-16-finetuned-squad-seed-0 This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 200 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
anas-awadalla/spanbert-base-cased-few-shot-k-16-finetuned-squad-seed-2
7c7467da0544daad7916e4e58ae1ea39ef190059
2022-02-25T19:29:02.000Z
[ "pytorch", "bert", "question-answering", "dataset:squad", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
question-answering
false
anas-awadalla
null
anas-awadalla/spanbert-base-cased-few-shot-k-16-finetuned-squad-seed-2
4
null
transformers
19,038
--- tags: - generated_from_trainer datasets: - squad model-index: - name: spanbert-base-cased-few-shot-k-16-finetuned-squad-seed-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # spanbert-base-cased-few-shot-k-16-finetuned-squad-seed-2 This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 200 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
anas-awadalla/spanbert-base-cased-few-shot-k-32-finetuned-squad-seed-6
6c7a6de9a74e0dcf19062804017f0708dd2d4c4c
2022-02-25T21:27:44.000Z
[ "pytorch", "bert", "question-answering", "dataset:squad", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
question-answering
false
anas-awadalla
null
anas-awadalla/spanbert-base-cased-few-shot-k-32-finetuned-squad-seed-6
4
null
transformers
19,039
--- tags: - generated_from_trainer datasets: - squad model-index: - name: spanbert-base-cased-few-shot-k-32-finetuned-squad-seed-6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # spanbert-base-cased-few-shot-k-32-finetuned-squad-seed-6 This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 200 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
anas-awadalla/spanbert-base-cased-few-shot-k-64-finetuned-squad-seed-2
0e4c5a3e9eb67c0c053c1429575c7aa1cdf90465
2022-02-25T22:28:13.000Z
[ "pytorch", "bert", "question-answering", "dataset:squad", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
question-answering
false
anas-awadalla
null
anas-awadalla/spanbert-base-cased-few-shot-k-64-finetuned-squad-seed-2
4
null
transformers
19,040
---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: spanbert-base-cased-few-shot-k-64-finetuned-squad-seed-2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# spanbert-base-cased-few-shot-k-64-finetuned-squad-seed-2

This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200

### Training results

### Framework versions

- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
DoyyingFace/bert-asian-hate-tweets-self-unclean-small
fea2648b590b9b10cc71c5dcdeea3b74ccd78a4a
2022-02-26T02:45:51.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
DoyyingFace
null
DoyyingFace/bert-asian-hate-tweets-self-unclean-small
4
null
transformers
19,041
Entry not found
DoyyingFace/bert-asian-hate-tweets-self-clean-small
c457e34a8210bcbc58f55d1a3a4b1f2fbf78c972
2022-02-26T02:51:23.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
DoyyingFace
null
DoyyingFace/bert-asian-hate-tweets-self-clean-small
4
null
transformers
19,042
Entry not found
ali2066/finetuned_sentence_itr0_2e-05_all_26_02_2022-03_57_45
1fd847361186bcf6566e978ff2f795122484edd5
2022-02-26T03:03:20.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
ali2066
null
ali2066/finetuned_sentence_itr0_2e-05_all_26_02_2022-03_57_45
4
null
transformers
19,043
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_2e-05_all_26_02_2022-03_57_45
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_2e-05_all_26_02_2022-03_57_45

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4345
- Accuracy: 0.8321
- F1: 0.8904

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3922 | 0.8061 | 0.8747 |
| No log | 2.0 | 390 | 0.3764 | 0.8171 | 0.8837 |
| 0.4074 | 3.0 | 585 | 0.3873 | 0.8220 | 0.8843 |
| 0.4074 | 4.0 | 780 | 0.4361 | 0.8232 | 0.8854 |
| 0.4074 | 5.0 | 975 | 0.4555 | 0.8159 | 0.8793 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
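A hedged usage sketch, not part of the original card: the record tags this DistilBERT checkpoint as `text-classification`, so the matching pipeline should apply. The label names depend on the fine-tuning data and are not documented in the card.

```python
from transformers import pipeline

# Load the fine-tuned sentence classifier named in this record.
classifier = pipeline(
    "text-classification",
    model="ali2066/finetuned_sentence_itr0_2e-05_all_26_02_2022-03_57_45",
)

# Illustrative input sentence.
print(classifier("The argument is well structured and convincing."))
# -> [{'label': ..., 'score': ...}]  (labels depend on the training data)
```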
ali2066/finetuned_sentence_itr1_2e-05_all_26_02_2022-04_03_26
535177c027ef3ac14265accbe29dde63ca82ec7c
2022-02-26T03:08:55.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
ali2066
null
ali2066/finetuned_sentence_itr1_2e-05_all_26_02_2022-04_03_26
4
null
transformers
19,044
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr1_2e-05_all_26_02_2022-04_03_26
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr1_2e-05_all_26_02_2022-04_03_26

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4676
- Accuracy: 0.8299
- F1: 0.8892

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.4087 | 0.8073 | 0.8754 |
| No log | 2.0 | 390 | 0.3952 | 0.8159 | 0.8803 |
| 0.4084 | 3.0 | 585 | 0.4183 | 0.8195 | 0.8831 |
| 0.4084 | 4.0 | 780 | 0.4596 | 0.8280 | 0.8867 |
| 0.4084 | 5.0 | 975 | 0.4919 | 0.8280 | 0.8873 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
ali2066/finetuned_sentence_itr2_2e-05_all_26_02_2022-04_09_01
e74e66159536b4e4ec8420b4854294831234f3f0
2022-02-26T03:14:31.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
ali2066
null
ali2066/finetuned_sentence_itr2_2e-05_all_26_02_2022-04_09_01
4
null
transformers
19,045
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr2_2e-05_all_26_02_2022-04_09_01
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr2_2e-05_all_26_02_2022-04_09_01

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4676
- Accuracy: 0.8299
- F1: 0.8892

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.4087 | 0.8073 | 0.8754 |
| No log | 2.0 | 390 | 0.3952 | 0.8159 | 0.8803 |
| 0.4084 | 3.0 | 585 | 0.4183 | 0.8195 | 0.8831 |
| 0.4084 | 4.0 | 780 | 0.4596 | 0.8280 | 0.8867 |
| 0.4084 | 5.0 | 975 | 0.4919 | 0.8280 | 0.8873 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
DoyyingFace/bert-asian-hate-tweets-self-clean-small-epoch6
a39b0fe7457cc58ae708a90c2fb2e73a14ab9ffd
2022-02-26T03:17:53.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
DoyyingFace
null
DoyyingFace/bert-asian-hate-tweets-self-clean-small-epoch6
4
null
transformers
19,046
Entry not found
ali2066/finetuned_sentence_itr5_2e-05_all_26_02_2022-04_25_39
2b758f4dffc42a8e85ce343efe2a0e792c230523
2022-02-26T03:31:07.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
ali2066
null
ali2066/finetuned_sentence_itr5_2e-05_all_26_02_2022-04_25_39
4
null
transformers
19,047
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr5_2e-05_all_26_02_2022-04_25_39
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr5_2e-05_all_26_02_2022-04_25_39

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4676
- Accuracy: 0.8299
- F1: 0.8892

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.4087 | 0.8073 | 0.8754 |
| No log | 2.0 | 390 | 0.3952 | 0.8159 | 0.8803 |
| 0.4084 | 3.0 | 585 | 0.4183 | 0.8195 | 0.8831 |
| 0.4084 | 4.0 | 780 | 0.4596 | 0.8280 | 0.8867 |
| 0.4084 | 5.0 | 975 | 0.4919 | 0.8280 | 0.8873 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
ali2066/finetuned_sentence_itr7_2e-05_all_26_02_2022-04_36_45
1592a27af206197bdc5b5d499b29c98831dfbdb3
2022-02-26T03:42:02.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers" ]
text-classification
false
ali2066
null
ali2066/finetuned_sentence_itr7_2e-05_all_26_02_2022-04_36_45
4
null
transformers
19,048
Entry not found
anas-awadalla/spanbert-base-cased-few-shot-k-128-finetuned-squad-seed-0
db73dc8f2d43dc620696313c3caec6350f3f32b9
2022-02-26T04:19:12.000Z
[ "pytorch", "bert", "question-answering", "dataset:squad", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
question-answering
false
anas-awadalla
null
anas-awadalla/spanbert-base-cased-few-shot-k-128-finetuned-squad-seed-0
4
null
transformers
19,049
---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: spanbert-base-cased-few-shot-k-128-finetuned-squad-seed-0
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# spanbert-base-cased-few-shot-k-128-finetuned-squad-seed-0

This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200

### Training results

### Framework versions

- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
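The hyperparameter list above maps almost one-to-one onto `transformers.TrainingArguments`; the sketch below reconstructs it and is not part of the original card. `output_dir` is a placeholder, and the Adam betas/epsilon shown in the card are the library defaults, so they are omitted.

```python
from transformers import TrainingArguments

# Hedged reconstruction of the card's "Training hyperparameters" section.
args = TrainingArguments(
    output_dir="spanbert-base-cased-few-shot-k-128-finetuned-squad-seed-0",  # placeholder
    learning_rate=3e-5,
    per_device_train_batch_size=24,   # "train_batch_size: 24"
    per_device_eval_batch_size=24,    # "eval_batch_size: 24"
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,                 # "lr_scheduler_warmup_ratio: 0.1"
    max_steps=200,                    # "training_steps: 200"
)
```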
anas-awadalla/spanbert-base-cased-few-shot-k-256-finetuned-squad-seed-2
d65e31fd22ef135a3abec502cb45ed15d1fd85ee
2022-02-26T05:38:42.000Z
[ "pytorch", "bert", "question-answering", "dataset:squad", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
question-answering
false
anas-awadalla
null
anas-awadalla/spanbert-base-cased-few-shot-k-256-finetuned-squad-seed-2
4
null
transformers
19,050
---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: spanbert-base-cased-few-shot-k-256-finetuned-squad-seed-2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# spanbert-base-cased-few-shot-k-256-finetuned-squad-seed-2

This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0

### Training results

### Framework versions

- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
swcrazyfan/Dekingify-T5-Large
191a5ca5c930a0c90fd93414e69bf186bf16b408
2022-03-06T09:44:13.000Z
[ "pytorch", "onnx", "t5", "text2text-generation", "transformers", "license:apache-2.0", "autotrain_compatible" ]
text2text-generation
false
swcrazyfan
null
swcrazyfan/Dekingify-T5-Large
4
null
transformers
19,051
---
license: apache-2.0
---
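The card itself carries only a license, but the record's tags mark this as a T5 `text2text-generation` model, so it should load with the matching pipeline. This sketch is not from the card, and the input string is illustrative only.

```python
from transformers import pipeline

# Load the T5 checkpoint named in this record for text-to-text generation.
dekingify = pipeline("text2text-generation", model="swcrazyfan/Dekingify-T5-Large")

# Illustrative input; the card does not document the expected prompt format.
print(dekingify("Thou shalt not pass!"))
```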
abhinema/distillgpt2
80cb469e58ce1bb5348782f9848242806e6ef92c
2022-02-27T04:08:32.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
abhinema
null
abhinema/distillgpt2
4
null
transformers
19,052
Entry not found
ali2066/finetuned_sentence_itr1_2e-05_all_27_02_2022-17_33_22
8cc876c76f22401b13add392758ee969ee21f637
2022-02-27T16:38:53.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
ali2066
null
ali2066/finetuned_sentence_itr1_2e-05_all_27_02_2022-17_33_22
4
null
transformers
19,053
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr1_2e-05_all_27_02_2022-17_33_22
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr1_2e-05_all_27_02_2022-17_33_22

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4095
- Accuracy: 0.8263
- F1: 0.8865

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3685 | 0.8293 | 0.8911 |
| No log | 2.0 | 390 | 0.3495 | 0.8415 | 0.8992 |
| 0.4065 | 3.0 | 585 | 0.3744 | 0.8463 | 0.9014 |
| 0.4065 | 4.0 | 780 | 0.4260 | 0.8427 | 0.8980 |
| 0.4065 | 5.0 | 975 | 0.4548 | 0.8366 | 0.8940 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
ali2066/finetuned_sentence_itr2_2e-05_all_27_02_2022-17_38_58
be4762e02f28f2684e3c6f9d0c3910e2bb227cd2
2022-02-27T16:44:27.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
ali2066
null
ali2066/finetuned_sentence_itr2_2e-05_all_27_02_2022-17_38_58
4
null
transformers
19,054
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr2_2e-05_all_27_02_2022-17_38_58
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr2_2e-05_all_27_02_2022-17_38_58

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4095
- Accuracy: 0.8263
- F1: 0.8865

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3685 | 0.8293 | 0.8911 |
| No log | 2.0 | 390 | 0.3495 | 0.8415 | 0.8992 |
| 0.4065 | 3.0 | 585 | 0.3744 | 0.8463 | 0.9014 |
| 0.4065 | 4.0 | 780 | 0.4260 | 0.8427 | 0.8980 |
| 0.4065 | 5.0 | 975 | 0.4548 | 0.8366 | 0.8940 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
ali2066/finetuned_sentence_itr3_2e-05_all_27_02_2022-17_44_32
d316e24b0f20c16fb36481516b2a8733e37acfd9
2022-02-27T16:50:01.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
ali2066
null
ali2066/finetuned_sentence_itr3_2e-05_all_27_02_2022-17_44_32
4
null
transformers
19,055
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr3_2e-05_all_27_02_2022-17_44_32
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr3_2e-05_all_27_02_2022-17_44_32

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4095
- Accuracy: 0.8263
- F1: 0.8865

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3685 | 0.8293 | 0.8911 |
| No log | 2.0 | 390 | 0.3495 | 0.8415 | 0.8992 |
| 0.4065 | 3.0 | 585 | 0.3744 | 0.8463 | 0.9014 |
| 0.4065 | 4.0 | 780 | 0.4260 | 0.8427 | 0.8980 |
| 0.4065 | 5.0 | 975 | 0.4548 | 0.8366 | 0.8940 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
ali2066/finetuned_sentence_itr4_2e-05_all_27_02_2022-17_50_05
7594be3f654037ec7bf182c0360fdea6a45503f2
2022-02-27T16:55:39.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
ali2066
null
ali2066/finetuned_sentence_itr4_2e-05_all_27_02_2022-17_50_05
4
null
transformers
19,056
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr4_2e-05_all_27_02_2022-17_50_05
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr4_2e-05_all_27_02_2022-17_50_05

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4095
- Accuracy: 0.8263
- F1: 0.8865

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3685 | 0.8293 | 0.8911 |
| No log | 2.0 | 390 | 0.3495 | 0.8415 | 0.8992 |
| 0.4065 | 3.0 | 585 | 0.3744 | 0.8463 | 0.9014 |
| 0.4065 | 4.0 | 780 | 0.4260 | 0.8427 | 0.8980 |
| 0.4065 | 5.0 | 975 | 0.4548 | 0.8366 | 0.8940 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
ali2066/finetuned_sentence_itr0_0.0002_all_27_02_2022-17_55_43
c32b0302ce6c724f503e2a4767635a1eed3f3791
2022-02-27T17:01:16.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
ali2066
null
ali2066/finetuned_sentence_itr0_0.0002_all_27_02_2022-17_55_43
4
null
transformers
19,057
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_0.0002_all_27_02_2022-17_55_43
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_0.0002_all_27_02_2022-17_55_43

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7600
- Accuracy: 0.8144
- F1: 0.8788

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3514 | 0.8427 | 0.8979 |
| No log | 2.0 | 390 | 0.3853 | 0.8293 | 0.8936 |
| 0.3147 | 3.0 | 585 | 0.5494 | 0.8268 | 0.8868 |
| 0.3147 | 4.0 | 780 | 0.6235 | 0.8427 | 0.8995 |
| 0.3147 | 5.0 | 975 | 0.8302 | 0.8378 | 0.8965 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
ali2066/finetuned_sentence_itr1_0.0002_all_27_02_2022-18_01_22
145fa9dca570360d48c5df4dc0515d1853aa6b90
2022-02-27T17:06:54.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
ali2066
null
ali2066/finetuned_sentence_itr1_0.0002_all_27_02_2022-18_01_22
4
null
transformers
19,058
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr1_0.0002_all_27_02_2022-18_01_22
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr1_0.0002_all_27_02_2022-18_01_22

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7600
- Accuracy: 0.8144
- F1: 0.8788

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3514 | 0.8427 | 0.8979 |
| No log | 2.0 | 390 | 0.3853 | 0.8293 | 0.8936 |
| 0.3147 | 3.0 | 585 | 0.5494 | 0.8268 | 0.8868 |
| 0.3147 | 4.0 | 780 | 0.6235 | 0.8427 | 0.8995 |
| 0.3147 | 5.0 | 975 | 0.8302 | 0.8378 | 0.8965 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
ali2066/finetuned_sentence_itr2_0.0002_all_27_02_2022-18_06_59
db8b2c75f10ebd6c0995fce07a4235412daa6381
2022-02-27T17:12:30.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
ali2066
null
ali2066/finetuned_sentence_itr2_0.0002_all_27_02_2022-18_06_59
4
null
transformers
19,059
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr2_0.0002_all_27_02_2022-18_06_59
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr2_0.0002_all_27_02_2022-18_06_59

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7600
- Accuracy: 0.8144
- F1: 0.8788

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3514 | 0.8427 | 0.8979 |
| No log | 2.0 | 390 | 0.3853 | 0.8293 | 0.8936 |
| 0.3147 | 3.0 | 585 | 0.5494 | 0.8268 | 0.8868 |
| 0.3147 | 4.0 | 780 | 0.6235 | 0.8427 | 0.8995 |
| 0.3147 | 5.0 | 975 | 0.8302 | 0.8378 | 0.8965 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
ali2066/finetuned_sentence_itr3_0.0002_all_27_02_2022-18_12_34
31fb447e98d4ae7d2fdd1ce519c25fcde2c19d4a
2022-02-27T17:18:06.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
ali2066
null
ali2066/finetuned_sentence_itr3_0.0002_all_27_02_2022-18_12_34
4
null
transformers
19,060
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr3_0.0002_all_27_02_2022-18_12_34
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr3_0.0002_all_27_02_2022-18_12_34

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7600
- Accuracy: 0.8144
- F1: 0.8788

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3514 | 0.8427 | 0.8979 |
| No log | 2.0 | 390 | 0.3853 | 0.8293 | 0.8936 |
| 0.3147 | 3.0 | 585 | 0.5494 | 0.8268 | 0.8868 |
| 0.3147 | 4.0 | 780 | 0.6235 | 0.8427 | 0.8995 |
| 0.3147 | 5.0 | 975 | 0.8302 | 0.8378 | 0.8965 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
ali2066/finetuned_sentence_itr4_0.0002_all_27_02_2022-18_18_11
7242432997977fc93643bce48bb5d4b02c9342f4
2022-02-27T17:23:43.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
ali2066
null
ali2066/finetuned_sentence_itr4_0.0002_all_27_02_2022-18_18_11
4
null
transformers
19,061
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr4_0.0002_all_27_02_2022-18_18_11
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr4_0.0002_all_27_02_2022-18_18_11

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7600
- Accuracy: 0.8144
- F1: 0.8788

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3514 | 0.8427 | 0.8979 |
| No log | 2.0 | 390 | 0.3853 | 0.8293 | 0.8936 |
| 0.3147 | 3.0 | 585 | 0.5494 | 0.8268 | 0.8868 |
| 0.3147 | 4.0 | 780 | 0.6235 | 0.8427 | 0.8995 |
| 0.3147 | 5.0 | 975 | 0.8302 | 0.8378 | 0.8965 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
ali2066/finetuned_sentence_itr0_3e-05_all_27_02_2022-18_23_48
4c356b9e7e254263ad9e1f215e7dcee2dfbcad28
2022-02-27T17:29:20.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
ali2066
null
ali2066/finetuned_sentence_itr0_3e-05_all_27_02_2022-18_23_48
4
null
transformers
19,062
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_3e-05_all_27_02_2022-18_23_48
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_3e-05_all_27_02_2022-18_23_48

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3962
- Accuracy: 0.8231
- F1: 0.8873

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3591 | 0.8366 | 0.8950 |
| No log | 2.0 | 390 | 0.3558 | 0.8415 | 0.9012 |
| 0.3647 | 3.0 | 585 | 0.4049 | 0.8427 | 0.8983 |
| 0.3647 | 4.0 | 780 | 0.5030 | 0.8378 | 0.8949 |
| 0.3647 | 5.0 | 975 | 0.5719 | 0.8354 | 0.8943 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
ali2066/finetuned_sentence_itr1_3e-05_all_27_02_2022-18_29_24
d108f10aaf0afa62019cfca12d663050c22d4e37
2022-02-27T17:34:56.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
ali2066
null
ali2066/finetuned_sentence_itr1_3e-05_all_27_02_2022-18_29_24
4
null
transformers
19,063
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr1_3e-05_all_27_02_2022-18_29_24
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr1_3e-05_all_27_02_2022-18_29_24

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3962
- Accuracy: 0.8231
- F1: 0.8873

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3591 | 0.8366 | 0.8950 |
| No log | 2.0 | 390 | 0.3558 | 0.8415 | 0.9012 |
| 0.3647 | 3.0 | 585 | 0.4049 | 0.8427 | 0.8983 |
| 0.3647 | 4.0 | 780 | 0.5030 | 0.8378 | 0.8949 |
| 0.3647 | 5.0 | 975 | 0.5719 | 0.8354 | 0.8943 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
ali2066/finetuned_sentence_itr3_3e-05_all_27_02_2022-18_40_40
08d942efb808b5a683404a6e7eedade4c8f01ab6
2022-02-27T17:46:15.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
ali2066
null
ali2066/finetuned_sentence_itr3_3e-05_all_27_02_2022-18_40_40
4
null
transformers
19,064
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr3_3e-05_all_27_02_2022-18_40_40
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr3_3e-05_all_27_02_2022-18_40_40

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3962
- Accuracy: 0.8231
- F1: 0.8873

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3591 | 0.8366 | 0.8950 |
| No log | 2.0 | 390 | 0.3558 | 0.8415 | 0.9012 |
| 0.3647 | 3.0 | 585 | 0.4049 | 0.8427 | 0.8983 |
| 0.3647 | 4.0 | 780 | 0.5030 | 0.8378 | 0.8949 |
| 0.3647 | 5.0 | 975 | 0.5719 | 0.8354 | 0.8943 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
ali2066/finetuned_sentence_itr4_3e-05_all_27_02_2022-18_46_19
896513e5117a34f76e8e36f1f206c7e123f1859e
2022-02-27T17:51:50.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
ali2066
null
ali2066/finetuned_sentence_itr4_3e-05_all_27_02_2022-18_46_19
4
null
transformers
19,065
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr4_3e-05_all_27_02_2022-18_46_19
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr4_3e-05_all_27_02_2022-18_46_19

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3962
- Accuracy: 0.8231
- F1: 0.8873

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3591 | 0.8366 | 0.8950 |
| No log | 2.0 | 390 | 0.3558 | 0.8415 | 0.9012 |
| 0.3647 | 3.0 | 585 | 0.4049 | 0.8427 | 0.8983 |
| 0.3647 | 4.0 | 780 | 0.5030 | 0.8378 | 0.8949 |
| 0.3647 | 5.0 | 975 | 0.5719 | 0.8354 | 0.8943 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
ali2066/finetuned_sentence_itr1_2e-05_webDiscourse_27_02_2022-18_54_09
409170cb78170a72e924e31bb2b909b9331ee0e2
2022-02-27T17:56:27.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
ali2066
null
ali2066/finetuned_sentence_itr1_2e-05_webDiscourse_27_02_2022-18_54_09
4
null
transformers
19,066
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr1_2e-05_webDiscourse_27_02_2022-18_54_09
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr1_2e-05_webDiscourse_27_02_2022-18_54_09

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6049
- Accuracy: 0.6926
- F1: 0.4160

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 48 | 0.5835 | 0.71 | 0.0333 |
| No log | 2.0 | 96 | 0.5718 | 0.715 | 0.3871 |
| No log | 3.0 | 144 | 0.5731 | 0.715 | 0.4 |
| No log | 4.0 | 192 | 0.6009 | 0.705 | 0.3516 |
| No log | 5.0 | 240 | 0.6122 | 0.7 | 0.4000 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
ali2066/finetuned_sentence_itr2_2e-05_webDiscourse_27_02_2022-18_56_32
5086b2d3898e7a3cc18da836f21e75a62ba006ce
2022-02-27T17:59:00.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
ali2066
null
ali2066/finetuned_sentence_itr2_2e-05_webDiscourse_27_02_2022-18_56_32
4
null
transformers
19,067
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr2_2e-05_webDiscourse_27_02_2022-18_56_32
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr2_2e-05_webDiscourse_27_02_2022-18_56_32

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6049
- Accuracy: 0.6926
- F1: 0.4160

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 48 | 0.5835 | 0.71 | 0.0333 |
| No log | 2.0 | 96 | 0.5718 | 0.715 | 0.3871 |
| No log | 3.0 | 144 | 0.5731 | 0.715 | 0.4 |
| No log | 4.0 | 192 | 0.6009 | 0.705 | 0.3516 |
| No log | 5.0 | 240 | 0.6122 | 0.7 | 0.4000 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
ali2066/finetuned_sentence_itr4_2e-05_webDiscourse_27_02_2022-19_01_41
e494a496d83e5bd4da00240589a69a926c530f36
2022-02-27T18:02:06.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers" ]
text-classification
false
ali2066
null
ali2066/finetuned_sentence_itr4_2e-05_webDiscourse_27_02_2022-19_01_41
4
null
transformers
19,068
Entry not found
ali2066/finetuned_sentence_itr0_3e-05_all_27_02_2022-19_16_53
684c3aff18d7116cebc33f52f20b1ec9e38db0a9
2022-02-27T18:22:24.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
ali2066
null
ali2066/finetuned_sentence_itr0_3e-05_all_27_02_2022-19_16_53
4
null
transformers
19,069
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_3e-05_all_27_02_2022-19_16_53
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_3e-05_all_27_02_2022-19_16_53

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3944
- Accuracy: 0.8279
- F1: 0.8901

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3946 | 0.8012 | 0.8743 |
| No log | 2.0 | 390 | 0.3746 | 0.8329 | 0.8929 |
| 0.3644 | 3.0 | 585 | 0.4288 | 0.8268 | 0.8849 |
| 0.3644 | 4.0 | 780 | 0.5352 | 0.8232 | 0.8841 |
| 0.3644 | 5.0 | 975 | 0.5768 | 0.8268 | 0.8864 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
ali2066/finetuned_sentence_itr0_2e-05_webDiscourse_27_02_2022-19_22_29
4d5fb4df6c2e3d47f77a8fbc9c075a1153e9519a
2022-02-27T18:25:01.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
ali2066
null
ali2066/finetuned_sentence_itr0_2e-05_webDiscourse_27_02_2022-19_22_29
4
null
transformers
19,070
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_2e-05_webDiscourse_27_02_2022-19_22_29
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_2e-05_webDiscourse_27_02_2022-19_22_29

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5819
- Accuracy: 0.7058
- F1: 0.4267

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 48 | 0.6110 | 0.665 | 0.0 |
| No log | 2.0 | 96 | 0.5706 | 0.685 | 0.2588 |
| No log | 3.0 | 144 | 0.5484 | 0.725 | 0.5299 |
| No log | 4.0 | 192 | 0.5585 | 0.71 | 0.4727 |
| No log | 5.0 | 240 | 0.5616 | 0.725 | 0.5133 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
ali2066/finetuned_sentence_itr0_0.0002_webDiscourse_27_02_2022-19_25_06
1faaebfe4ac9422d00335662ae596643c357c5ba
2022-02-27T18:27:34.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
ali2066
null
ali2066/finetuned_sentence_itr0_0.0002_webDiscourse_27_02_2022-19_25_06
4
null
transformers
19,071
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_0.0002_webDiscourse_27_02_2022-19_25_06
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_0.0002_webDiscourse_27_02_2022-19_25_06

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5777
- Accuracy: 0.6794
- F1: 0.5010

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 48 | 0.6059 | 0.63 | 0.4932 |
| No log | 2.0 | 96 | 0.6327 | 0.705 | 0.5630 |
| No log | 3.0 | 144 | 0.7003 | 0.695 | 0.5197 |
| No log | 4.0 | 192 | 0.9368 | 0.69 | 0.4655 |
| No log | 5.0 | 240 | 1.1935 | 0.685 | 0.4425 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
ali2066/finetuned_sentence_itr0_3e-05_webDiscourse_27_02_2022-19_27_41
681862b8b6aed5a5b6345cddd92408e19a6c8d33
2022-02-27T18:30:17.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
ali2066
null
ali2066/finetuned_sentence_itr0_3e-05_webDiscourse_27_02_2022-19_27_41
4
null
transformers
19,072
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_3e-05_webDiscourse_27_02_2022-19_27_41
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_3e-05_webDiscourse_27_02_2022-19_27_41

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6020
- Accuracy: 0.7032
- F1: 0.4851

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 48 | 0.5914 | 0.67 | 0.0294 |
| No log | 2.0 | 96 | 0.5616 | 0.695 | 0.2824 |
| No log | 3.0 | 144 | 0.5596 | 0.73 | 0.5909 |
| No log | 4.0 | 192 | 0.6273 | 0.73 | 0.5 |
| No log | 5.0 | 240 | 0.6370 | 0.71 | 0.5 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
ali2066/finetuned_sentence_itr0_2e-05_essays_27_02_2022-19_30_22
1bed5dd40b91ae2be36fe1d5c6298f9167424d6e
2022-02-27T18:33:05.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
ali2066
null
ali2066/finetuned_sentence_itr0_2e-05_essays_27_02_2022-19_30_22
4
null
transformers
19,073
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_2e-05_essays_27_02_2022-19_30_22
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_2e-05_essays_27_02_2022-19_30_22

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3455
- Accuracy: 0.8609
- F1: 0.9156

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 81 | 0.4468 | 0.8235 | 0.8929 |
| No log | 2.0 | 162 | 0.4497 | 0.8382 | 0.9 |
| No log | 3.0 | 243 | 0.4861 | 0.8309 | 0.8940 |
| No log | 4.0 | 324 | 0.5087 | 0.8235 | 0.8879 |
| No log | 5.0 | 405 | 0.5228 | 0.8199 | 0.8858 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
ali2066/finetuned_sentence_itr0_0.0002_essays_27_02_2022-19_33_10
9cf721427043defce0015b74f7aa3d742b5e361c
2022-02-27T18:35:51.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
ali2066
null
ali2066/finetuned_sentence_itr0_0.0002_essays_27_02_2022-19_33_10
4
null
transformers
19,074
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_0.0002_essays_27_02_2022-19_33_10
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_0.0002_essays_27_02_2022-19_33_10

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3358
- Accuracy: 0.8688
- F1: 0.9225

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 81   | 0.4116          | 0.8382   | 0.9027 |
| No log        | 2.0   | 162  | 0.4360          | 0.8382   | 0.8952 |
| No log        | 3.0   | 243  | 0.5719          | 0.8382   | 0.8995 |
| No log        | 4.0   | 324  | 0.7251          | 0.8493   | 0.9021 |
| No log        | 5.0   | 405  | 0.8384          | 0.8456   | 0.9019 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
ali2066/finetuned_sentence_itr0_0.0002_editorials_27_02_2022-19_42_36
8e559a225d5edd4e4d0547a9ae4bc47be128a5f2
2022-02-27T18:46:16.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
ali2066
null
ali2066/finetuned_sentence_itr0_0.0002_editorials_27_02_2022-19_42_36
4
null
transformers
19,075
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_0.0002_editorials_27_02_2022-19_42_36
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_0.0002_editorials_27_02_2022-19_42_36

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0926
- Accuracy: 0.9772
- F1: 0.9883

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 104  | 0.0539          | 0.9885   | 0.9942 |
| No log        | 2.0   | 208  | 0.0282          | 0.9885   | 0.9942 |
| No log        | 3.0   | 312  | 0.0317          | 0.9914   | 0.9956 |
| No log        | 4.0   | 416  | 0.0462          | 0.9885   | 0.9942 |
| 0.0409        | 5.0   | 520  | 0.0517          | 0.9885   | 0.9942 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
ali2066/finetuned_sentence_itr0_3e-05_editorials_27_02_2022-19_46_22
33598da335cd682927cb214aa3ed216a9fb9801a
2022-02-27T18:50:02.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
ali2066
null
ali2066/finetuned_sentence_itr0_3e-05_editorials_27_02_2022-19_46_22
4
null
transformers
19,076
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_3e-05_editorials_27_02_2022-19_46_22
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_3e-05_editorials_27_02_2022-19_46_22

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0890
- Accuracy: 0.9750
- F1: 0.9873

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 104  | 0.0485          | 0.9885   | 0.9942 |
| No log        | 2.0   | 208  | 0.0558          | 0.9857   | 0.9927 |
| No log        | 3.0   | 312  | 0.0501          | 0.9828   | 0.9913 |
| No log        | 4.0   | 416  | 0.0593          | 0.9828   | 0.9913 |
| 0.04          | 5.0   | 520  | 0.0653          | 0.9828   | 0.9913 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
ali2066/finetuned_sentence_itr0_2e-05_all_27_02_2022-22_25_09
d97e4fd19efab994e0f696296321860000bc24fc
2022-02-27T21:30:48.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
ali2066
null
ali2066/finetuned_sentence_itr0_2e-05_all_27_02_2022-22_25_09
4
null
transformers
19,077
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_2e-05_all_27_02_2022-22_25_09
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_2e-05_all_27_02_2022-22_25_09

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4638
- Accuracy: 0.8247
- F1: 0.8867

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 195  | 0.4069          | 0.7976   | 0.875  |
| No log        | 2.0   | 390  | 0.4061          | 0.8134   | 0.8838 |
| 0.4074        | 3.0   | 585  | 0.4075          | 0.8134   | 0.8798 |
| 0.4074        | 4.0   | 780  | 0.4746          | 0.8256   | 0.8885 |
| 0.4074        | 5.0   | 975  | 0.4881          | 0.8220   | 0.8845 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
ali2066/finetuned_sentence_itr0_0.0002_all_27_02_2022-22_30_53
7f6b7d07609c76434f881c6c9a7c9ec97f72148c
2022-02-27T21:36:21.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
ali2066
null
ali2066/finetuned_sentence_itr0_0.0002_all_27_02_2022-22_30_53
4
null
transformers
19,078
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_0.0002_all_27_02_2022-22_30_53
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_0.0002_all_27_02_2022-22_30_53

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3825
- Accuracy: 0.8144
- F1: 0.8833

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 195  | 0.3975          | 0.8122   | 0.8795 |
| No log        | 2.0   | 390  | 0.4376          | 0.8085   | 0.8673 |
| 0.3169        | 3.0   | 585  | 0.5736          | 0.8171   | 0.8790 |
| 0.3169        | 4.0   | 780  | 0.8178          | 0.8098   | 0.8754 |
| 0.3169        | 5.0   | 975  | 0.9244          | 0.8073   | 0.8738 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
ali2066/finetuned_sentence_itr0_3e-05_all_27_02_2022-22_36_26
115df9b28d963f08158a8a9774c3065398a0b7af
2022-02-27T21:41:47.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
ali2066
null
ali2066/finetuned_sentence_itr0_3e-05_all_27_02_2022-22_36_26
4
null
transformers
19,079
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_3e-05_all_27_02_2022-22_36_26
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_3e-05_all_27_02_2022-22_36_26

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6071
- Accuracy: 0.8337
- F1: 0.8922

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 195  | 0.3920          | 0.7988   | 0.8624 |
| No log        | 2.0   | 390  | 0.3873          | 0.8171   | 0.8739 |
| 0.3673        | 3.0   | 585  | 0.4354          | 0.8256   | 0.8835 |
| 0.3673        | 4.0   | 780  | 0.5358          | 0.8293   | 0.8887 |
| 0.3673        | 5.0   | 975  | 0.5616          | 0.8366   | 0.8923 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
eheitor/wav2vec2-base-xlsr53-ser_demo
854d6159a8ff16823fc80c2e6cb2c66ca42d524b
2022-03-01T00:11:06.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
false
eheitor
null
eheitor/wav2vec2-base-xlsr53-ser_demo
4
null
transformers
19,080
Entry not found
ali2066/finetuned_sentence_itr0_2e-05_all_01_03_2022-02_53_51
68d7230d791cf4126a12b48d4901688346410628
2022-03-01T02:20:45.000Z
[ "pytorch", "tensorboard", "roberta", "text-classification", "transformers", "generated_from_trainer", "model-index" ]
text-classification
false
ali2066
null
ali2066/finetuned_sentence_itr0_2e-05_all_01_03_2022-02_53_51
4
null
transformers
19,081
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_2e-05_all_01_03_2022-02_53_51
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_2e-05_all_01_03_2022-02_53_51

This model is a fine-tuned version of [siebert/sentiment-roberta-large-english](https://huggingface.co/siebert/sentiment-roberta-large-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4563
- Accuracy: 0.8440
- F1: 0.8954

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 195  | 0.4302          | 0.8073   | 0.8754 |
| No log        | 2.0   | 390  | 0.3970          | 0.8220   | 0.8875 |
| 0.3703        | 3.0   | 585  | 0.3972          | 0.8402   | 0.8934 |
| 0.3703        | 4.0   | 780  | 0.4945          | 0.8390   | 0.8935 |
| 0.3703        | 5.0   | 975  | 0.5354          | 0.8305   | 0.8898 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
ali2066/finetuned_sentence_itr0_2e-05_all_01_03_2022-05_32_03
92823c31e4adef3e493d611534e2bcf0f5a7ad04
2022-03-01T04:37:52.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
ali2066
null
ali2066/finetuned_sentence_itr0_2e-05_all_01_03_2022-05_32_03
4
null
transformers
19,082
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: finetuned_sentence_itr0_2e-05_all_01_03_2022-05_32_03
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_2e-05_all_01_03_2022-05_32_03

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4208
- Accuracy: 0.8283
- F1: 0.8915
- Precision: 0.8487
- Recall: 0.9389

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log        | 1.0   | 390  | 0.4443          | 0.7768   | 0.8589 | 0.8072    | 0.9176 |
| 0.4532        | 2.0   | 780  | 0.4603          | 0.8098   | 0.8791 | 0.8302    | 0.9341 |
| 0.2608        | 3.0   | 1170 | 0.5284          | 0.8061   | 0.8713 | 0.8567    | 0.8863 |
| 0.1577        | 4.0   | 1560 | 0.6398          | 0.8085   | 0.8749 | 0.8472    | 0.9044 |
| 0.1577        | 5.0   | 1950 | 0.7089          | 0.8085   | 0.8741 | 0.8516    | 0.8979 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
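This is the first card in the series that also reports precision and recall. A plausible `compute_metrics` callback producing all four columns (a sketch, not the author's code; it assumes a binary task with the positive class at index 1) is:

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    # Sketch of a Trainer metrics callback for the four columns above.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="binary"  # assumes a binary classification task
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1,
        "precision": precision,
        "recall": recall,
    }
```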
armageddon/bert-large-uncased-squad2-covid-qa-deepset
39ceab4b47372274a9909c32343693915e3b7ed5
2022-03-01T09:03:30.000Z
[ "pytorch", "tensorboard", "bert", "question-answering", "dataset:covid_qa_deepset", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
question-answering
false
armageddon
null
armageddon/bert-large-uncased-squad2-covid-qa-deepset
4
null
transformers
19,083
---
tags:
- generated_from_trainer
datasets:
- covid_qa_deepset
model-index:
- name: bert-large-uncased-squad2-covid-qa-deepset
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-large-uncased-squad2-covid-qa-deepset

This model is a fine-tuned version of [phiyodr/bert-large-finetuned-squad2](https://huggingface.co/phiyodr/bert-large-finetuned-squad2) on the covid_qa_deepset dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

### Framework versions

- Transformers 4.16.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
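Since the card omits a usage section, here is a hedged inference sketch; the repo id comes from this record's metadata, but the question/context pair is invented:

```python
from transformers import pipeline

# Usage sketch (not from the original card); repo id taken from this record.
qa = pipeline(
    "question-answering",
    model="armageddon/bert-large-uncased-squad2-covid-qa-deepset",
)
result = qa(
    question="What virus causes COVID-19?",
    context="COVID-19 is caused by SARS-CoV-2, a coronavirus first identified in 2019.",
)
print(result["answer"], result["score"])
```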
sancharidan/scibert_expfinder_SCIS
afb66ff5a7d902875a0e0a335e8664d46a3252ff
2022-03-01T07:06:10.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
sancharidan
null
sancharidan/scibert_expfinder_SCIS
4
null
transformers
19,084
Entry not found
Kevincp560/bart-base-finetuned-pubmed
782b3e093e90f15d9e677b61b0a417ec43dfef6c
2022-03-01T12:08:00.000Z
[ "pytorch", "tensorboard", "bart", "text2text-generation", "dataset:pub_med_summarization_dataset", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
text2text-generation
false
Kevincp560
null
Kevincp560/bart-base-finetuned-pubmed
4
null
transformers
19,085
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- pub_med_summarization_dataset
metrics:
- rouge
model-index:
- name: bart-base-finetuned-pubmed
  results:
  - task:
      name: Sequence-to-sequence Language Modeling
      type: text2text-generation
    dataset:
      name: pub_med_summarization_dataset
      type: pub_med_summarization_dataset
      args: document
    metrics:
    - name: Rouge1
      type: rouge
      value: 9.3963
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bart-base-finetuned-pubmed

This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the pub_med_summarization_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0277
- Rouge1: 9.3963
- Rouge2: 4.0473
- Rougel: 8.4526
- Rougelsum: 8.9659
- Gen Len: 20.0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 2.3706        | 1.0   | 4000  | 2.1245          | 9.1644 | 3.8264 | 8.2223 | 8.718     | 20.0    |
| 2.2246        | 2.0   | 8000  | 2.0811          | 9.023  | 3.7716 | 8.1453 | 8.5998    | 20.0    |
| 2.1034        | 3.0   | 12000 | 2.0469          | 9.4412 | 4.0783 | 8.4949 | 8.9977    | 20.0    |
| 2.0137        | 4.0   | 16000 | 2.0390          | 9.2261 | 3.9307 | 8.3154 | 8.7937    | 20.0    |
| 1.9288        | 5.0   | 20000 | 2.0277          | 9.3963 | 4.0473 | 8.4526 | 8.9659    | 20.0    |

### Framework versions

- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
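The Gen Len of 20.0 in every row suggests the evaluation generated very short outputs. A hedged usage sketch follows (not part of the original card; the abstract text is invented):

```python
from transformers import pipeline

# Usage sketch; repo id taken from this record's metadata.
summarizer = pipeline("summarization", model="Kevincp560/bart-base-finetuned-pubmed")
abstract = (
    "Background: We evaluated a candidate therapy in a randomized controlled "
    "trial of 200 patients with chronic disease. Outcomes were measured at "
    "12 weeks and compared against placebo."
)
# max_length=20 mirrors the short generation length reported in the table.
print(summarizer(abstract, max_length=20, min_length=5))
```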
ali2066/finetuned_sentence_itr0_2e-05_webDiscourse_01_03_2022-13_17_55
4fdaaf2517101ac528fd2dd095e11d53acdc281d
2022-03-01T12:20:35.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
ali2066
null
ali2066/finetuned_sentence_itr0_2e-05_webDiscourse_01_03_2022-13_17_55
4
null
transformers
19,086
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: finetuned_sentence_itr0_2e-05_webDiscourse_01_03_2022-13_17_55
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_2e-05_webDiscourse_01_03_2022-13_17_55

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7224
- Accuracy: 0.6979
- F1: 0.4736
- Precision: 0.5074
- Recall: 0.4440

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log        | 1.0   | 95   | 0.6009          | 0.65     | 0.2222 | 0.625     | 0.1351 |
| No log        | 2.0   | 190  | 0.6140          | 0.675    | 0.3689 | 0.6552    | 0.2568 |
| No log        | 3.0   | 285  | 0.6580          | 0.67     | 0.4590 | 0.5833    | 0.3784 |
| No log        | 4.0   | 380  | 0.7560          | 0.665    | 0.4806 | 0.5636    | 0.4189 |
| No log        | 5.0   | 475  | 0.8226          | 0.665    | 0.464  | 0.5686    | 0.3919 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
ali2066/finetuned_sentence_itr0_2e-05_essays_01_03_2022-13_20_40
43779b35e181e3a8b6eeeb350c4daad5ed46d896
2022-03-01T12:21:10.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers" ]
text-classification
false
ali2066
null
ali2066/finetuned_sentence_itr0_2e-05_essays_01_03_2022-13_20_40
4
null
transformers
19,087
Entry not found
ali2066/correct_distilBERT_token_itr0_1e-05_editorials_01_03_2022-15_42_32
3deaafce5526cd4493aff950e27cebcf0161bdc2
2022-03-01T14:43:43.000Z
[ "pytorch", "tensorboard", "distilbert", "token-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
token-classification
false
ali2066
null
ali2066/correct_distilBERT_token_itr0_1e-05_editorials_01_03_2022-15_42_32
4
null
transformers
19,088
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: correct_distilBERT_token_itr0_1e-05_editorials_01_03_2022-15_42_32
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# correct_distilBERT_token_itr0_1e-05_editorials_01_03_2022-15_42_32

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1206
- Precision: 0.0637
- Recall: 0.0080
- F1: 0.0141
- Accuracy: 0.9707

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 15   | 0.1222          | 0.12      | 0.0139 | 0.0249 | 0.9736   |
| No log        | 2.0   | 30   | 0.1159          | 0.12      | 0.0139 | 0.0249 | 0.9736   |
| No log        | 3.0   | 45   | 0.1082          | 0.12      | 0.0139 | 0.0249 | 0.9736   |
| No log        | 4.0   | 60   | 0.1042          | 0.12      | 0.0139 | 0.0249 | 0.9736   |
| No log        | 5.0   | 75   | 0.1029          | 0.12      | 0.0139 | 0.0249 | 0.9736   |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
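The very low span-level precision and recall against a high token accuracy suggest most tokens carry the majority (O) label. For completeness, an inference sketch (not from the original card; the input sentence is invented):

```python
from transformers import pipeline

# Usage sketch; repo id taken from this record's metadata.
tagger = pipeline(
    "token-classification",
    model="ali2066/correct_distilBERT_token_itr0_1e-05_editorials_01_03_2022-15_42_32",
    aggregation_strategy="simple",  # merge sub-word pieces into whole spans
)
print(tagger("The editorial sharply criticised the new climate policy."))
```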
Ebtihal/AraBertMo_base_V9
544c64595b66e043797cce362e816d5870ae2416
2022-03-25T07:25:05.000Z
[ "pytorch", "bert", "fill-mask", "ar", "dataset:OSCAR", "transformers", "Fill-Mask", "autotrain_compatible" ]
fill-mask
false
Ebtihal
null
Ebtihal/AraBertMo_base_V9
4
null
transformers
19,089
---
language: ar
tags:
- Fill-Mask
datasets:
- OSCAR
widget:
- text: " السلام عليكم ورحمة[MASK] وبركاتة"
- text: " اهلا وسهلا بكم في [MASK] من سيربح المليون"
- text: " مرحبا بك عزيزي الزائر [MASK] موقعنا "
---

# Arabic BERT Model: AraBertMo_base_V9

**AraBERTMo** is an Arabic pre-trained language model based on [Google's BERT architecture](https://github.com/google-research/bert). AraBERTMo_base uses the same BERT-Base config and now comes in 10 new variants.
All models are available on the `HuggingFace` model page under the [Ebtihal](https://huggingface.co/Ebtihal/) name. Checkpoints are available in PyTorch format.

## Pretraining Corpus

The `AraBertMo_base_V9` model was pre-trained on ~3 million words:
- [OSCAR](https://traces1.inria.fr/oscar/) - Arabic version "unshuffled_deduplicated_ar".

## Training results

This model achieves the following results:

| Task      | Num examples | Num Epochs | Batch Size | steps | Wall time  | training loss |
|:---------:|:------------:|:----------:|:----------:|:-----:|:----------:|:-------------:|
| Fill-Mask | 30024        | 9          | 64         | 4230  | 7h 57m 42s | 7.3264        |

## Load Pretrained Model

You can use this model by installing `torch` or `tensorflow` and the Hugging Face library `transformers`, then initializing it like this:

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("Ebtihal/AraBertMo_base_V9")
model = AutoModelForMaskedLM.from_pretrained("Ebtihal/AraBertMo_base_V9")
```

## This model was built for a master's degree research project at:

- [University of Kufa](https://uokufa.edu.iq/).
- [Faculty of Computer Science and Mathematics](https://mathcomp.uokufa.edu.iq/).
- **Department of Computer Science**
batterydata/bert-base-uncased-squad-v1
ced392697ea10dda819fbffacd1a66eb741ccfb5
2022-03-03T19:53:31.000Z
[ "pytorch", "bert", "question-answering", "en", "dataset:squad", "dataset:batterydata/battery-device-data-qa", "transformers", "question answering", "license:apache-2.0", "autotrain_compatible" ]
question-answering
false
batterydata
null
batterydata/bert-base-uncased-squad-v1
4
null
transformers
19,090
---
language: en
tags:
- question answering
license: apache-2.0
datasets:
- squad
- batterydata/battery-device-data-qa
metrics: squad
---

# BERT-base-uncased for QA

**Language model:** bert-base-uncased
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD v1
**Eval data:** SQuAD v1
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100

## Hyperparameters

```
batch_size = 32
n_epochs = 3
base_LM_model = "bert-base-uncased"
max_seq_len = 386
learning_rate = 3e-5
doc_stride=128
max_query_length=64
```

## Performance

Evaluated on the SQuAD v1.0 dev set.

```
"exact": 80.93,
"f1": 88.20,
```

Evaluated on the battery device dataset.

```
"precision": 62.19,
"recall": 75.00,
```

## Usage

### In Transformers

```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_name = "batterydata/bert-base-uncased-squad-v1"

# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
    'question': 'What is the electrolyte?',
    'context': 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
}
res = nlp(QA_input)

# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```

## Authors

Shu Huang: `sh2009 [at] cam.ac.uk`

Jacqueline Cole: `jmc61 [at] cam.ac.uk`

## Citation

BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
aaraki/marian-finetuned-kde4-en-to-fr
ca878625037784a80830d46c1faa8b5e424c68e4
2022-03-02T01:54:57.000Z
[ "pytorch", "tensorboard", "marian", "text2text-generation", "dataset:kde4", "transformers", "translation", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
translation
false
aaraki
null
aaraki/marian-finetuned-kde4-en-to-fr
4
null
transformers
19,091
---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
  results:
  - task:
      name: Sequence-to-sequence Language Modeling
      type: text2text-generation
    dataset:
      name: kde4
      type: kde4
      args: en-fr
    metrics:
    - name: Bleu
      type: bleu
      value: 52.94560734092563
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# marian-finetuned-kde4-en-to-fr

This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8559
- Bleu: 52.9456

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
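A hedged usage sketch (not in the original card; the repo id comes from this record and the input string is invented). The explicit `translation_en_to_fr` task name avoids having to specify source and target languages separately:

```python
from transformers import pipeline

# Usage sketch; repo id taken from this record's metadata.
translator = pipeline(
    "translation_en_to_fr", model="aaraki/marian-finetuned-kde4-en-to-fr"
)
print(translator("Unable to import the selected archive file."))
# KDE4 is UI/documentation text, so the model is tuned toward short technical strings.
```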
Akash7897/distilbert-base-uncased-finetuned-cola
e25f95dffc22db6cbe5102f5f59aeeba04e901b0
2022-03-02T08:29:47.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:glue", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
Akash7897
null
Akash7897/distilbert-base-uncased-finetuned-cola
4
null
transformers
19,092
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: glue
      type: glue
      args: cola
    metrics:
    - name: Matthews Correlation
      type: matthews_correlation
      value: 0.522211073949747
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-cola

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0789
- Matthews Correlation: 0.5222

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.1472        | 1.0   | 535  | 0.8407          | 0.4915               |
| 0.1365        | 2.0   | 1070 | 0.9236          | 0.4990               |
| 0.1194        | 3.0   | 1605 | 0.8753          | 0.4953               |
| 0.1313        | 4.0   | 2140 | 0.9684          | 0.5013               |
| 0.0895        | 5.0   | 2675 | 1.0789          | 0.5222               |

### Framework versions

- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
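The reported metric can be recomputed with the `datasets` version listed above; the sketch below uses toy predictions rather than the model's actual outputs:

```python
from datasets import load_metric

# Sketch: GLUE/CoLA's metric is Matthews correlation (datasets 1.18 API).
metric = load_metric("glue", "cola")
print(metric.compute(predictions=[0, 1, 1, 0], references=[0, 1, 0, 0]))
# -> {'matthews_correlation': ...}
```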
luffycodes/reg-roberta-base-mrpc
18b1244164f24ba71b70794013788c2515378449
2022-04-06T03:32:27.000Z
[ "pytorch", "roberta", "transformers" ]
null
false
luffycodes
null
luffycodes/reg-roberta-base-mrpc
4
null
transformers
19,093
Entry not found
vuiseng9/bert-squad-nncf-qat-mvmt-hybrid-filled-lt
48eabb46113dfc7e8ed718bf365af798d5bfb650
2022-03-03T05:00:27.000Z
[ "pytorch", "onnx", "bert", "transformers" ]
null
false
vuiseng9
null
vuiseng9/bert-squad-nncf-qat-mvmt-hybrid-filled-lt
4
null
transformers
19,094
carolEileen/distilbert-base-uncased-finetuned-imdb
20abf2a876143f2452db0b25a53a7adac60636b1
2022-03-03T09:07:29.000Z
[ "pytorch", "tensorboard", "distilbert", "fill-mask", "dataset:imdb", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
fill-mask
false
carolEileen
null
carolEileen/distilbert-base-uncased-finetuned-imdb
4
null
transformers
19,095
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-imdb

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4725

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086        | 1.0   | 157  | 2.4897          |
| 2.5756        | 2.0   | 314  | 2.4230          |
| 2.5395        | 3.0   | 471  | 2.4358          |

### Framework versions

- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
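For a masked-language model, the evaluation loss translates directly into a pseudo-perplexity of roughly exp(2.4725) ≈ 11.9. A hedged usage sketch follows (not in the original card; the example sentence is invented):

```python
import math
from transformers import pipeline

# Usage sketch; repo id taken from this record's metadata.
fill = pipeline("fill-mask", model="carolEileen/distilbert-base-uncased-finetuned-imdb")
print(fill("This movie was an absolute [MASK]."))

# Perplexity implied by the reported evaluation loss:
print(math.exp(2.4725))  # ~11.9
```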
Aktsvigun/bart-base_tapt_email1e4
cd664740e8f593d54705c116d64128709fea5e8f
2022-03-03T11:51:40.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
Aktsvigun
null
Aktsvigun/bart-base_tapt_email1e4
4
null
transformers
19,096
Entry not found
Anthos23/FS-finbert-fine-tuned
b2153d532c679117d9074803bafb9fca4fec3810
2022-03-04T13:00:29.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
Anthos23
null
Anthos23/FS-finbert-fine-tuned
4
null
transformers
19,097
Entry not found
mmaguero/beto-gn-base-cased
fd977a4c505169e11f3ef07791a815e695c6d265
2022-03-06T08:06:11.000Z
[ "pytorch", "bert", "fill-mask", "gn", "es", "dataset:wikipedia", "dataset:wiktionary", "transformers", "license:mit", "autotrain_compatible" ]
fill-mask
false
mmaguero
null
mmaguero/beto-gn-base-cased
4
null
transformers
19,098
---
language:
- gn
- es
license: mit
datasets:
- wikipedia
- wiktionary
widget:
- text: "Paraguay ha'e peteĩ táva oĩva [MASK] retãme "
---

# BETO+gn-base-cased

[BETO-base-cased (pre-trained Spanish BERT model)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) fine-tuned for **Guarani** language modeling (Spanish + Guarani). Trained on Wikipedia + Wiktionary (~800K tokens).
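A hedged usage sketch (not in the original card); the repo id comes from this record and the masked sentence is the card's own widget example:

```python
from transformers import pipeline

# Usage sketch; repo id and masked sentence taken from this record.
fill = pipeline("fill-mask", model="mmaguero/beto-gn-base-cased")
print(fill("Paraguay ha'e peteĩ táva oĩva [MASK] retãme"))
```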
mmaguero/multilingual-bert-gn-base-cased
10ac6d4faec3774cacff34af52e917fd030828fc
2022-03-06T08:08:41.000Z
[ "pytorch", "bert", "fill-mask", "gn", "multilingual", "dataset:wikipedia", "dataset:wiktionary", "transformers", "license:mit", "autotrain_compatible" ]
fill-mask
false
mmaguero
null
mmaguero/multilingual-bert-gn-base-cased
4
null
transformers
19,099
---
language:
- gn
- multilingual
license: mit
datasets:
- wikipedia
- wiktionary
widget:
- text: "Paraguay ha'e peteĩ táva oĩva [MASK] retãme "
---

# mBERT+gn-base-cased (multilingual-BERT+gn-base-cased)

[BERT multilingual base model (cased, pre-trained BERT model)](https://huggingface.co/bert-base-multilingual-cased) fine-tuned for **Guarani** language modeling (104 languages + gn). Trained on Wikipedia + Wiktionary (~800K tokens).